T. McGrath Posted December 29, 2017 Posted December 29, 2017 11 hours ago, EdEarl said: AlphaGo requires about the same programming for a game as a person: explain the rules to a person, and program those same rules for AlphaGo. Strategy is learned by the AlphaGo AI the same way a person learns, by playing many games. Closer than many realize. The ability to learn is not an indication of intelligence, just clever programming. Intelligence begins when you apply what you have learned, and to more than just one thing. When you can show me a program that can play Chess/Go, drive me to work in congested traffic, and diagnose any medical problems I might have - without having to reprogram - then you will have achieved artificial intelligence, but doing just one thing (no matter how well) doesn't cut it. 3 hours ago, Strange said: Are you suggesting that humans are able to play without being told the rules? If not, what are you suggesting? Note that go is notoriously difficult because knowing the rules (which are extremely simple: you take turns to place stones on empty positions and capture an opponent's stone by surrounding it) doesn't tell you how to win. I'm not convinced that the Turing test, in itself, is that good a test. But some refinement of it could be. There are a number of systems that are claimed to have passed it. For example: http://www.bbc.com/news/technology-27762088 and http://www.zdnet.com/article/mits-artificial-intelligence-passes-key-turing-test/ Of course one can argue about whether they really passed, whether the test was carried out correctly, etc. But that is one of the problems with this as a test. It is subjective, and so any conclusion can be rejected for some reason. I'm saying that developing an application that does just one thing, no matter how well it does it, is not artificial intelligence. It is an Expert System. MIT has been trying to beat the Turing test since the 1960s, and failing. So I'm not surprised to see that, in their desperation, they made up their own test, which they could pass, and then misassociated it with Alan Turing. I agree with you that some refinement of the Turing test could be in order, but the rules/conditions of the test would have to be established first. Not after the fact, as is prone to happen with the media. The goal is not to fool the observer, but rather to make it so the observer cannot distinguish between human intelligence and artificial intelligence. The problem is that there is a subjective component to this test.
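As an aside on how simple those Go rules are to state mechanically: below is a minimal Python sketch of the capture rule (the board representation and function names are illustrative, not from any real Go engine). A group of stones is captured when a flood fill from one of its stones finds no adjacent empty points (liberties). That the whole rule fits in two short functions is exactly Strange's point: none of this tells you how to win.

```python
# A minimal sketch of Go's capture rule, assuming a list-of-lists board
# where each point is 'B', 'W', or None. Names are illustrative only.

def group_and_liberties(board, row, col):
    """Flood-fill from (row, col) to find a stone's group and its liberties."""
    color = board[row][col]
    group, liberties, frontier = set(), set(), [(row, col)]
    while frontier:
        r, c = frontier.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < len(board) and 0 <= nc < len(board[0]):
                if board[nr][nc] is None:
                    liberties.add((nr, nc))   # empty neighbor: a liberty
                elif board[nr][nc] == color:
                    frontier.append((nr, nc)) # same color: part of the group
    return group, liberties

def remove_if_captured(board, row, col):
    """Remove the group at (row, col) if it has no liberties; return stones taken."""
    group, liberties = group_and_liberties(board, row, col)
    if liberties:
        return 0
    for r, c in group:
        board[r][c] = None
    return len(group)
```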
dimreepr Posted December 29, 2017 Posted December 29, 2017 4 minutes ago, T. McGrath said: The ability to learn is not an indication of intelligence, just clever programming. Intelligence begins when you apply what you have learned, and to more than just one thing. When you can show me a program that can play Chess/Go, drive me to work in congested traffic, and diagnose any medical problems I might have - without having to reprogram - then you will have achieved artificial intelligence, but doing just one thing (no matter how well) doesn't cut it. Since it has been shown that all these things have been achieved, your objection is just a matter of (more powerful computers) time. Let's get back to the OP.
T. McGrath Posted December 29, 2017 Posted December 29, 2017 Just now, dimreepr said: Since it has been shown that all these things have been achieved, your objection is just a matter of (more powerful computers) time. With the advent of quantum computers, it may yet be achievable within the next generation or two. I certainly have not ruled out the possibility of developing artificial intelligence. I just don't think we are anywhere close ... yet. We are still developing software that is capable of only doing one thing, and that is not artificial intelligence. I don't care if you want to call it a "neural network" or "heuristic algorithms"; it is still only an Expert System until it can do more than one thing without being reprogrammed.
dimreepr Posted December 29, 2017 Posted December 29, 2017 2 minutes ago, T. McGrath said: With the advent of quantum computers, it may yet be achievable within the next generation or two. I certainly have not ruled out the possibility of developing artificial intelligence. I just don't think we are anywhere close ... yet. We are still developing software that is capable of only doing one thing, and that is not artificial intelligence. I don't care if you want to call it a "neural network" or "heuristic algorithms"; it is still only an Expert System until it can do more than one thing without being reprogrammed. Since, as I've pointed out, your list has been achieved by different computers, we don't need a quantum leap in computing to simply combine them, just a more powerful version of what we already have. Besides, I think the OP question is far more interesting than the semantics of this tangent, can we get back to it, please?
T. McGrath Posted December 29, 2017 Posted December 29, 2017 Just now, dimreepr said: Since, as I've pointed out, your list has been achieved by different computers, we don't need a quantum leap in computing to simply combine them, just a more powerful version of what we already have. Besides, I think the OP question is far more interesting than the semantics of this tangent, can we get back to it, please? You can't answer the OP's question without first defining AI. So far this thread has demonstrated a wide variety of definitions.
dimreepr Posted December 29, 2017 Posted December 29, 2017 The question is, should we allow it? So we can answer it with the proviso of which definition you choose. For instance, should we allow a sentient computer? Or, using your definition, should we allow it? What reasons are there for disallowing either? Personally, I prefer the question in the case where we can't tell the difference.
EdEarl Posted December 29, 2017 Posted December 29, 2017 "Should we allow it" seems academic. Militaries in the US, Russia, China, etc. will develop AI as they did the bomb. Moreover, corporations will develop automated factories because it cuts costs; it is a race to lower costs which none can avoid. The limit of factory automation is a factory with an automated tool and die maker that can change tooling on the factory floor to switch production from one product to another, for example from a Tesla Model 3 to a Model Y, or from a car to any widget, including military gear. Essentially, the limit of 3D printing is to print a factory that can make anything. The first step is lights-out factories. Quote autodesk.com Probably the most well-known lights-out manufacturing facility is FANUC in Japan. At FANUC (factory automated numerical control), robots produce other robots without the presence of humans. FANUC Robotics America Vice President Gary Zywiol said about FANUC's capabilities, "Not only is it lights out, we turn off the air conditioning and heat, too."
dimreepr Posted December 29, 2017 Posted December 29, 2017 3 minutes ago, EdEarl said: "Should we allow it" seems academic. Indeed, so the question becomes: what can we expect?
EdEarl Posted December 29, 2017 Posted December 29, 2017 1 hour ago, dimreepr said: Indeed, so the question becomes: what can we expect? With full automation (no jobs), the cost of things will be driven toward a minimum: cost -> 0.
tuco Posted December 29, 2017 Posted December 29, 2017 3 hours ago, dimreepr said: Indeed, so the question becomes: what can we expect? I would think that nobody yet knows all the aspects. One aspect I consider a certainty, which follows from the full automation mentioned in the previous post, is that society, and welfare systems in particular, will have to adapt. More here: The robot that takes your job should pay taxes, says Bill Gates - https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/ As a side note, I concur that the topic, in its current format, is probably more suited to the philosophy or politics section than to computer science.
FaithCrime Posted December 29, 2017 Author Posted December 29, 2017 (edited) Whether a top university or a well-known inventor/company invents AI, it will likely be a bot that replies in a standardized manner, just like Eugene Goostman, which no 13-year-old could ever learn to speak like unless they had an IQ of 250. I really admire the fact that people still strive to learn such complex algorithms. But what if the reason AI cannot be invented is that we are looking too deeply into computer science and not noticing something simple but very basic about it? And again, if I were to suggest "what if" programming, it would still lead to one mind speaking. How come in every debate there are always ways to prove the attacker wrong? Though we say AI is very harmful, why haven't we got proof that it's harmful? Many scientists' research says it's bad news because of the impact it will have, yet they are the ones who tend to show deep affection for researching it. Edited December 29, 2017 by FaithCrime
EdEarl Posted December 29, 2017 Posted December 29, 2017 @FaithCrime -- I don't think AI or AGI is necessarily harmful. Why do you?
tuco Posted December 29, 2017 Posted December 29, 2017 Let's say it's not intrinsically harmful; however, because of its, let's say, "super-human" capabilities, could it become a threat to humans? Let's say the question is: what can humans be better than AI at? I'd say nothing. So in this sense, depending on the authority given to AI, it can be viewed as a threat.
EdEarl Posted December 29, 2017 Posted December 29, 2017 @tuco I agree that it is a potential threat, but so are dogs. I think AGI will consider us insignificant, and do what it pleases. Since the atmosphere will be unnecessary to it, and perhaps corrosive, it might launch into space and be forever gone. It might treat us as pets and take care of us. Who knows?
tuco Posted December 29, 2017 Posted December 29, 2017 (edited) With dogs, the nature of the threat is different. Dogs can harm us but cannot be superior to us. The danger coming from AI, according to some, is that it can find exploits at a rate humans cannot match. In other words, humans cannot outsmart AI, hence the potential threat. Of course, we are in the realm of sci-fi, but we can imagine a number of applications where AI would be in charge, probably the most common being military, and it could endanger humans. Here we are back to Asimov's Laws and such, philosophy. Personally, I am optimistic, and the sooner AI and robots take over mundane human tasks the better, because then humans will have time and energy to devote to personal growth, social matters, families, or politics, for example. However, the changes in our society will be enormous, I would say revolutionary, and revolutions tend to carry a certain degree of risk. Edited December 29, 2017 by tuco
thoughtfuhk Posted January 3, 2018 Posted January 3, 2018 On 12/29/2017 at 7:48 AM, T. McGrath said: The ability to learn is not an indication of intelligence, just clever programming. Intelligence begins when you apply what you have learned, and to more than just one thing. When you can show me a program that can play Chess/Go, drive me to work in congested traffic, and diagnose any medical problems I might have - without having to reprogram - then you will have achieved artificial intelligence, but doing just one thing (no matter how well) doesn't cut it. I'm saying that developing an application that does just one thing, no matter how well it does it, is not artificial intelligence. It is an Expert System. MIT has been trying to beat the Turing test since the 1960s, and failing. So I'm not surprised to see that, in their desperation, they made up their own test, which they could pass, and then misassociated it with Alan Turing. I agree with you that some refinement of the Turing test could be in order, but the rules/conditions of the test would have to be established first. Not after the fact, as is prone to happen with the media. The goal is not to fool the observer, but rather to make it so the observer cannot distinguish between human intelligence and artificial intelligence. The problem is that there is a subjective component to this test. Deep Learning can do all the things you describe above. Deep Learning algorithms are very general, and this is why we see Deep Learning doing medical diagnosis, working in congested traffic (self-driving cars), etc. Notably, no human can sit down and program the billions of parameters that these Deep Learning models learn automatically from scratch! These Deep Learning models are becoming more and more general by the day, too. Here is a model which already somewhat combines them all: arXiv: One Model To Learn Them All
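To make "one model, many tasks" concrete, here is a minimal toy sketch in PyTorch of the shared-body, task-specific-heads idea; the layer sizes and task names are illustrative assumptions on my part, not the architecture from the paper above.

```python
# A toy multi-task network: one shared "body" learns a representation,
# and small task-specific "heads" map it to each task's output.
# All dimensions and task names below are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, input_dim=128, hidden_dim=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        self.heads = nn.ModuleDict({
            "diagnosis": nn.Linear(hidden_dim, 10),   # e.g. 10 disease classes
            "steering":  nn.Linear(hidden_dim, 1),    # e.g. a steering angle
            "go_move":   nn.Linear(hidden_dim, 361),  # e.g. 19x19 board points
        })

    def forward(self, x, task):
        return self.heads[task](self.body(x))

model = MultiTaskNet()
features = torch.randn(1, 128)           # stand-in for preprocessed input
print(model(features, "go_move").shape)  # torch.Size([1, 361])
```

The point of the sketch is the design choice: the shared body is trained on every task, so knowledge can transfer between them, while only the thin heads are task-specific.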
dimreepr Posted January 3, 2018 Posted January 3, 2018 4 hours ago, thoughtfuhk said: Deep Learning can do all the things you describe above. Deep Learning algorithms are very general, and this is why we see Deep Learning doing medical diagnosis, working in congested traffic (self-driving cars), etc. Notably, no human can sit down and program the billions of parameters that these Deep Learning models learn automatically from scratch! These Deep Learning models are becoming more and more general by the day, too. Here is a model which already somewhat combines them all: arXiv: One Model To Learn Them All But as you say on your similar topic, it's still a decade off. So the question remains: what can we expect? Good, bad, or indifferent, and do the positives outweigh the potential negatives?
EdEarl Posted January 3, 2018 Posted January 3, 2018 48 minutes ago, dimreepr said: But as you say on your similar topic, it's still a decade off. So the question remains: what can we expect? Good, bad, or indifferent, and do the positives outweigh the potential negatives? I think we can expect advances on two fronts, software and hardware. The software we enjoy today is the product of various software object libraries that do many things; I believe neural net libraries are currently being developed that do various things, as thoughtfuhk mentioned of Deep Learning. And there are several AI engines being trained for various tasks. As hardware improves, one AI engine will be capable of learning more, and multiple network objects will be combined, for example sight, hearing, motor control, smell, medical expertise, biochemistry, etc. It seems likely that a single AI network will be capable of learning all scientific knowledge, but perhaps there is some as yet unknown limit. I think the three laws of robotics may be taught to an AI like any other lesson. However, whether we can compel an AI to obey them or any human law is an open question. We are working with computers, but they will be smart and may be able to circumvent anything we try to make compelling.
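For what combining such "network objects" might look like in the simplest case, here is a hedged PyTorch sketch: each sense gets its own encoder, and a fusion layer learns from their concatenated features. All dimensions and modality names are illustrative assumptions, not any particular system's design.

```python
# A toy fusion of two modality encoders: vision and hearing features are
# projected to a common size, concatenated, and passed through a fusion
# layer that can learn cross-modal correlations (e.g. lips and speech).
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, image_dim=1024, audio_dim=256, fused_dim=128):
        super().__init__()
        self.vision = nn.Sequential(nn.Linear(image_dim, fused_dim), nn.ReLU())
        self.hearing = nn.Sequential(nn.Linear(audio_dim, fused_dim), nn.ReLU())
        self.fusion = nn.Linear(2 * fused_dim, fused_dim)

    def forward(self, image_feats, audio_feats):
        v = self.vision(image_feats)
        h = self.hearing(audio_feats)
        return self.fusion(torch.cat([v, h], dim=-1))

net = FusionNet()
out = net(torch.randn(1, 1024), torch.randn(1, 256))
print(out.shape)  # torch.Size([1, 128])
```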
dimreepr Posted January 3, 2018 Posted January 3, 2018 3 minutes ago, EdEarl said: I think the three laws of robotics may be taught to an AI like any other lesson. However, whether we can compel an AI to obey them or any human law is an open question. That's an oxymoron... 6 minutes ago, EdEarl said: We are working with computers, but they will be smart and may be able to circumvent anything we try to make compelling. That could only occur with sentience, and that's a whole different question...
EdEarl Posted January 3, 2018 Posted January 3, 2018 2 minutes ago, dimreepr said: That's an oxymoron... That could only occur with sentience, and that's a whole different question... Since we don't know exactly how we are conscious and sentient, I have doubts that we can compel an AI. What you claim seems plausible, but I'm not as convinced as you are.
dimreepr Posted January 3, 2018 Posted January 3, 2018 2 minutes ago, EdEarl said: Since we don't know exactly how we are conscious and sentient, I have doubts that we can compel an AI. What you claim seems plausible, but I'm not as convinced as you are. I think, therefore I am...
EdEarl Posted January 3, 2018 Posted January 3, 2018 I am convinced we are conscious and sentient, but not convinced we know how to build an AI that is conscious and sentient. In fact, I'm pretty sure we will not know until we do it. Are we qualitatively different from other beings with brains, for example chimps, mice, and ants? Or are we only quantitatively different, just with a larger brain? Elephants have larger brains than us, yet they don't seem to have our capabilities; is that a false impression? Moreover, women have slightly smaller brains than men, about 11% smaller, yet there is no difference in IQ scores on average. Thus, our capabilities compared to animals seem to be a qualitative difference, and no one is sure what that quality is. This ignorance makes me less convinced than you about our ability to control AI. Someone may build an AGI with consciousness and sentience without knowing it. I think there may be unknown unknowns, and we may not know the extent of our ignorance.
Endy0816 Posted January 3, 2018 Posted January 3, 2018 5 minutes ago, dimreepr said: That could only occur with sentience, and that's a whole different question... Not all are set up to do it, but it is possible for programs to find bugs within themselves.
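A small, concrete instance of that, far short of an AI rewriting itself: Python's standard doctest module lets a program check its own functions against the examples embedded in its docstrings. In this sketch (the function and its bug are invented for illustration), the deliberate off-by-one error is caught when the module runs itself.

```python
# A minimal sketch of a program "finding bugs within itself": doctest runs
# the examples in this module's own docstrings and reports any that fail.
import doctest

def average(xs):
    """Return the arithmetic mean of xs.

    >>> average([2, 4, 6])
    4.0
    """
    return sum(xs) / (len(xs) - 1)  # off-by-one bug: should be len(xs)

if __name__ == "__main__":
    results = doctest.testmod()  # the module tests itself
    print(f"{results.failed} of {results.attempted} self-checks failed")
```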
T. McGrath Posted January 4, 2018 Posted January 4, 2018 19 hours ago, thoughtfuhk said: Deep Learning can do all the things you describe above. Deep Learning algorithms are very general, and this is why we see Deep Learning doing medical diagnosis, working in congested traffic (self-driving cars), etc. Notably, no human can sit down and program the billions of parameters that these Deep Learning models learn automatically from scratch! These Deep Learning models are becoming more and more general by the day, too. Here is a model which already somewhat combines them all: arXiv: One Model To Learn Them All Deep Learning is a programming methodology. It isn't even a program itself. Wikipedia defines Deep Learning as "part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms." Hence, Deep Learning can do none of the things I listed, and it certainly can't pass a Turing test. 13 hours ago, EdEarl said: I think we can expect advances on two fronts, software and hardware. The software we enjoy today is the product of various software object libraries that do many things; I believe neural net libraries are currently being developed that do various things, as thoughtfuhk mentioned of Deep Learning. And there are several AI engines being trained for various tasks. As hardware improves, one AI engine will be capable of learning more, and multiple network objects will be combined, for example sight, hearing, motor control, smell, medical expertise, biochemistry, etc. It seems likely that a single AI network will be capable of learning all scientific knowledge, but perhaps there is some as yet unknown limit. I think the three laws of robotics may be taught to an AI like any other lesson. However, whether we can compel an AI to obey them or any human law is an open question. We are working with computers, but they will be smart and may be able to circumvent anything we try to make compelling. There are certainly a lot of people out there claiming they have AI. The reality is that AI does not yet exist. Everyone who claims they have created AI, hasn't. "Artificial Intelligence" is one of the most misused terms in all of computer science. Teaching a computer to learn is not intelligence. Programming a computer to do one thing, no matter how well it does it, is not artificial intelligence. I'm afraid that when artificial intelligence does eventually get invented, we won't recognize it, because we will have called everything else AI instead.