cladking Posted April 4, 2016 Perhaps if you ever provided any evidence for your claims, they might be worth considering. As it is, they can be dismissed as empty posturing. So... ...is your contention that a computer can't drive from food source to food source, that it isn't hungry, or that it has no little feet to glue to the pedals? Maybe if we called it "Ford" it could drive about? If we can't define "intelligence", then we wouldn't recognize it when we invented it. I would say that until a machine can expand on existing knowledge or technology, any other test is pretty much meaningless or mere semantics. Maybe you're still missing the point.
Strange Posted April 4, 2016 So... ...is your contention that a computer can't drive from food source to food source, that it isn't hungry, or that it has no little feet to glue to the pedals? What on Earth makes you think that? Where have I made any such claim? (I'm not even sure what it means.) My only contention is that you repeatedly make grand claims about what "we" cannot know and your secret knowledge, such as your recent: "intelligence" doesn't exist at all as it is that we misapprehend its nature. How is it that you alone are in such a privileged position to know things that are hidden from the rest of the human race? How is it that, despite your enormous wisdom and magnificent insights, you are never able to do anything other than make unsupported assertions? Why can you never provide any evidence or reasoned argument? All you do is repeat the same assertions over and over as if they were fact, rather than just your personal beliefs. Then you decide to leave the forum for a while. Only to pop up again a few months later making the same unsubstantiated boasts. It is tedious and bad mannered. Please stop doing it.
EdEarl Posted April 4, 2016 This statement makes no sense: "intelligence" doesn't exist at all as it is that we misapprehend its nature. You say, "intelligence doesn't exist," and in the same sentence say it does when you say, "we misapprehend its nature." Strange challenged the statement, and you dance around without answering his question, cladking. You might rewrite the sentence to make it clearer to us, or something to the point. If we don't understand you, repeating the words louder doesn't help (I know because I've done that all my life; it's a hard habit to break). Think of another way of saying what you mean.
cladking Posted April 4, 2016 What on Earth makes you think that? Where have I made any such claim? (I'm not even sure what it means.) You're simply ignoring the point. It doesn't matter to you what intelligence is because you know you have it in abundance. How can you not be intelligent when you can learn so much and tell the teacher anything he wants to hear? You know that any creature that would allow itself to be glued to a car can't be intelligent. How is it that you alone are in such a privileged position to know things that are hidden from the rest of the human race? I never said nor do I believe such a thing. The only real difference between us is you can't imagine that language is as important as thought. How is it that, despite your enormous wisdom and magnificent insights, you are never able to do anything other than make unsupported assertions? Again you're saying things that aren't real. Even were they real, you can't understand any assertion if you aren't even trying. There's simply nothing complex about anything I'm saying. Why can you never provide any evidence or reasoned argument? Apparently the only "reasoning" you understand is math. There are no equations that govern how an insect navigates the car to which it is glued. Then you decide to leave the forum for a while. I can handle the gainsaying and lack of understanding, but it's much harder to handle posts that disappear. It is tedious and bad mannered. Please stop doing it. It's quite rude of you to point it out. Your refusal to address my points may be even more tedious. I address ALL of your points and you almost never address ANY of mine. I asked you earlier how we could invent artificial intelligence or machine intelligence if we don't know what intelligence even is. This statement makes no sense, You say, "intelligence doesn't exist," and in the same sentence say it does when you say, "we misapprehend its nature."
Strange challenged the statement, and you dance around without answering his question, cladking. You might rewrite the sentence to make it clearer to us, or something to the point. If we don't understand you, repeating the words louder doesn't help (I know because I've done that all my life; it's a hard habit to break). Think of another way of saying what you mean. "It's not so much that "intelligence" doesn't exist at all as it is that we misapprehend its nature." I do have a "shorthand" way of talking that can be confusing. I'm merely suggesting that "intelligence" as perceived by most people isn't the result of a quick wit or the ability to think deeply or to come up with new ideas. Most people consider "intelligence" to be a state, a condition that applies to some people more than others and, no doubt, more to Strange than almost anyone. But this is a mistaken idea that is derived from thought and an understanding of many centuries of other peoples' thinking. It is not consistent with facts like that chimps can beat college students in some games involving "intelligence". It's not consistent with many established facts. These are the same facts ignored over and over because people tend to interpret the reality right out of the facts. "Intelligence", to the degree it exists at all, is an event. I prefer to call it a "manifestation of cleverness" because anyone or anything can be clever, but "intelligent" people can in some cases rarely be clever. Cleverness is usually easy to recognize because it's an idea that arises spontaneously and usually from pre-existing knowledge. We are talking about AI without even understanding the nature of "i". The Turing test is a misdirection because all it does is program a computer to manipulate words. Of what value are words if people can't understand or speak to such simple concepts as I am raising? There is nothing complicated about the idea that humans aren't intelligent.
It seems to me that any halfway "intelligent" person should be able to grasp the idea even if he doesn't agree. At the very least it might open up a dialogue about the validity of the current direction of research or the nature of what we do call "intelligence".
dimreepr Posted April 5, 2016 We are talking about AI without even understanding the nature of "i". Leaving aside the nature of intelligence, your confusion is because of the "I"; the rest of us are discussing AS (artificial sentience).
EdEarl Posted April 5, 2016 "Intelligence", to the degree it exists at all, is an event. I prefer to call it a "manifestation of cleverness" because anyone or anything can be clever, but "intelligent" people can in some cases rarely be clever. Cleverness is usually easy to recognize because it's an idea that arises spontaneously and usually from pre-existing knowledge. @cladking I selected this paragraph as an example; it is characteristic of your writing. I'd like you to relate intelligence to something measurable, but you elect to relate it to cleverness; both are ambiguous. Consequently, your writing remains incomprehensible. I'll briefly discuss features of our brains that can be measured, such as memory, speed, and pattern recognition. We have short-term and long-term memory, and we can compare the ability of various people to remember by showing them a list of words or numbers and seeing how many they remember after 6, 60, and 3600 seconds. We process visual, auditory, olfactory, etc. data, and it is possible to measure the rate at which we can understand these inputs. Our brains are so complex that measuring anything about them is tricky, because unintended consequences are likely to confuse one's results. To minimize the effects of complexity, we need to measure simple things, not intelligence or cleverness.
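The recall test described above can be sketched in a few lines of code. This is only an illustration of the idea (show a list, then score recall at increasing delays); the word list, delays, response data, and scoring rule here are invented for the example and are not any standard psychometric protocol:

```python
# Hypothetical sketch of a word-list recall test: a subject studies a
# list, then we score the fraction correctly recalled at several delays.
# All words and responses below are made up for illustration.

STUDY_LIST = ["apple", "river", "candle", "wolf", "mirror", "seven"]

def recall_score(recalled, study_list=STUDY_LIST):
    """Fraction of the study list correctly recalled (case-insensitive,
    duplicates and intrusions ignored)."""
    hits = {w.lower() for w in recalled} & set(study_list)
    return len(hits) / len(study_list)

# Simulated responses at 6, 60, and 3600 seconds after study:
responses = {
    6: ["apple", "river", "candle", "wolf", "mirror"],
    60: ["apple", "river", "wolf"],
    3600: ["apple", "wolf"],
}

for delay, words in sorted(responses.items()):
    print(f"{delay:>5}s: recalled {recall_score(words):.0%}")
```

A real experiment would of course control list length, word frequency, and rehearsal between tests; the point is only that recall is the kind of simple, countable quantity the paragraph argues we should measure.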
cladking Posted April 5, 2016 Leaving aside the nature of intelligence, your confusion is because of the "I"; the rest of us are discussing AS (artificial sentience). Not really. I doubt anyone here would feel threatened by a sentient computer with the IQ of an oak tree. As you probably know, most researchers have given up on the idea of AS and there is more concentration on "simulated intelligence" (AI). By whatever name anyone chooses, I seriously doubt we'll see sentience or intelligence until we better understand the nature of both. Perhaps 20 years is a little optimistic.
EdEarl Posted April 5, 2016 Not really. I doubt anyone here would feel threatened by a sentient computer with the IQ of an oak tree. As you probably know, most researchers have given up on the idea of AS and there is more concentration on "simulated intelligence" (AI). By whatever name anyone chooses, I seriously doubt we'll see sentience or intelligence until we better understand the nature of both. Perhaps 20 years is a little optimistic. AI is already being used to solve real problems, and it will continue to improve, regardless of whether we can define intelligence or sentience. We should be concerned that our stepwise refinement of AI will result in an uncontrollable sentient intelligence. I'm not saying it will occur, merely that the possibility exists. Whether it occurs in 20 years or 2000 should not change our concern, though it changes the urgency.
cladking Posted April 5, 2016 I selected this paragraph as an example; it is characteristic of your writing. I'd like you to relate intelligence to something measurable, but you elect to relate it to cleverness; both are ambiguous. Consequently, your writing remains incomprehensible. I did say what I mean in another way, trying to relate it to something everyone can understand. Since intelligence doesn't really exist as it is commonly understood, it is best to simply not use the term. However, we can all see that some people are more apt to make connections, or faster to do so. Some individuals can achieve results others can't. This is primarily the result of "ideas", and those who think more quickly or more clearly are more prone to having simple or complex ideas. The generation of these ideas is what I am calling "cleverness", and it is more related to sentience than to "intelligence". I'll briefly discuss features of our brains that can be measured, such as memory, speed, and pattern recognition. We have short-term and long-term memory, and we can compare the ability of various people to remember by showing them a list of words or numbers and seeing how many they remember after 6, 60, and 3600 seconds. We process visual, auditory, olfactory, etc. data, and it is possible to measure the rate at which we can understand these inputs. Our brains are so complex that measuring anything about them is tricky, because unintended consequences are likely to confuse one's results. To minimize the effects of complexity, we need to measure simple things, not intelligence or cleverness. If you twist my arm I'll agree that there really is such a thing as "intelligence", but it's simply not what most people believe it is. You've done a reasonably good job of outlining its nature here. True "intelligence" is more related to speed of thought, and speed is rarely important in results by itself.
Without knowledge, experience, and understanding of relevant considerations, intelligence has little utility. If one's experience can't be extrapolated to the situation, then his intelligence is of no value. Yes, I tend to just talk louder when not understood. It's a hard habit to break and easily acquired.
EdEarl Posted April 5, 2016 @cladking I now think I understand what you have been saying, and basically concur.
dimreepr Posted April 5, 2016 Not really. I doubt anyone here would feel threatened by a sentient computer with the IQ of an oak tree. As you probably know, most researchers have given up on the idea of AS and there is more concentration on "simulated intelligence" (AI). By whatever name anyone chooses, I seriously doubt we'll see sentience or intelligence until we better understand the nature of both. Perhaps 20 years is a little optimistic. Since sentience seems to be an emergent quality of intelligence and given the OP, there would be no threat from a stupid but sentient machine, or from a hyper-intelligent but not sentient machine that simply follows its programming; and since sentience is an emergent quality of intelligence, we don't need to understand either for it to happen.
cladking Posted April 5, 2016 (edited) Since sentience seems to be an emergent quality of intelligence... Then a person who is twice as intelligent is twice as conscious? If a person is 1000 times smarter than an insect, then it should follow he's 1000 times more conscious or sentient. Are the chimps that beat college students in games of intelligence more aware than the college students? Are elephants which can paint self-portraits more intelligent or more sentient than humans who can't? What about children? Are they less aware? How about the individual beaver which invented dam building? If the terms don't fit reality, then the terms must be jettisoned. If you're using terms that aren't reflected in the real world, then how can you model the real world in order to deal with it or invent "AI"? Our terms and language have been in use for many centuries, but we now know that they don't describe reality very well. There simply aren't terms to compare the nature of a self-portrait-painting elephant with a child. This isn't a failure of the child or the elephant but a failure of language, and using this language is causing us to see a reality that doesn't exist. It is interfering with our ability to invent machine intelligence*. * I feel justified using the term here because when speed of thought is orders of magnitude faster, cleverness can essentially become a state rather than an event. Edited April 5, 2016 by cladking
dimreepr Posted April 5, 2016 (edited) Then a person who is twice as intelligent is twice as conscious? If a person is 1000 times smarter than an insect, then it should follow he's 1000 times more conscious or sentient. Are the chimps that beat college students in games of intelligence more aware than the college students? Are elephants which can paint self-portraits more intelligent or more sentient than humans who can't? What about children? Are they less aware? How about the individual beaver which invented dam building? If the terms don't fit reality, then the terms must be jettisoned. If you're using terms that aren't reflected in the real world, then how can you model the real world in order to deal with it or invent "AI"? Our terms and language have been in use for many centuries, but we now know that they don't describe reality very well. There simply aren't terms to compare the nature of a self-portrait-painting elephant with a child. This isn't a failure of the child or the elephant but a failure of language, and using this language is causing us to see a reality that doesn't exist. It is interfering with our ability to invent machine intelligence*. * I feel justified using the term here because when speed of thought is orders of magnitude faster, cleverness can essentially become a state rather than an event. You seem confused about the word "emergent", not to mention "intelligent"; is an ant or bee intelligent or sentient? Moreover, sentience isn't graduated; you either are or you aren't. Edited April 5, 2016 by dimreepr
EdEarl Posted April 5, 2016 Then a person who is twice as intelligent is twice as conscious? If a person is 1000 times smarter than an insect, then it should follow he's 1000 times more conscious or sentient. Are the chimps that beat college students in games of intelligence more aware than the college students? Are elephants which can paint self-portraits more intelligent or more sentient than humans who can't? What about children? Are they less aware? How about the individual beaver which invented dam building? If the terms don't fit reality, then the terms must be jettisoned. If you're using terms that aren't reflected in the real world, then how can you model the real world in order to deal with it or invent "AI"? Our terms and language have been in use for many centuries, but we now know that they don't describe reality very well. There simply aren't terms to compare the nature of a self-portrait-painting elephant with a child. This isn't a failure of the child or the elephant but a failure of language, and using this language is causing us to see a reality that doesn't exist. It is interfering with our ability to invent machine intelligence*. * I feel justified using the term here because when speed of thought is orders of magnitude faster, cleverness can essentially become a state rather than an event. I think everyone agrees the brain is wonderfully complex. On the other hand, not as many think about how it is complex. Those who think about brain complexity include neurosurgeons, psychiatrists, physiologists, AI programmers, and a few others. The brain has inputs, outputs, and internal processes. Inputs are the senses and sensations, such as sight, hearing, touch, taste, and smell, and internal sensations such as emotion, nutrition, and being, which include chemical receptors and neural feedback circuits. An elephant has more muscle mass than a person, and more individual muscles. The elephant trunk has 150,000 muscles.
The human body has about 750. Thus, the amount of brain needed to control an elephant's muscles must be much larger than the similar brain tissue in a human. That is only one example of the differences. Thus, comparing brains of two different species by "intelligence" is apples and oranges; each species has the intelligence necessary for its survival, different from other species. It seems to me controlling 150,000 muscles to produce any work of art is an act of great intelligence. We know little about how an elephant perceives the world; for example, do they see perspective (i.e., 3D) the same way we see it? What do they smell? What do they taste? Etc. Inter-species comparison of intelligence has limited utility.
cladking Posted April 5, 2016 I think everyone agrees the brain is wonderfully complex. On the other hand, not as many think about how it is complex. Those who think about brain complexity include neurosurgeons, psychiatrists, physiologists, AI programmers, and a few others. The brain has inputs, outputs, and internal processes. Inputs are the senses and sensations, such as sight, hearing, touch, taste, and smell, and internal sensations such as emotion, nutrition, and being, which include chemical receptors and neural feedback circuits. I believe the brain is quite simple in its own very complex way. It only seems complex to us because of our perspective. We attribute characteristics of language to the individual because we must understand it through the only medium we have to think: language. People have even gone so far as to say "I think therefore I am" when the reality is we must first exist and learn language in order to think at all. Thought doesn't prove one's existence, merely that language exists. Even the lowest life forms know they exist and need no founding principles to avoid predators. I believe there is a natural integration of senses, mind, and body in animals, and this is the operating system of the animal brain. This operating system is also the animal language, which they use to communicate within and across species (to a more limited extent). Humans must unlearn this in order to acquire our language. The problem here is very, very simple: everything we are thinking, all our knowledge, and the very means we use to think are obscured by the perspective generated by language. When we think about sentience, AI, and intelligence we are actually pondering language and its effects rather than consciousness or cleverness. When we try to program or design a computer to be "intelligent", we are actually programming it to manipulate language. It seems very improbable that sentience or consciousness will simply emerge through such an ability.
That is only one example of the differences. Thus, comparing brains of two different species by "intelligence" is apples and oranges; each species has the intelligence necessary for its survival, different from other species. I'm in general agreement, but I still don't like the term "intelligence". I'd prefer to say that most cleverness is specific to both the individual and his experience, as well as to the species. An artist elephant is far less likely to invent new techniques or processes to paint than a human artist, yet more likely than a human cosmologist. But almost any beaver is far more likely to come up with a new means of building a wooden dam with no lumber than any human. I seriously doubt modeling the human brain is even possible until we can distill the structure from the operating system we call language. I'm not so sure the human brain is even best suited to machine intelligence. It's quite possible that far simpler brains, like those of insects, would be more suitable. If an insect can navigate a car, then why can't it serve as the formatting for machine intelligence?
EdEarl Posted April 5, 2016 I believe the brain is quite simple in its own very complex way. It only seems complex to us because of our perspective. We attribute characteristics of language to the individual because we must understand it through the only medium we have to think: language. People have even gone so far as to say "I think therefore I am" when the reality is we must first exist and learn language in order to think at all. Thought doesn't prove one's existence, merely that language exists. Even the lowest life forms know they exist and need no founding principles to avoid predators. I believe there is a natural integration of senses, mind, and body in animals, and this is the operating system of the animal brain. This operating system is also the animal language, which they use to communicate within and across species (to a more limited extent). Humans must unlearn this in order to acquire our language. The problem here is very, very simple: everything we are thinking, all our knowledge, and the very means we use to think are obscured by the perspective generated by language. When we think about sentience, AI, and intelligence we are actually pondering language and its effects rather than consciousness or cleverness. When we try to program or design a computer to be "intelligent", we are actually programming it to manipulate language. It seems very improbable that sentience or consciousness will simply emerge through such an ability. I'm in general agreement, but I still don't like the term "intelligence". I'd prefer to say that most cleverness is specific to both the individual and his experience, as well as to the species. An artist elephant is far less likely to invent new techniques or processes to paint than a human artist, yet more likely than a human cosmologist. But almost any beaver is far more likely to come up with a new means of building a wooden dam with no lumber than any human.
I seriously doubt modeling the human brain is even possible until we can distill the structure from the operating system we call language. I'm not so sure the human brain is even best suited to machine intelligence. It's quite possible that far simpler brains, like those of insects, would be more suitable. If an insect can navigate a car, then why can't it serve as the formatting for machine intelligence? Some good points and some weak.
The Think3r Posted April 6, 2016 If Artificial Intelligence were in existence at this exact moment, there are several possibilities of what would happen: 1. Classic Terminator problem 2. Bio-mechanical equality 3. Advanced rate of development
EdEarl Posted April 6, 2016 If Artificial Intelligence were in existence at this exact moment, there are several possibilities of what would happen: 1. Classic Terminator problem 2. Bio-mechanical equality 3. Advanced rate of development The common use of the term AI is different than yours; IMO you refer to Artificial General Intelligence; it's just a semantic difference. Although, some people see AI and deny it, which is called the AI effect. I've heard people refer to current AI systems as toys; although some AI systems do financial transactions for people with big money, so those people probably don't consider AI toys. Some have compared current AI to a cockroach and say the cockroach is more intelligent, but there is little to compare when you look at the details of each one. Nonetheless, I have to agree; although the best AI may outperform a cockroach.
andrewcellini Posted April 6, 2016 (edited) If Artificial Intelligence were in existence at this exact moment, there are several possibilities of what would happen: 1. Classic Terminator problem 2. Bio-mechanical equality 3. Advanced rate of development AI exists at this moment, and thankfully the first of your possibilities hasn't happened yet. Here's a recent example of an AI which learned to play the board game Go, beating a master player 5 out of 5 games: http://www.nature.com/news/google-ai-algorithm-masters-ancient-game-of-go-1.19234 I agree with Ed; it seems you're conflating specific artificial intelligence and artificial general intelligence. This AI isn't going to take over the world, but it will play a mean game of Go. Edited April 6, 2016 by andrewcellini
dimreepr Posted April 6, 2016 (edited) AI exists at this moment, and thankfully the first of your possibilities hasn't happened yet. Here's a recent example of an AI which learned to play the board game Go, beating a master player 5 out of 5 games: http://www.nature.com/news/google-ai-algorithm-masters-ancient-game-of-go-1.19234 I agree with Ed; it seems you're conflating artificial intelligence and artificial general intelligence. This AI isn't going to take over the world, but it will play a mean game of Go. As Ed suggested, that is a semantic tangent to the OP, but the question remains "what is there to fear?", to which the answer seems to be a great deal; admittedly not from this particular computer, but imagine a future iteration that can think for itself and is able to outwit our best strategists. Edited April 6, 2016 by dimreepr
andrewcellini Posted April 6, 2016 As Ed suggested, that is a semantic tangent to the OP, but the question remains "what is there to fear?", to which the answer seems to be a great deal; admittedly not from this particular computer, but imagine a future iteration that can think for itself and is able to outwit our best strategists. Sure, that's a possibility, and I left open such a possibility with the inclusion of "yet."
EdEarl Posted April 6, 2016 Suppose there is a sentient artificial man (sam) who is unemotional and logical, more or less like humans, except quicker. Sam might decide to be theist, agnostic, or atheist. Except for religious reasons, sam would have no reason AFAIK to kill people. If his religion were bigoted towards a group, sam might kill them and get away with it, but most would live. If sam were an agnostic Buddhist, not even worms would be killed. If sam were rational, I can think of no reason anyone would be killed. And other scenarios don't lead to human extinction, AFAIK. If the inventors gave sam emotions, then who knows what would happen.
andrewcellini Posted April 6, 2016 (edited) Suppose there is a sentient artificial man (sam) who is unemotional and logical, more or less like humans, except quicker. Sam might decide to be theist, agnostic, or atheist. Except for religious reasons, sam would have no reason AFAIK to kill people. What if Sam finds a human, or all humans, threatening to his survival? I'd think that would be a solution in the space of possible solutions to the problem of a human pest. I'm not sure how beneficial such a decision would be, but such actions may carry high utility in certain contexts, perhaps if Sam is threatened with being permanently shut down by his creator. Edited April 6, 2016 by andrewcellini
EdEarl Posted April 7, 2016 If sam is truly clever, it would eliminate the threat without being implicated in anything illegal. If it isn't that clever, maybe sam should not exist.
The Think3r Posted April 7, 2016 Depending on how they are treated, they could be great friends/allies. The same could be said of them as enemies.