dr.syntax Posted October 15, 2009 (edited) Many renowned scientists believe this event is all but inevitable and that its consequences are so far-reaching that it will change our world profoundly. We will be living in the post-human era, if we manage to survive at all. Others argue it will go a very long way toward solving mankind's most difficult problems and may even lead to human immortality. This event is predicted by many to occur within 5 to 30 years. Many of the scientists I am referring to are some of the most respected out there. To name a few, they include Stephen Hawking, Bill Joy, and Raymond Kurzweil. Wikipedia has an excellent article about all this at: [ http://en.wikipedia.org/wiki/Technological_singularity ]. About 20 science PhDs are cited, including some of the great innovators and inventors in computer science. Many are what may be called FUTURISTS, such as Vernor Vinge, mathematics professor and computer scientist, but they are also science PhDs. The wiki piece also provides many links to the people referenced, to prestigious organizations from around the world, and to other articles and such. You won't be bored reading it. ...Dr.Syntax Edited October 15, 2009 by dr.syntax
bascule Posted October 15, 2009 I think a singularity is inevitable following the creation of recursively self-improving artificial intelligence. At that point we'll have effectively one-upped biology and climbed one more rung up the abstraction ladder. As for whether or not we'll create recursively self-improving artificial intelligence in our lifetimes, that remains to be seen.
dr.syntax Posted October 15, 2009 Author (edited) I am going to use this response as an opportunity to mention a conversation between Stanislaw Ulam and John von Neumann [one of the premier mathematicians of the 20th century, with a long list of accomplishments]. Both of these men worked on the MANHATTAN Project, with von Neumann being one of the principal players in that endeavor. My point being that these are serious-minded scientists, one of whom is considered among the most important scientists of the 20th century. One of the first uses of the term singularity in the context of technological progress comes from a conversation between Ulam and von Neumann. Quoting Ulam: "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some ESSENTIAL SINGULARITY in the history of the race beyond which human affairs, as we know them, could not continue." The year 1958 is cited. Both of these men have fascinating histories which are available by clicking on their names where they appear highlighted in the wikipedia article at: http://en.wikipedia.org/wiki/Technological_singularity . If you have the time to look at this article, it is an EYE OPENER. ...Dr.Syntax Edited October 15, 2009 by dr.syntax
Shadow Posted October 15, 2009 I was wondering, is there a proof of some sort that a recursively self-improving AI can be made? If so, can it be created by us (relatively inferior beings)?
dr.syntax Posted October 15, 2009 Author I was wondering, is there a proof of some sort that a recursively self-improving AI can be made? If so, can it be created by us (relatively inferior beings)? REPLY: Not only possible but inevitable, argue many of the people cited in that wiki article. Please read it; those questions are discussed at length by the people involved in all this. ...DS
bascule Posted October 15, 2009 I was wondering, is there a proof of some sort that a recursively self-improving AI can be made? If so, can it be created by us (relatively inferior beings)? If you accept that AI can be made to begin with, then it follows that it should have the capacity to recursively self-improve if it desires to. It will have access to its own software and can make modifications.
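The "access to its own software" premise is easy to demonstrate on today's machines. A minimal sketch, assuming a script run directly from a writable file; the version counter and the trivial "improvement" are invented purely for illustration:

```python
# A program that reads and rewrites its own source: the precondition
# for self-modification mentioned above. The "improvement" here is
# just bumping a version counter, purely for illustration.

import pathlib
import re

VERSION = 1  # this literal is rewritten each time the script runs

def self_modify():
    path = pathlib.Path(__file__)
    source = path.read_text()
    bumped = re.sub(r"VERSION = (\d+)",
                    lambda m: f"VERSION = {int(m.group(1)) + 1}",
                    source, count=1)
    path.write_text(bumped)
    print(f"Ran as version {VERSION}; next run will be version {VERSION + 1}.")

if __name__ == "__main__":
    self_modify()
```

Whether anything like a mind could aim such edits at itself productively is, of course, the open question.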
dr.syntax Posted October 15, 2009 Author If you accept that AI can be made to begin with, then it follows that it should have the capacity to recursively self-improve if it desires to. It will have access to its own software and can make modifications. REPLY: The implications of an AI entity, or much more likely entities, being able to self-improve, self-modify, self-replicate, or make new, improved models of themselves seem to me absolutely limitless. Look at what we human beings have accomplished technologically in the last one hundred years alone, and how rapidly that progress has come about. There is a concept called Moore's Law: strictly an observation that the number of transistors on a chip doubles roughly every two years, though it is popularly glossed as computing power doubling every 18 months or so. There are some who say that capacity now doubles every 9 months, which, if true, could mean we are entering the transition era between what some refer to as the human era and the post-human era. I have looked at graphs that appear to indicate a sharp uptick in this rate of acceleration over the last 5 years. A careful examination of the graph provided in that wiki article appears to me to show this uptick in the rate of acceleration of mankind's computing ability. I've examined other, more detailed graphs illustrating Moore's Law which more clearly illustrate this uptick. ...Dr.Syntax
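To see what those doubling periods imply over the 5-to-30-year horizon mentioned in the opening post, a quick sketch; the starting capacity is one arbitrary unit, and the 18- and 9-month periods are the figures quoted above, not measured data:

```python
# Growth under a fixed doubling period: capacity = 2^(months / period).
# Starting capacity is 1 arbitrary unit; the 18- and 9-month periods
# come from the post above and are not measured data.

def capacity(years, doubling_months):
    return 2 ** (years * 12 / doubling_months)

for years in (5, 10, 20, 30):
    print(f"{years:>2} yr:  x{capacity(years, 18):>14,.0f} (18-month)"
          f"  x{capacity(years, 9):>18,.0f} (9-month)")
```

At an 18-month doubling, 30 years is roughly a millionfold increase (2^20); at 9 months it is roughly a trillionfold (2^40), which is the sort of gap that fuels the 5-to-30-year predictions.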
JillSwift Posted October 15, 2009 Isn't that VERY VERY VERY bad? Hmm? Why would that be bad? REPLY: The implications of an AI entity, or much more likely entities, being able to self-improve, self-modify, self-replicate, or make new, improved models of themselves seem to me absolutely limitless. Look at what we human beings have accomplished technologically in the last one hundred years alone, and how rapidly that progress has come about. There is a concept called Moore's Law: strictly an observation that the number of transistors on a chip doubles roughly every two years, though it is popularly glossed as computing power doubling every 18 months or so. There are some who say that capacity now doubles every 9 months, which, if true, could mean we are entering the transition era between what some refer to as the human era and the post-human era. I have looked at graphs that appear to indicate a sharp uptick in this rate of acceleration over the last 5 years. A careful examination of the graph provided in that wiki article appears to me to show this uptick in the rate of acceleration of mankind's computing ability. I've examined other, more detailed graphs illustrating Moore's Law which more clearly illustrate this uptick. ...Dr.Syntax Computing power is only a small portion of the scenario. Self-awareness and real problem-solving intelligence won't simply emerge because computers crossed a line on the FLOPS measurement.
dr.syntax Posted October 16, 2009 Author Hmm? Why would that be bad? Computing power is only a small portion of the scenario. Self-awareness and real problem-solving intelligence won't simply emerge because computers crossed a line on the FLOPS measurement. REPLY: I never said FLOPS measurement was all there was to it. All these technological aspects of creating AI are discussed by the people actively involved in the different AI projects, and the scientists concerned about all this discuss the different aspects and scenarios surrounding the issue in detail at: http://en.wikipedia.org/wiki/Technological_singularity . Anyone wishing to look into this can click on that wiki link and learn what they have to say about any of it. The wiki article itself gives a relatively short overview, but links are provided throughout its entirety and at the end for the different groups actively pursuing AI and the people and organizations concerned about it. ...Dr.Syntax
Mr Skeptic Posted October 16, 2009 Isn't that VERY VERY VERY bad? Dangerous, for sure. Not necessarily bad, though. In fact, it could be the best thing that happens to us. Anyhow, I don't think artificial intelligence is necessary for a singularity. Evolution itself could easily create one (several species use intelligence to assess potential mates, as intelligence correlates with general health). We could create one ourselves, if we learn to enhance our own intelligence. Our technological society could create one: even without improving our intelligence directly, we can increase our population, get machines to make our work easier (and so less), and even make machines to do some of our thinking for us. Of course, none of the above could compete with the speed at which a computer could do it with an artificial intelligence. Now here's a thought: the information in our DNA fits on a CD, and nowadays any game that does is considered "small".
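The CD comparison checks out on the back of an envelope. A sketch, assuming roughly 3.1 billion base pairs in the haploid human genome, two bits per base (four possible bases), no compression, and a 700 MB disc:

```python
# Back-of-the-envelope check of the "DNA fits on a CD" remark.
# Assumes ~3.1e9 base pairs (haploid human genome), 2 bits per base,
# no compression; a standard CD holds roughly 700 MB.

base_pairs = 3.1e9
megabytes = base_pairs * 2 / 8 / 1e6  # bits -> bytes -> megabytes
print(f"Raw genome: about {megabytes:.0f} MB, vs. a ~700 MB CD")
```

That comes out around 775 MB raw, a shade over a single disc, and comfortably under it with even mild compression.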
bascule Posted October 16, 2009 Isn't that VERY VERY VERY bad? Possibly; it depends on how it's manifested. There are alternative approaches to recursively self-improving intelligence. Imagine a neurologist who creates brain-improvement devices. After he designs and constructs the first device, he uses it on himself. This improves his intelligence and makes him better able to design the next generation of brain-improvement devices. He upgrades to the new version, becomes even more intelligent, and can design an even better device. Rinse, repeat. To me, recursively self-improving intelligence is at the heart of the whole Singularity concept. It's what explains the predicted exponential growth in technological progress. All that said, there is the potential for the Singularity to be very, very dangerous, possibly an "existential risk" to all of humanity. But hey, sit back; I think we've still got a few decades to go at least, so there's no point in worrying about it now.
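The loop described above, design a better improver, apply it, repeat, can be caricatured in a few lines. A deliberately crude sketch in which each generation's intelligence sets the quality of the next upgrade; the coefficients are invented and model nothing real:

```python
# Caricature of recursive self-improvement: the current intelligence
# determines how good the *next* upgrade is. Coefficients are invented
# purely for illustration.

intelligence = 1.0
for generation in range(1, 11):
    upgrade = 0.1 * intelligence          # smarter designer, better upgrade
    intelligence *= 1 + upgrade
    print(f"generation {generation:>2}: intelligence = {intelligence:,.2f}")
```

Because the growth rate itself grows, the curve runs away faster than any fixed exponential; that runaway is the intuition behind the "singularity" label.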
dr.syntax Posted October 16, 2009 Author (edited) I am going to use this response as an opportunity to mention a conversation between Stanislaw Ulam and John von Neumann [one of the premier mathematicians of the 20th century, with a long list of accomplishments]. Both of these men worked on the MANHATTAN Project, with von Neumann being one of the principal players in that endeavor. My point being that these are serious-minded scientists, one of whom is considered among the most important scientists of the 20th century. One of the first uses of the term singularity in the context of technological progress comes from a conversation between Ulam and von Neumann. Quoting Ulam: "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some ESSENTIAL SINGULARITY in the history of the race beyond which human affairs, as we know them, could not continue." The year 1958 is cited. Both of these men have fascinating histories which are available by clicking on their names where they appear highlighted in the wikipedia article at: http://en.wikipedia.org/wiki/Technological_singularity . If you have the time to look at this article, it is an EYE OPENER. ...Dr.Syntax REPLY: I seriously understated, or overlooked, Stanislaw Ulam's contributions to science and his importance in 20th-century science. Ulam was one of the major contributors to nuclear fusion technology, co-originating the Teller-Ulam design on which thermonuclear weapons are based, and a major figure in mathematics and other fields of research. He may also be the one who first used, or coined, the phrase Technological Singularity. My apologies, Stanislaw Ulam. He and his brother Adam barely escaped the Holocaust, leaving Poland in the summer of 1939, shortly before the German invasion. The rest of his family were exterminated along with so many other Polish Jews. Poland was the first of the countries Nazi Germany invaded during what we now call World War II. The invasion began Sept. 1, 1939, and ended in October with the division of Poland between Germany and the Soviet Union, which was at that time bound to Germany by a non-aggression pact. The Polish people made a courageous effort to defend themselves but failed. At that time, no one came along to help them. Not that they could have. The Nazis easily defeated both the French and British armies in 1940, the campaign ending on June 25, 1940, when the Franco-German armistice took effect. ...Dr.Syntax Edited October 16, 2009 by dr.syntax
iNow Posted October 16, 2009 Ulam was one of the major contributors to nuclear fusion technology, co-originating the Teller-Ulam design on which thermonuclear weapons are based, and a major figure in mathematics and other fields of research. He may also be the one who first used, or coined, the phrase Technological Singularity. My apologies, Stanislaw Ulam. He and his brother Adam barely escaped the Holocaust, leaving Poland in the summer of 1939, shortly before the German invasion. The rest of his family were exterminated along with so many other Polish Jews. Poland was the first of the countries Nazi Germany invaded during what we now call World War II. The Polish people made a courageous effort to defend themselves but failed. At that time, no one came along to help them. Not that they could have. The Nazis easily defeated both the French and British armies in 1940. That reads like a book report from a student who only read stuff from wikipedia.
A Tripolation Posted October 16, 2009 Hmm? Why would that be bad? Mainly because of what bascule and Mr Skeptic wrote. I REALLY wouldn't be comfortable knowing that there was an entity capable of an infinite improvement rate, with little to no effort on its part. It all just seems to have a very Matrix-y outcome.
dr.syntax Posted October 16, 2009 Author That reads like a book report from a student who only read stuff from wikipedia. REPLY: So what if it does? Am I supposed to pretend that I carry all this knowledge around in my memory? Get real, you sick little man. ...Dr.Syntax
iNow Posted October 16, 2009 REPLY: So what if it does? Am I supposed to pretend that I carry all this knowledge around in my memory? Get real, you sick little man. ...Dr.Syntax Some people never learn.
dr.syntax Posted October 16, 2009 Author (edited) Mainly because of what bascule and Mr Skeptic wrote. I REALLY wouldn't be comfortable knowing that there was an entity capable of an infinite improvement rate, with little to no effort on its part. It all just seems to have a very Matrix-y outcome. REPLY: Perhaps with the TERMINATOR scenarios tossed in. No one can possibly know what this will lead to. When and if superhuman intelligence emerges, there are far too many variables to predict much of anything. I can't imagine why an entity with superhuman intelligence would be any more likely to comply with our desires than we would be inclined to take orders from a mouse or a beaver, a rat or a raccoon. If we take a look at the way we regard and treat the other animals on this planet, we would not have much reason to expect much mercy. Think of cattle, swine, and poultry and you can see what I mean. ...Dr.Syntax Edited October 16, 2009 by dr.syntax
mooeypoo Posted October 16, 2009 That reads like a book report from a student who only read stuff from wikipedia. iNow, this type of statement doesn't really contribute anything to the thread other than to push it towards the personal-attack angle. Please refrain from those. dr.syntax, you should really provide citations for your claims, though. This type of post is clearly taken from somewhere; please give us the common courtesy of being able to check and read more for ourselves, as well as avoiding plagiarism, and supply your sources. ~moo
dr.syntax Posted October 16, 2009 Author iNow, this type of statement doesn't really contribute anything to the thread other than to push it towards the personal-attack angle. Please refrain from those. dr.syntax, you should really provide citations for your claims, though. This type of post is clearly taken from somewhere; please give us the common courtesy of being able to check and read more for ourselves, as well as avoiding plagiarism, and supply your sources. ~moo REPLY: I have noted throughout this thread that my reference was that wikipedia article. In particular, I said my knowledge of Ulam and von Neumann came from the wiki article, and I explained how, by clicking on their names where they are highlighted, you can reach their biographies. Also, is this not what iNow was ridiculing me for doing? Something to the effect that my postings were like a book report on what I had read at that wiki site. I never claimed it was anything other than information I had gathered there, along with some of the links provided there. There are statements I make that come from what I have learned there, but the fact that it is published at wiki makes it seem to me that it falls into the category of general knowledge. These are not quotes, which I mark when I see it as applicable. The one time I thought an idea of mine might have originated with me, I said so. This was in some old thread having nothing to do with the current topic. I make a sincere effort to give credit to the authors of ideas that are not general knowledge. I need to close this posting. Thank you for your efforts on my behalf, Dr.Syntax
JillSwift Posted October 16, 2009 Mainly because of what bascule and Mr Skeptic wrote. I REALLY wouldn't be comfortable knowing that there was an entity capable of an infinite improvement rate, with little to no effort on its part. It all just seems to have a very Matrix-y outcome. REPLY: Perhaps with the TERMINATOR scenarios tossed in. No one can possibly know what this will lead to. When and if superhuman intelligence emerges, there are far too many variables to predict much of anything. I can't imagine why an entity with superhuman intelligence would be any more likely to comply with our desires than we would be inclined to take orders from a mouse or a beaver, a rat or a raccoon. If we take a look at the way we regard and treat the other animals on this planet, we would not have much reason to expect much mercy. Think of cattle, swine, and poultry and you can see what I mean. ...Dr.Syntax The problem here is that the only idea we have about an intelligence comes from ourselves. Much of what we do comes from some portion of our brain that was forged in the chaotic crucible of evolution. It's all about survival for us, from simple self-preservation to making sure we get all the resources we can, to sexuality. These things would be entirely irrelevant to Turing-machine-based AIs. What they want will be something more likely to have been designed by us. Or perhaps an extension of their own kind of survival. It seems wildly unlikely that we'll ever have a Bomb 20 scenario. Why would we ever build such a thing?
bascule Posted October 16, 2009 I can't imagine why an entity with superhuman intelligence would be any more likely to comply with our desires than we would be inclined to take orders from a mouse or a beaver, a rat or a raccoon. Well, for one thing, it can talk to us. We can't talk to animals.
dr.syntax Posted October 16, 2009 Author (edited) The problem here is that the only idea we have about an intelligence comes from ourselves. Much of what we do comes from some portion of our brain that was forged in the chaotic crucible of evolution. It's all about survival for us, from simple self-preservation to making sure we get all the resources we can, to sexuality. These things would be entirely irrelevant to Turing-machine-based AIs. What they want will be something more likely to have been designed by us. Or perhaps an extension of their own kind of survival. It seems wildly unlikely that we'll ever have a Bomb 20 scenario. Why would we ever build such a thing? REPLY: Hello JillSwift. It would seem to me that once self-aware AI units emerge with superhuman intelligence, whatever we may have originally designed or programmed them for will, after a short while, not be of overriding concern to them. Their thought processes will quickly develop independent notions, ideas, and purposes that may or may not agree with our well-being. And who will have any control over who may program such entities? The different military organizations throughout the world, both public and private, are deeply involved in AI and robotics. The U.S. Navy is one of the links in that wiki article expressing concern about soldier robots having any ability to make decisions on their own. This is part of what I mean when I say no one person or group can possibly foresee or control what this will lead to. This is a worldwide endeavor with many groups pursuing their own different agendas. Given what human history has consistently illustrated, there is good reason to imagine a war among AI robots, with or without human intervention. I am not saying that will happen, but I can easily imagine that it might occur. The more you think about it, the more unpredictable and uncontrollable it all appears to be. I'd like very much for us both to relate to each other in a non-hostile way. Sincerely, Dr.Syntax ...PS... I read the text version of the BOMB 20 SCENARIO. Did you write that? I enjoyed it, for what it's worth. Well, for one thing, it can talk to us. We can't talk to animals. REPLY: Have you never had a dog for a pet or friend? We communicate very well with them, and if we treat them well they make wonderful friends. And I have to admit to taking orders from my pet dogs. But I doubt I would take orders from some critter such as a baboon: ornery brutes who are forever squabbling for dominance and such. Many people are pretty ornery also. Why would any superior beings be inclined to take orders from their mental and physical inferiors, especially as the inferiority became more and more pronounced and obvious? These are the sort of questions those worried about AI units are asking themselves. ...Dr.Syntax Edited October 16, 2009 by dr.syntax
JillSwift Posted October 16, 2009 REPLY: Hello JillSwift. It would seem to me that once self-aware AI units emerge with superhuman intelligence, whatever we may have originally designed or programmed them for will, after a short while, not be of overriding concern to them. Their thought processes will quickly develop independent notions, ideas, and purposes that may or may not agree with our well-being. And who will have any control over who may program such entities? The different military organizations throughout the world, both public and private, are deeply involved in AI and robotics. The U.S. Navy is one of the links in that wiki article expressing concern about soldier robots having any ability to make decisions on their own. This is part of what I mean when I say no one person or group can possibly foresee or control what this will lead to. This is a worldwide endeavor with many groups pursuing their own different agendas. Given what human history has consistently illustrated, there is good reason to imagine a war among AI robots, with or without human intervention. I am not saying that will happen, but I can easily imagine that it might occur. The more you think about it, the more unpredictable and uncontrollable it all appears to be. I'd like very much for us both to relate to each other in a non-hostile way. Sincerely, Dr.Syntax This sounds like an argument from consequence. People's concerns about what the AI may or may not do are based on what humans do. As I tried to explain, these will not be human. The AIs will have their own wants, and those will likely be designed by us. There is also the problem of the term "AI" encompassing heuristic or difference-engine decision trees: weapon robots will not likely be self-aware or able to make new connections between facts, the basis for real intelligence. They instead make decisions based on far simpler heuristics or even simpler difference-engine decision-tree algorithms. If the decision to attack their human masters isn't in the tree, then it's not going to happen by choice. By accident, perhaps, as has already happened a few times with automated weaponry like anti-aircraft guns. ...PS... I read the text version of the BOMB 20 SCENARIO. Did you write that? I enjoyed it, for what it's worth. Ha! I wish. That was from the movie "Dark Star" by John Carpenter and Dan O'Bannon.
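To make the fixed-decision-tree point above concrete, a toy sketch; the states, classifications, and actions are invented for illustration and bear no relation to any real weapons system:

```python
# A fixed decision tree can only select among actions its designers
# put into it; an action that is not a leaf is simply unreachable.
# All states and actions here are invented for illustration.

DECISION_TREE = {
    "target_identified": {
        "hostile": "engage",
        "friendly": "hold_fire",
        "unknown": "request_human_authorization",
    },
    "no_target": {
        "any": "continue_patrol",
    },
}

def decide(state: str, classification: str) -> str:
    branch = DECISION_TREE.get(state, {})
    # Anything outside the tree falls through to a safe default.
    return branch.get(classification, "hold_fire")

print(decide("target_identified", "unknown"))  # request_human_authorization
print(decide("sensor_glitch", "hostile"))      # hold_fire (not in the tree)
```

Accidents, as noted above, come from misclassification inside the tree, not from the machine inventing a goal of its own.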
Mr Skeptic Posted October 16, 2009 dr.syntax, you should really provide citations for your claims, though. This type of post is clearly taken from somewhere; please give us the common courtesy of being able to check and read more for ourselves, as well as avoiding plagiarism, and supply your sources. It's not like he copied and pasted it, though; it's more of a summary of the wiki article. Obviously, in an academic setting you'd put a reference there. But when your knowledge comes from various sites, you'd have to put a bunch of references, which would be rather bulky for an online discussion. As for finding more, a basic google search would easily do it. The AIs will have their own wants, and those will likely be designed by us. True, and that could result in an AI working very hard to ensure that it retains the wants that we designed into it. However, I think the likeliest type of AI to cause a technological singularity is the type that wants to increase its intelligence. And we might conflict with that, by limiting its resources and trying to order it about.