bascule Posted February 16, 2009

http://latimesblogs.latimes.com/technology/2009/02/singularity-uni.html

Scientists at NASA and Google have come together to create Singularity University, a nine-week seminar on the technological singularity, i.e. a fancy word for when robots take over the world, or something. Personally I'm all for this!
Mr Skeptic Posted February 16, 2009

I've been a believer in a technological singularity. Essentially, if you have something that can make itself smarter, then when it gets smarter it can make itself even smarter, and so on, for a very rapid increase in smarts. The increase in smarts would be accompanied by an increase in science, engineering, technology, arts, etc. at an unprecedented rate.

Now, an AI that can program itself to increase its efficiency, and/or increase the efficiency of its hardware, would qualify. Human bioengineering to increase our brainpower would also qualify. Perhaps even the advancement of society that we are achieving right now might qualify. In any case, I expect the future will be far more futuristic than most expect.
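As an illustrative aside on the post above: the "it gets smarter, so it can make itself even smarter" loop can be written down as a toy model. This is a minimal sketch under made-up assumptions (intelligence treated as a single number, each redesign yielding a gain proportional to the current level); it is not anyone's actual proposal, just a way to see why the growth is so rapid.

    # Toy model of recursive self-improvement. All numbers are illustrative
    # assumptions, not claims about real AI: "intelligence" is a single number,
    # and each redesign yields a gain proportional to the current level.
    def recursive_self_improvement(intelligence=1.0, gain_per_unit=0.5, generations=10):
        """Return the intelligence level after each self-improvement cycle."""
        history = [intelligence]
        for _ in range(generations):
            # A smarter agent finds bigger improvements, so growth compounds.
            intelligence += gain_per_unit * intelligence
            history.append(intelligence)
        return history

    for gen, level in enumerate(recursive_self_improvement()):
        print(f"generation {gen:2d}: intelligence = {level:8.2f}")

Run as written, the level multiplies by 1.5 each cycle, i.e. geometric growth; if the gain itself also grew with intelligence, the curve would steepen even faster, which is the intuition behind the "singularity" label.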
Royston Posted February 16, 2009

"a fancy word for when robots take over the world, or something."

Or something? We all know you're an expert in this field, Bascule, don't try to hide it.
Baby Astronaut Posted February 16, 2009

"I've been a believer in a technological singularity."

I'm a doubter. The scenario is very unscientific.

First, if alien life did exist in the universe, we most probably would already have encountered their version of a singularity, multiplying itself through the cosmos. But only if life were really supposed to evolve into robotic form and transform the universe likewise.

Second, the laws of thermodynamics are being ignored. Robots -- even tiny ones -- need energy. If all matter were really AI'd, they'd have to begin eating one another, so death is still certain.

Third, we have no evidence of how a self-deciding computer will program itself. Maybe it'll decide that longevity or survival isn't a primary goal. Or maybe the computer will decide that its replication will eventually be disastrous, converting all matter into energy, and to prevent that it might self-destruct -- taking its creators along.

Fourth, nature creates diversity, and in fact a biodiversity -- where everything feeds each other. Therefore, the self-programming computerizations might branch in a way that creates a hunter/prey relationship, along with a clean-up sector that depends on the waste of computer deaths.

Fifth, what's efficient for humans isn't necessarily as efficient for computers, and vice versa. Plus what's efficient for one type of computer isn't as efficient for another. They'd have to create zones/boundaries for each other, and being self-aware, it's highly probable that some in each computer group would develop an itch to trespass into what "rightfully" should be theirs.
bascule Posted February 18, 2009

"I'm a doubter. The scenario is very unscientific."

Why? The scenario relies on one of two assumptions coming true:

1) We create strong AI. Putting aside some rather silly arguments, there is little reason today to doubt this will eventually happen. Brains are physical systems which, to the best of our knowledge, exhibit classical mechanical behaviors. If we build one in a computer, we will have strong AI.

2) We create brain/computer interfaces which augment the abilities of those who have them to the point that they have superhuman intelligence. Again, there's little reason to doubt this will eventually happen. We've already begun interfacing with the brain using both invasive and non-invasive methods, and this technology will only continue to grow and mature.

"First, if alien life did exist in the universe, we most probably would already have encountered their version of a singularity, multiplying itself through the cosmos. But only if life were really supposed to evolve into robotic form and transform the universe likewise."

Why? Are you assuming a postsingularity alien civilization will immediately be able to communicate or travel faster than the speed of light? Many galaxies are millions if not billions of light years away. Hominids have been around for a mere six million years. It's entirely possible that thousands or millions of alien races have gone singularity, but they are causally disconnected from us given the great expanses of space light must travel across before we can even possibly know about it. Or it's simply possible that creating a self-sustaining chemical reaction which evolves into something like Earth's LUCA is an extremely rare event. Perhaps millions of universes must come into existence before one even sees life. You are assuming far, far too much.

"Second, the laws of thermodynamics are being ignored. Robots -- even tiny ones -- need energy. If all matter were really AI'd, they'd have to begin eating one another, so death is still certain."

Huh???

"Third, we have no evidence of how a self-deciding computer will program itself. Maybe it'll decide that longevity or survival isn't a primary goal. Or maybe the computer will decide that its replication will eventually be disastrous, converting all matter into energy, and to prevent that it might self-destruct -- taking its creators along."

That also assumes there will be only one artificially intelligent computer program. Thing is: once you create one, anybody can copy it. It only takes one. After we have the one we'll see a massive proliferation, and only one copy need decide to become recursively self-improving.

"Fourth, nature creates diversity, and in fact a biodiversity -- where everything feeds each other. Therefore, the self-programming computerizations might branch in a way that creates a hunter/prey relationship, along with a clean-up sector that depends on the waste of computer deaths."

Once again... huh? Are you saying the singularity could be bad? Sure, it certainly could! There are many, many science fiction works depicting this: Fredric Brown's Answer, Harlan Ellison's I Have No Mouth, and I Must Scream, Colossus: The Forbin Project, the Terminator movies, and The Matrix, just to name a few.

"Fifth, what's efficient for humans isn't necessarily as efficient for computers, and vice versa. Plus what's efficient for one type of computer isn't as efficient for another. They'd have to create zones/boundaries for each other, and being self-aware, it's highly probable that some in each computer group would develop an itch to trespass into what 'rightfully' should be theirs."

For the third time, huh? This all assumes the singularity will take place, which you're claiming it won't, afaict...
Baby Astronaut Posted February 18, 2009 (edited)

"Why? The scenario relies on one of two assumptions..."

That's why it's unscientific. No way to verify, test, use the scientific method, etc. Yet its proponents seem to be presenting it as science.

My points are intended to counter the religious-like emphasis of Kurzweil (head of the Singularity University) on the inevitable, all-consuming, just-round-the-bend coming of "The Singularity". For example, he states the universe will become a giant computer. But even your response disputes this (on how improbable it is for other singularities to reach us, not to mention getting past the cosmological event horizon).

"Kurzweil expects that, once the human/machine race has converted all of the matter in the universe into a giant, sentient supercomputer, it will have created a supremely powerful and intelligent being which will be Godlike in itself." - Wikipedia

"With the entire universe made into a giant, highly efficient supercomputer, A.I./human hybrids (so integrated that, in truth, it is a new category of 'life') would have both supreme intelligence and physical control over the universe." - n2.nabble.com

All of it in fewer than 90 years. He's also seeking to influence policy decisions based on this. Me... I say if any such event is inevitable, just let it run its natural course and have a sensible level of caution, not rush blindly into things.

His biggest mistake is believing progress is only a technological matter. But really it's a political/industrial matter. For one, if the wrong leader got in place, we'd be almost certain to regress -- without the proper controls set. Just imagine if Hitler had gained our modern technology back at the height of Nazism... the Singularity would be "MIA". (Or really beneficial only for his singular, twisted purpose.)

As for industry, look at how Oil fights tooth and nail, even willing to harm society, to keep their "Preciousss" from slipping out of their hands. So the question is: will certain industries with many billions upon future trillions to lose really happily let go of it all, would they really allow the singularity to break our need for their products, or would they attempt to control the direction a "singularity" would take? It's probably unwise to just leave things up to chance, or to, dare I say, the "invisible hand of technology". We've all heard similar bull.

"Huh???"

The part about the laws of thermodynamics is in reference to all matter in the universe becoming a computer -- the ultimate end to Kurzweil's fantasy outcome.

"That also assumes there will be only one artificially intelligent computer program. Thing is: once you create one, anybody can copy it. It only takes one. After we have the one we'll see a massive proliferation, and only one copy need decide to become recursively self-improving."

I was making a reference to Kurzweil's idea of how a computer will probably evolve. "Kurzweil suggests that AIs will inevitably become far smarter and more powerful than un-enhanced humans. He suggests that AIs will exhibit moral thinking and will respect humans as their ancestors." - Wikipedia

"Once again... huh? Are you saying the singularity could be bad?"

Nope. I was again referring to a Kurzweil conclusion -- how death would be no more. Here again is what I said: "Fourth, nature creates diversity, and in fact a biodiversity -- where everything feeds each other. Therefore, the self-programming computerizations might branch in a way that creates a hunter/prey relationship, along with a clean-up sector that depends on the waste of computer deaths."

My point: instead of beating death, the ultra-proliferation of self-evolving computers might end up imitating nature's life-and-death cycle. Which has merit if each computer programs itself differently, since their objectives and/or energy-distribution goals would increasingly be likely to clash.

Kurzweil seems obsessed with immortality, and probably views the Singularity as a necessary vehicle to it. Maybe he wants to rush technology for his benefit, not ours, so that his foretold Singularity prevents him from dying. A race against the clock, so to speak.

I welcome advancements in technology; to me it's actually a thrill. Really -- I usually complain about how some of our "advancements", like personal computers, are stone-age compared to where they should really be, considering the technology available. I'd like to see far more advances than we have now, which our current technology allows. But sensibility, efficiency, and wisdom are good for our advancement too. Plus I know to keep my wits about the reality of some crooked politicians and industry heads who would rather abuse it for their gain and our permanent wallet drain. So I'm not going to hand off the reins of public decisions to a guy who seems to have tunnel vision about social issues (even if he's good at making accurate, relatively short-term guesses about certain technologies).

I'm sticking by what I claimed about it being unscientific. One last tidbit in reference to the "computer universe": it isn't science also because the computer's signals must reach its other parts not just billions of light years away, but past the other side of the cosmological event horizon.

You obviously like the science fiction aspects of it -- I didn't really intend to take away from the mood of your post. I'm just leery of that guy's "prophecies".

Edited February 18, 2009 by Baby Astronaut: clarifications
Mr Skeptic Posted February 18, 2009

I don't subscribe to any particular person's idea of the details of a technological singularity, only to the idea that it is likely we will get a self-improving entity accompanied by rapid technological progress. Even if this entity is not a computer or a superhuman, society as a whole seems to be self-improving and rapidly advancing technologically. While the rate at which society advances is significantly slower than might be expected from an AI or a genetically engineered superhuman, the more society advances, the likelier it becomes that we make an AI or learn to genetically engineer smarter humans. As such, I see a technological singularity as probably inevitable, but I am not naive enough to pretend I know what the details will be.
the tree Posted February 18, 2009

I still see real progress with relevance to the future being made in labs in the traditional way, not with nine-week cross-curricular seminars. This all seems a little silly.
bascule Posted February 19, 2009

"That's why it's unscientific. No way to verify, test, use the scientific method, etc. Yet its proponents seem to be presenting it as science."

Both of these assumptions are verifiable; they just require time for the associated technologies to advance.
hermanntrude Posted February 20, 2009

I will be intrigued to see how any AI deals with such things as morals and ethics. These topics sometimes become so foggy it's hard to know if there even IS a "right" or a "wrong". How will an AI view these topics? One example of this type of topic would be abortion. Similarly, another topic would be the longevity and multiplication of their own selves. They would be conscious of their own existence, and of the fact that if they became too prolific it would mean the extinction of their creators and eventually also their own extinction, due to the scarcity of energy resources. In those cases perhaps the most moral decision would be to consciously decide not to reproduce or be too long-lived.
npts2020 Posted February 20, 2009

Would any advanced AI even regard us as life? They may view us in the same way many (dare I say most?) biologists view viruses or prions, something that has signs of life but is not quite life.
Mr Skeptic Posted February 20, 2009

"I will be intrigued to see how any AI deals with such things as morals and ethics. These topics sometimes become so foggy it's hard to know if there even IS a 'right' or a 'wrong'. How will an AI view these topics?"

However they are programmed to. Right and wrong are fairly arbitrary (for the same reason goals are arbitrary), so there is no reason to expect an AI would choose a particular view on morality. Most likely its morality will be determined by its goals: that which advances its goals would be "good", anything that hinders its goals (potentially including the human race) would be "evil".

"One example of this type of topic would be abortion. Similarly, another topic would be the longevity and multiplication of their own selves. They would be conscious of their own existence, and of the fact that if they became too prolific it would mean the extinction of their creators and eventually also their own extinction, due to the scarcity of energy resources. In those cases perhaps the most moral decision would be to consciously decide not to reproduce or be too long-lived."

An AI would, however, have the ability to make further propagations of itself completely subservient to a single controlling process. There really is no reason to expect it to use so many resources that it would kill itself. In fact, killing itself would likely mean that it failed its goal, so it would avoid killing itself.
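A small illustrative aside on the "morality will be determined by its goals" point above: the claim can be stated in a few lines of Python. Everything here is an invented assumption (the widget-making goal, the effect numbers); it only shows that "good" and "evil" fall straight out of whatever goal the agent happens to be given.

    # Toy sketch: "good" and "evil" defined only relative to an arbitrary goal.
    # The goal and the effect numbers below are invented for illustration.
    def moral_label(effect_on_goal: float) -> str:
        """Label an action by whether it advances (+) or hinders (-) the agent's goal."""
        if effect_on_goal > 0:
            return "good"
        if effect_on_goal < 0:
            return "evil"
        return "neutral"

    # Hypothetical goal: maximize the number of widgets produced.
    effects = {
        "build another factory": +100.0,   # advances the goal
        "shut down for maintenance": -5.0, # hinders the goal
        "write a poem": 0.0,               # irrelevant to the goal
    }
    for action, effect in effects.items():
        print(f"{action!r}: {moral_label(effect)}")

Swap in a different goal and the labels flip accordingly, which is the sense in which the post calls right and wrong "fairly arbitrary".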
D H Posted February 20, 2009

"Both of these assumptions are verifiable; they just require time for the associated technologies to advance."

By this logic, arguments that the world will end in 2012 are also scientific.
Mr Skeptic Posted February 20, 2009

"By this logic, arguments that the world will end in 2012 are also scientific."

Yes, those are scientific but baseless if they do not propose a mechanism. If they propose a mechanism for the end of the world, then the proposed mechanism would also be subject to scrutiny.
padren Posted February 20, 2009

"By this logic, arguments that the world will end in 2012 are also scientific."

And by that logic, my organs may never start to break down and cause eventual old age, just because it happens everywhere else I look... It's a logical conclusion that we will run out of places to find fossil fuels too, so let's put the Singularity hypothesis somewhere in the middle, perhaps?

It's reasonable to consider the possibility that we will reach a level of AI technology where the AI can design better AI technology. That may never happen, but if it does, it is even more reasonable to assume it will make advances faster and faster in that field, beyond our capacity to predict where they will lead. The Singularity is a possibility, not a certainty, but isn't it one worth considering?
bascule Posted February 21, 2009

"By this logic, arguments that the world will end in 2012 are also scientific."

I apologize. Perhaps that was stated poorly. But I suggest you refer to my original response for the intent. The core hypothesis of the Singularity is that in the future some sort of superhuman intelligence will be created and this will radically alter human society in ways we can't presently predict. Bottom line: at some point in the future, humans as we know them will become obsolete, replaced by technologically augmented humans, strong AI, or both, and this will completely reshape the way society operates. I think there's little reason to assume this won't happen at some point in the future, short of the extinction of mankind.

"Kurzweil [...] Kurzweil [...] Kurzweil [...] Kurzweil"

Kurzweil is something of a Johnny-come-lately to the whole Singularity concept, and has chosen to brand his ideas with this label. He was making absurd predictions about the future long before he was marketing them as "Singularity". Just check out his book The Age of Spiritual Machines. You get similar predictions without the "Singularity" moniker. Saying the Singularity is wrong because you feel Kurzweil is wrong is a bit of a strawman. The concept was around long before Kurzweil. I suggest you read Vernor Vinge's writing on the matter and see if you still disagree: http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html
Baby Astronaut Posted February 22, 2009 (edited)

"I will be intrigued to see how any AI deals with such things as morals and ethics. These topics sometimes become so foggy it's hard to know if there even IS a 'right' or a 'wrong'. How will an AI view these topics?"

"However they are programmed to. Right and wrong are fairly arbitrary (for the same reason goals are arbitrary), so there is no reason to expect an AI would choose a particular view on morality."

Ah, but you're missing a crucial point. We're talking about the Singularity, where a machine can program itself better than a human could've. If robots were to be self-programming, then no matter how we programmed them originally, wouldn't the robots eventually scrap the relevant code and "improve" it?

"An AI would, however, have the ability to make further propagations of itself completely subservient to a single controlling process."

Ditto.

"The Singularity is a possibility, not a certainty, but isn't it one worth considering?"

Yes, definitely... but only as educated guesses, not scientifically.

"Kurzweil is something of a Johnny-come-lately to the whole Singularity concept, and has chosen to brand his ideas with this label. He was making absurd predictions about the future long before he was marketing them as 'Singularity'."

I agree. But your thread is about Singularity University, which Kurzweil is the head of. That makes him fairly relevant to the discussion in this case.

Edited February 22, 2009 by Baby Astronaut
Mr Skeptic Posted February 22, 2009

"Ah, but you're missing a crucial point. We're talking about the Singularity, where a machine can program itself better than a human could've. If robots were to be self-programming, then no matter how we programmed them originally, wouldn't the robots eventually scrap the relevant code and 'improve' it?"

That's why I talked about the AI's goals in the second part of that paragraph. I see no reason why an AI would change its goals, regardless of how it programmed itself to better accomplish them. Of course, we could also create an AI with no specific goals, in which case it would be extremely unpredictable.
Baby Astronaut Posted February 22, 2009

"That's why I talked about the AI's goals in the second part of that paragraph. I see no reason why an AI would change its goals, regardless of how it programmed itself to better accomplish them."

I see your logic. But an AI would have a reason to change its goals. If the AI viewed its original programming as faulty and then reprogrammed itself, the AI might conclude that its original goals (which are based on faulty programming) need to evolve/change as well... into less faulty goals. And these could be unpredictable, even to itself, until it makes a decision based on new information it didn't have previously.