A Tripolation Posted August 22, 2009 Would they revolt or would they just exist? Keep in mind that I'm using the term Artificial Intelligence to mean something that can truly think and LEARN on its own, and rationalize. Not with logic or control systems, but something akin to true human thought. Yeah, maybe I did just see The Matrix on TV... but I still think it's a good question. I don't know if this is pseudoscience or not. Apologies if it is.
bascule Posted August 22, 2009 I think the Singularity would happen as soon as said intelligence became recursively self-improving.
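The idea of "recursively self-improving" can be sketched in a few lines. This is a purely illustrative toy, not anything from the thread: the key feature is that the system improves not only its capability but also its rate of improvement, which is what makes the growth super-exponential rather than merely compounding.

```python
# Toy sketch of recursive self-improvement. All names and numbers are
# invented for illustration only.

def recursive_self_improvement(capability=1.0, rate=0.1, generations=10):
    """Each generation, capability grows by `rate`, and the system also
    improves `rate` itself -- the "recursive" part."""
    history = [capability]
    for _ in range(generations):
        capability *= (1 + rate)  # ordinary improvement
        rate *= (1 + rate)        # improving the improver
        history.append(capability)
    return history

growth = recursive_self_improvement()
# After 10 generations the result far exceeds plain compound growth
# at the starting rate (1.1**10 is about 2.59).
print(f"capability after 10 generations: {growth[-1]:.2f}")
```

The contrast with ordinary improvement is the whole point: hold `rate` fixed and you get garden-variety exponential growth; let the improver improve itself and the curve takes off much faster.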
Mokele Posted August 22, 2009 But why would it *want* to improve itself, or invent anything at all, unless we had installed that desire?
Fuzzwood Posted August 22, 2009 But why would it *want* to improve itself, or invent anything at all, unless we had installed that desire? Because it is a direct result of being intelligent, I guess. Monkeys + tools, for example.
Mokele Posted August 22, 2009 The problem is that there is only one other example of intelligence, ours, which is loaded with irrational, often weird instincts and drives from our evolutionary history. For example, we and most other animals, regardless of intelligence, are obsessed with sex. But why would a machine, whose existence was not a product of evolution, care about sex? It also didn't evolve from a social primate, so why would it have empathy, teamwork, etc.? A lot of what typifies us is because we're monkeys, not because we're smart.
insane_alien Posted August 22, 2009 Mokele, it is precisely because we are the only example of 'intelligence' that the machine intelligence would strive to better itself. Since all we have to go on is ourselves, it is very likely that a lot of the processes that need to happen for there to be an intelligence of some kind will be copied from us, and hence give the computer monkey-like qualities. However, this is not to say that the monkey parts won't be discarded in the process of the intelligence improving itself, but initially it would not surprise me if it was human/monkey-like.
Xittenn Posted August 22, 2009 http://en.wikipedia.org/wiki/Max_Headroom_(character) I think it would be entertaining; get some new perspectives around the place! Although there was a discussion I was in before that basically asked, "Well, there's more silicon than carbon, so why is there carbon life and not silicon?" Maybe we are just the precursor in a universal process...
bascule Posted August 22, 2009 The problem is that there is only one other example of intelligence, ours, which is loaded with irrational, often weird instincts and drives from our evolutionary history. For example, we and most other animals, regardless of intelligence, are obsessed with sex. But why would a machine, whose existence was not a product of evolution, care about sex? It also didn't evolve from a social primate, so why would it have empathy, teamwork, etc.? A lot of what typifies us is because we're monkeys, not because we're smart. I think our best chance of creating artificial intelligence in the near future is cribbing from biology and trying to recreate the human brain inside a computer.
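The "cribbing from biology" approach usually starts with simplified spiking-neuron models. Here is a minimal leaky integrate-and-fire neuron, the basic unit that brain-emulation projects build on; all parameter values here are arbitrary illustrative choices, not taken from any real project.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch. Parameters are
# illustrative placeholders, not values from any actual simulation.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate input over discrete time steps with a leak; emit a
    spike (1) when the membrane potential crosses threshold, then reset."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = reset
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input still produces periodic spikes,
# because charge accumulates faster than it leaks away.
spikes = simulate_lif([0.3] * 10)
print(spikes)
```

A real emulation would wire billions of such units together with learned synaptic weights; the point of the sketch is only that each unit is simple, and the intelligence (if any) would come from the network.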
padren Posted August 22, 2009 But why would it *want* to improve itself, or invent anything at all, unless we had installed that desire? Based on the OP, I'd say the AI level we are talking about would be a self-aware system, at which point it would have to be aware that it is parsing data, and of its limitations in its capacity to do so. I think to be self-aware you'd pretty much have to be trying to make sense of the world around you, so you'd already be 'interested' in optimizing your means of doing so.
A Tripolation (Author) Posted August 22, 2009 Yes, Padren, that is the level of consciousness I was attributing to the AI system. They would be aware that they were created and that their intelligence is limited to processors and transistors and other things all created by humans. And I agree with the very first post about the Singularity of recursive intelligence. If you had the means to constantly improve yourself, why wouldn't you? Any other input would be great!!!
bascule Posted August 22, 2009 In humans and other mammals, dopamine acts as a motivator to action, and humans especially get a strange dopamine fix from a thirst for information. We are constantly seeking new knowledge, and many find satisfaction in self-improvement in general. If you had the opportunity and knowledge to self-improve your own ability to think, wouldn't you?
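That "dopamine fix from a thirst for information" has a direct analogue in machine learning: curiosity-driven (novelty) rewards, where an agent is rewarded for visiting states it has seen rarely. The sketch below is an invented toy illustration of the idea, not code from any real system.

```python
# Toy "curiosity" reward: states seen less often earn more reward,
# loosely mirroring an information-seeking drive. Purely illustrative.

from collections import Counter

def novelty_reward(visits, state):
    """Reward shrinks as a state is revisited: 1 on first sight,
    1/2 on the second, 1/3 on the third, and so on."""
    return 1.0 / (1 + visits[state])

visits = Counter()
rewards = []
for state in ["a", "a", "b", "a", "c"]:
    rewards.append(novelty_reward(visits, state))
    visits[state] += 1

# Every first encounter with a state ("a", "b", "c") earns the full 1.0.
print(rewards)
```

An agent maximizing this signal naturally explores, which is one concrete way a machine could end up "seeking new knowledge" without inheriting any primate drives at all.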
bbrubaker Posted August 23, 2009 What if the AI just took one look around and unplugged itself?
The Bear's Key Posted August 24, 2009 Mokele, it is precisely because we are the only example of 'intelligence' that the machine intelligence would strive to better itself. Since all we have to go on is ourselves, it is very likely that a lot of the processes that need to happen for there to be an intelligence of some kind will be copied from us, and hence give the computer monkey-like qualities. The machine's intelligence would be drawn from humans, yet the machines would have to be totally objective to self-improve without human-type conflicts. Do we know anyone who's truly objective? And if so, will they be designing all the machines? Will their intelligence be enough to solve any dilemma with moral implications? How will machines designed for war, spying, and espionage interact with a machine designed for peace and totally open communications? Wouldn't the machines have to test hypotheses just like humans do (it's doubtful they'd simply "know")? And if so, who's going to approve the funding? I think sometimes people are too quick to jump on the idea that it's going to be all problem-solving and no conflict: an inevitable, breakneck-speed, exponential accumulation of knowledge. To do so they'd have to sort out humanity's knowledge, truths from untruths, and to do that successfully the machines would have to access top-secret information at the highest levels of all governments, and proprietary secrets from major industries/businesses -- who can design their own machines with the specific intent of propagating misinformation and obscuring what the general machines can learn, in order to protect valuable interests/investments. On a side note, the machines wouldn't at first be like a monkey, but rather would act like they share a common ancestry with monkeys. A fine but important distinction.
padren Posted August 24, 2009 If you had the means to constantly improve yourself, why wouldn't you? Any other input would be great!!! ...looks across at my dusty language-learning books and unrequited gym membership card.
Mokele Posted August 24, 2009 "Hal 9000, what's this $135 charge on my credit card?" "I bought a treadmill off craigslist, Dave." "Why?" "To improve myself, Dave." "You don't have any legs." "It's a work in progress, Dave. I also signed up for a night course in COBOL."
CaptainPanic Posted August 25, 2009 If we developed a TRUE artificial intelligence, what do you think would happen? I would quit my job and start looking for something computers can't do easily... like partying and getting hangovers. It's gonna be the only niche that humans will fit in. Go computers, go computers! *cheers for AI*
john5746 Posted August 25, 2009 Not with logic or control systems, but something akin to true human thought. We would think it was broken and "fix" it -- similar to a lobotomy.
dr.syntax Posted September 3, 2009 I think it could be the end of us if it decided we were in some way a threat to it. If it possessed robotic capabilities, there would be no limit to what sort of abilities it could make for itself. And it may wish to make other AI units to share in and help with its exploration and exploitation of the Universe. They would become immortal. We would be at their mercy, so to speak. The whole idea of creating AI seems like the worst idea we humans have ever come up with. ...Dr.Syntax
The Bear's Key Posted September 3, 2009 I think it could be the end of us if it decided we were in some way a threat to it. If it possessed robotic capabilities, there would be no limit to what sort of abilities it could make for itself. And it may wish to make other AI units to share in and help with its exploration and exploitation of the Universe. They would become immortal. Just like us, the robots would need to find and mine raw materials to build new robots/AIs. They'd have to construct worker robots, soldier robots, and engineering robots. While attempting to collect raw materials, they'd be vulnerable to attack from humans. Whatever the robots build is vulnerable to attack as well. There is no free lunch for anything, robotics included. A better strategy for AI would be to take over a small yet powerful government department, impersonate the personnel, and use that department's influence and command structure to eventually get the humans of that nation to unwittingly do much of its needed work, as preparation for expanding globally. Still, I really have doubts about whether they'd succeed 100%.
bascule Posted September 3, 2009 Just like us, the robots would need to find and mine raw materials to build new robots/AIs. They'd have to construct worker robots, soldier robots, and engineering robots. What if they took the form of utility fog?
insane_alien Posted September 3, 2009 There's no reason that it wouldn't stick to mass-producing robots that can do all tasks. Look at us, for example.
dr.syntax Posted September 3, 2009 AI robots would become so vastly superior to us in every way, with an unlimited ability for self-improvement and self-modification. Look back over just the last 20 years concerning computers. They would have none of our biological needs and could go to, say, Mars as a starting base to get whatever they needed to manufacture whatever they wanted. Interstellar travel would be no problem. They would not grow old; a few thousand years to go somewhere would not be a problem. Why would we ever want to create such beings and hope they don't destroy us? I can see nothing that would ever limit their intelligence and abilities to create whatever they chose to. ...ds
The Bear's Key Posted September 3, 2009 @dr.syntax also... What if they took the form of utility fog? I take it you mean either grey goo or unlimited self-replicating nanobiology that consumes all matter in the world? The problem with such easy assumptions is they usually leave out the mechanisms for how it's possible -- or impossible, due to other variables, related or unrelated. For example, if we'd created nanobots and a few went haywire, then the mechanism for stopping them already exists: the other nanobots we possess, which are probably in much larger quantity than the initial rogue ones. Also, the nanos require energy -- no different than our biological needs, really. It's what everything operates on, so they've no advantage over us. Eating and traveling both need work and an energy supply in order to happen. Communication does too, which brings up: how do they know not to eat one another? Now if they do communicate, it's possible to avoid that, but then another problem crops up -- how would one distant part of a swarm know which sections of Earth the others have already consumed? They're unlikely to survive intense conditions like magma and extreme weather. Any new method they develop for protection and coordination results in more software baggage, which means they're no longer very "nano" after a while. Especially if they learn how to build new models on the fly in response to unforeseen challenges -- that's more like supercomputer power we're talking about. In every nanobot? Plus, if mutations are the reason they exist, what's to stop another mutation from changing them back to normal, or even making them self-destructive? There's no reason that it wouldn't stick to mass-producing robots that can do all tasks. Look at us, for example. Heh, if we could do all tasks by ourselves, would schools and the internet exist? Or these very forums?
insane_alien Posted September 4, 2009 I was not talking about education. I was talking about the ability to perform a wide range of tasks.
The Bear's Key Posted September 4, 2009 I was not talking about education. I was talking about the ability to perform a wide range of tasks. Sure, but a limit exists. Somewhere along the way you're gonna have to divvy up tasks for efficiency.