sergeidave Posted April 8, 2009 (edited) I've always been perplexed by the intricate difficulties of trying to explain scientific concepts. This is one I've been chewing on for some time, and it would be great to hear your opinions! Most of us are very familiar with all the interesting debates around artificial intelligence, strong AI, self-awareness, consciousness, feelings, etc. And there is a lot of talk along the lines of "we will never make a learning, self-aware computer program with feelings; it's simply not possible." So I would like to venture this question: if Nature (evolution, nothingness, the universe, randomness, whatever you want to call it), which is not supposed to be a sapient entity in itself, was able to "come up with us", human beings, intelligent, with feelings and self-awareness, how come we, the most intelligent known entities in the universe, cannot or will not be able to at least match ourselves artificially? Is it the approach/paradigm (software, procedural, object-oriented, etc.) we are using that's wrong? I'm positive I'm not the first to come up with this question; I'm nobody. So, what is the current research on this? And one last question, for good measure: is there a way to speculate how long it would take us to accomplish such a feat, when it took nature some 4.5 billion years to "create" something as amazing as us, human beings? Thanks, guys! I look forward to your comments! Edited April 8, 2009 by sergeidave correcting misspellings.
BAC Posted April 8, 2009 It would take a while, perhaps thousands of years, but without a doubt we can create a computer as advanced as ourselves. We are complex beings, so this will take us a while, mainly because we are not yet completely sure how the brain works as a whole...
zule Posted April 8, 2009 I don't think we will ever get to create a being like us. To create the brain of a being like us, we only have our own brain, and I believe that to copy something, we need a much more complicated tool than the thing we copy. So I believe that, if the world still exists for a very long time, perhaps evolved humans with a much more powerful brain than ours could be able to create a being like us, but not a being like themselves.
sergeidave (Author) Posted April 8, 2009 I don't think we will ever get to create a being like us. To create the brain of a being like us, we only have our own brain, and I believe that to copy something, we need a much more complicated tool than the thing we copy. So I believe that, if the world still exists for a very long time, perhaps evolved humans with a much more powerful brain than ours could be able to create a being like us, but not a being like themselves. That's the problem I find. If human beings evolved without any intelligent intervention, why can't we create something as complex and marvelous, given that we have the materials and the intelligence? I mean, it's as simple as this: Nature is not an intelligent being, and yet here we are, evolved by the laws that exist in Nature; and yet we, entities that DO possess intelligence, will not be able to create something as complex as ourselves, or more so?
Mokele Posted April 8, 2009 You assume that just because we haven't yet, we can't. Clearly this is baseless and likely wrong.
sergeidave (Author) Posted April 8, 2009 You assume that just because we haven't yet, we can't. Clearly this is baseless and likely wrong. I assume you are replying to zule, as I never said we can't; rather, I'm asking "why can't we?" Of course, there is more to it than that.
Mokele Posted April 8, 2009 It's fairly simple: we simply don't have the technology yet.
Sisyphus Posted April 8, 2009 Depending on who you ask, artificial intelligence as complex as a human brain is pretty much right around the corner. So yeah, we're doing it, and we're taking a whole lot less time than unguided biological evolution. And yes, the reason we're confident it's possible is because we exist.
Paralith Posted April 8, 2009 I agree with Sisyphus, and my boyfriend is one of those who thinks it's right around the corner. Technology develops exponentially, and a lot of things the average member of the public thinks are sci-fi are being worked on today and may be generally available quite soon. He just told me the other day that in a year or two a research group plans to put together an artificial neural network equal in neuron number to a brain 1/10 the size of a human's. Just slamming a bunch of neurons together isn't the same thing as a brain, of course, but the technology to replicate the processing power of a brain is fast improving. There is in fact a large body of computer scientists who think the Singularity, the point where machines/computers are intelligent enough to make better versions of themselves, is only 40-50 years away. My boyfriend is a software developer, and he firmly believes his job will be obsolete in 40 years because computers will program themselves. Then progress will move even faster, because artificial intelligences will be much faster and more efficient at designing themselves than humans ever could be.
A Childs Mind Posted April 8, 2009 Because we're content with what we have. We are still so young and have no idea what is in store for us in the future.
Kaeroll Posted April 10, 2009 I have a feeling that if we do ever create a true AI, it will be through 'artificial evolution'. I'm not sure of the correct term for this, but I simply mean applying an algorithm resembling natural selection to a program or machine. It's already done in many fields with great success. The downside is that we may not fully understand the product's workings once it arrives.
GDG Posted April 10, 2009 I have a feeling that if we do ever create a true AI, it will be through 'artificial evolution'. I'm not sure of the correct term for this, but I simply mean applying an algorithm resembling natural selection to a program or machine. It's already done in many fields with great success. The downside is that we may not fully understand the product's workings once it arrives. I think you're thinking of evolutionary computation and genetic algorithms. The second concept there is the "Singularity", first posited by Vernor Vinge. The idea is that once we (a) succeed in creating artificial intelligence, and (b) manage to make the AI smarter than humans, the AIs will rapidly make themselves more and more intelligent, at an exponentially increasing rate, until they've left us far behind. The state of the world afterwards is literally unimaginable. Despite being unimaginable, it has become a popular trope for science fiction authors. Ray Kurzweil is probably the author best known for writing seriously about the Singularity.
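To make the idea concrete, here is a minimal sketch of a genetic algorithm in Python. Everything in it (the bit-string genome, the "all ones" fitness target, the population size and mutation rate) is invented purely for illustration; real evolutionary-computation systems are far more elaborate.

```python
import random

# Toy genetic algorithm: evolve a bit string toward all ones.
# All parameters below are arbitrary, illustrative choices.
GENOME_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 100

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # The "environment": genomes with more ones are better adapted.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Selection: the fitter half of the population gets to reproduce.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Reproduction with variation (crossover + mutation).
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

The point is that nobody designs the winning bit string; the programmer only sets the fitness function (the "environment"), and the selection loop does the rest.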
Kaeroll Posted April 10, 2009 Thanks for clarifying that term, GDG. That's precisely what I was referring to.
cameron marical Posted April 11, 2009 I think we need to learn more about the brain and the thought process itself before AI, but once we truly understand it, I think we'll be able to make an intelligence equal to ours, if not greater.
lucaspa Posted April 15, 2009 If Nature (evolution, nothingness, the universe, randomness, whatever you want to call it), which is not supposed to be a sapient entity in itself, was able to "come up with us", human beings, intelligent, with feelings and self-awareness, how come we, the most intelligent known entities in the universe, cannot or will not be able to at least match ourselves artificially? Because we are not as smart as natural selection. We turn to natural selection when the design problem is too tough for us; do some searching under "genetic algorithms". http://www.genetic-programming.com However, some groups are now trying to evolve AI rather than manufacture it: http://www.discover.com/aug_03/gthere.html?article=feattech.html I think this approach has a much better chance of success than the direct-manufacture approach.
Sisyphus Posted April 15, 2009 I assume a designed algorithm would be quite different from natural selection, though. For one thing, the "goal" would be different. With natural selection, the "goal" is always passing on genes to successful offspring as many times as possible. For an evolved AI, we're going to have to figure out different criteria (I assume), which might make creating something akin to a human intelligence tricky. More trivially, there would also be much less time between generations, and less randomness (in nature, the individual can only have a better chance of survival based on genetics, but it's still just a chance).
lucaspa Posted April 15, 2009 I assume a designed algorithm would be quite different from natural selection, though. For one thing, the "goal" would be different. With natural selection, the "goal" is always passing on genes to successful offspring as many times as possible. The "goal" of natural selection is not "passing on genes to successful offspring". That's a result of the goal. The "goal" of natural selection is to preserve the designs that do best in the struggle for existence in that particular environment. So "designed algorithms" are recreations of natural selection: what they do is set the environment, and then natural selection finds designs that do well in that environment. For AI research, the "environment" would be cognitive problem solving and social skills (just as it was for hominids during evolution). Together, those might produce the ability to "think" in a machine. More trivially, there would also be much less time between generations, and less randomness (in nature, the individual can only have a better chance of survival based on genetics, but it's still just a chance). Randomness would be the same or a bit larger in genetic algorithms. After all, there is only so much change you can make in a protein and still have it function; here, the ability to change circuits would be greater. Yes, an individual might be killed by something unrelated to its new design (adaptation). This is called "nonselective mortality". But in the long run, in nature, that is moot: populations still evolve even if 99% of mortality is nonselective. "Thus much, perhaps most, of the mortality suffered by a population may be random with respect to this locus or character [hoofs in horses, for example]. These nonselective deaths may be contrasted with selective deaths, those that contribute to the difference in fitness between genotypes. Even if most mortality is nonselective, the selective deaths that do occur can be a potent source of natural selection. For instance, genetic differences in swimming speed in a small planktonic crustacean might well not affect the likelihood of being eaten by baleen whales, which might be the major source of mortality. But if swimming speed affects escape from another predator species, even one that accounts for only 1 percent of the deaths, there will be an average difference in fitness, and swimming speed may evolve by natural selection." Futuyma, Evolutionary Biology, p. 368.
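For anyone who wants to see that principle numerically, here is a toy simulation of the Futuyma example. Every number in it is invented for illustration, and the biology is cartoonishly simplified: most deaths are random with respect to the trait, yet the trait still shifts because a small fraction of deaths are selective.

```python
import random

# Toy illustration: even when most mortality is random with respect
# to a trait ("swimming speed"), a small selective component still
# shifts the population mean. All numbers are invented.
POP_SIZE = 1000
GENERATIONS = 300

# Each individual is just a heritable "swimming speed" value.
population = [random.gauss(1.0, 0.1) for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    survivors = []
    for speed in population:
        # Nonselective mortality: the whales don't care how fast you swim.
        if random.random() < 0.5:
            continue
        # Selective mortality: slower swimmers are slightly more likely
        # to be caught by the minor predator.
        if random.random() < max(0.0, 0.02 * (2.0 - speed)):
            continue
        survivors.append(speed)
    # Survivors reproduce; offspring inherit speed with a little noise.
    population = [random.gauss(random.choice(survivors), 0.05)
                  for _ in range(POP_SIZE)]

mean_speed = sum(population) / POP_SIZE
print("mean speed after", GENERATIONS, "generations:", round(mean_speed, 3))
```

Run it a few times: the mean speed creeps upward even though the selective predator accounts for only a few percent of all deaths.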
Sisyphus Posted April 15, 2009 The "goal" of natural selection is not "passing on genes to successful offspring". That's a result of the goal. The "goal" of natural selection is to preserve the designs that do best in the struggle for existence in that particular environment. Alright, but "doing best in the struggle for existence" is necessarily defined by having the most and/or most successful offspring, right? It seems like you could flip it around and call that the goal, in which case preserving the best designs would be the side effect. Which I guess is one reason why it's pointless to talk about "goals" at all, since there aren't any. But there would be goals with an artificial process, so we would have to, as you say, design an environment with that goal in mind, and pretty much hope for the best. I'm sure it would yield something that can solve problems and interact convincingly with humans, but it seems like it would almost certainly end up doing those things in ways quite different from how human brains do. Or am I just totally off base? EDIT: How about modeling an actual human brain as closely as possible, and applying evolutionary algorithms to that model? (Is that what we've been talking about all along?)
lucaspa Posted April 15, 2009 Alright, but "doing best in the struggle for existence" is necessarily defined by having the most and/or most successful offspring, right? That is going to be the result, not the definition. Having more offspring, and thus changing the allele frequency in the next generation, gives us an objective way to determine who was doing better. But the goal of natural selection is to adapt the population to a particular environment or, IOW, find the best designs available for that environment. It seems like you could flip it around and call that the goal, in which case preserving the best designs would be the side effect. Which I guess is one reason why it's pointless to talk about "goals" at all, since there aren't any. Having more offspring itself is not a "goal". If it were, every species would produce a huge number of offspring, but they don't. Salmon and sturgeon produce massive numbers of offspring, tens of thousands per breeding pair, but only because that is the design that allows survival in the struggle for existence. Other species produce few offspring but give them a lot of parental care, because that design does better in the struggle for existence in their environment. The "goal" of natural selection is to produce the best design available for that particular environment, which is exactly the "goal" in artificial selection and genetic algorithms. The only difference is that humans set the environment. I'm sure it would yield something that can solve problems and interact convincingly with humans, but it seems like it would almost certainly end up doing those things in ways quite different from how human brains do. Or am I just totally off base? Probably. After all, there are usually several designs that will give the same overall function in nature; it's called convergent evolution. BUT the question was about producing artificial intelligence comparable to human intelligence, wasn't it? It wasn't about producing intelligence in exactly the same way that human brains do. Although, if the circuits were in a neural network, the way the machine produced intelligence would be very close to how the human brain does it. EDIT: How about modeling an actual human brain as closely as possible, and applying evolutionary algorithms to that model? (Is that what we've been talking about all along?) That would be 1) unnecessarily restrictive for a non-biological machine and 2) undoable at present, since we don't know how the human brain does things. There are still several different theories out there on how the brain does thinking; a SciAm article in the last two years discussed two of them. No, we have not been talking about modeling the human brain. The biological connections are not exactly like the circuits on chips, and restricting AI research this way would be counterproductive. Better to set the environment and let natural selection find a way for the machine to get there instead of limiting it to just one path.
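As a concrete illustration of "set the environment and let selection find a way", here is a sketch that evolves the weights of a tiny neural network until it computes XOR. The network shape, population size, and mutation scale are all arbitrary choices made up for this example; serious neuroevolution work typically evolves the circuit topology as well.

```python
import math
import random

# Sketch of neuroevolution: evolve the 9 weights of a fixed
# 2-input, 2-hidden, 1-output network until it solves XOR.
# All parameters are invented for illustration.
XOR_CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 6 for the hidden layer (incl. biases) + 3 for the output

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x1, x2):
    h1 = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
    h2 = sigmoid(w[3] * x1 + w[4] * x2 + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # The "environment": networks are scored only on how closely their
    # outputs match XOR; how they wire it up is left to evolution.
    return -sum((forward(w, *inputs) - target) ** 2
                for inputs, target in XOR_CASES)

population = [[random.uniform(-2, 2) for _ in range(N_WEIGHTS)]
              for _ in range(100)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]           # selection: keep the fittest
    offspring = [[w + random.gauss(0, 0.2) for w in random.choice(parents)]
                 for _ in range(80)]    # reproduction with mutation
    population = parents + offspring    # elitism: parents survive unchanged

best = max(population, key=fitness)
for inputs, target in XOR_CASES:
    print(inputs, "->", round(forward(best, *inputs), 3), "target", target)
```

Nothing in the code says how to wire XOR; the loop just keeps whatever designs do best in the environment the fitness function defines, which is exactly the point: the programmer sets the environment, and selection finds the path.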