Kedas Posted October 28, 2005 Isn't that like saying "I move using my legs, therefore moving is made out of legs"? No, it would be: "I move using my legs; if you damage your legs, moving is likewise impaired."
ecoli Posted October 28, 2005 Quote: Ryan Jones "They can, but only because they are amazingly simple compared to the human brain... a chip can't even come close to the complexity. I also have another point here: should it not be impossible for us to make a machine more intelligent than us? To do so would mean we would have to be more intelligent to create it, so it's a paradox when you try to create something more intelligent than you :S Anyone see what I mean there? Cheers, Ryan Jones" Besides the fact that we're talking about computers and machines being more intelligent than humans in general, not just more intelligent than the specific person who created the machine: why does the person who creates AI have to be the more intelligent one (well, obviously smart enough to make it, but...)? If you created a self-evolving program, one that could learn independently of what humans programmed into it, then that computer might not start out as intelligent as humans, but it could BECOME more intelligent. If AI will exist, this is how I think it would happen: the computers learn how to learn, then learn to create new knowledge, new machines that are even smarter than themselves... "I speak of none other than the computer that is to come after me," intoned Deep Thought, his voice regaining its accustomed declamatory tones. "A computer whose merest operational parameters I am not worthy to calculate — and yet I will design it for you. A computer which can calculate the Question to the Ultimate Answer, a computer of such infinite and subtle complexity that organic life itself shall form part of its operational matrix. And you yourselves shall take on new forms and go down into the computer to navigate its ten-million-year program! Yes! I shall design this computer for you. And I shall name it also unto you. And it shall be called ... The Earth."
Douglas Posted October 28, 2005 Quote: 5614 "you can't create something more clever than you, if it is based on your brain!!!!!" "But that limitation is easily overcome if we can program something that can learn and apply knowledge like we can." But you'd have to program it so it knows what to learn, and you don't know what it should learn.
ecoli Posted October 28, 2005 Quote: Douglas "But you'd have to program it so it knows what to learn, and you don't know what it should learn." Why would you? You need a program that knows how to learn, and can find the important information... we can't know what a being more intelligent than ourselves should learn or will learn.
bascule Posted October 28, 2005 Any of you claiming that it's impossible for us to design a consciousness better than our own have some rather ill-founded ideas about how consciousness actually operates. I suggest you pick up a copy of Daniel Dennett's book Consciousness Explained, in which he details an "Empirical Theory of Mind" based upon countless scientific research experiments. We already have a genetic blueprint for consciousness sitting inside computers, in the form of our own genome. When we produce computer models capable of growing lifeforms from a digital copy of their genes, we will be able to produce a model of a human being by "growing" one inside a computer. Once we have this, we can make the most extensively detailed analysis of the operation of the human brain ever accomplished, because we'll be able to produce complete snapshots of the brain in action. From that we can reduce consciousness to a mathematical model of its operation. Once we have that model, we can look at fundamental design problems and bottlenecks. We have intelligence on our side; natural selection did not (sorry, IDiots). There are certainly major problems with the way our consciousness operates, and many of these things are second nature to computers (i.e. math is hard for us, memorizing things is hard for us), so when we have a computer running a mathematical model of consciousness itself, augmenting its design to accommodate the niceties of modern computers should be a relatively easy task, once the mathematical model (or even just the small portion necessary to interface with it) has been understood. As it stands, even without the luxury of growing a human brain inside a computer, we have mathematical models of how certain parts of the brain operate. A mathematical model of the hippocampus, the center of short-term memory, has already been constructed.
Bio-Hazard Posted November 2, 2005 I can't see it happening anytime soon, if at all. We don't even know half the secrets of the brain, and I don't think we'll be able to create truly intelligent systems until we do. Hmm... I think it will be done eventually, but I don't like the idea of A.I. I mean, the reason humans rule the world is their physical adaptability and their intelligence, not to forget communication. Give robots that chance and they just may take over the world if they see humans as lesser... but they have no psychobiological drive, so I don't see them doing that unless it's programmed into them. I still like my robot badger idea: gastrobots running off mushrooms and using snakes to attack people.
cosine Posted November 3, 2005 I suggest that anyone interested in A.I. check out the work of Pei Wang (and other works he may reference). He is doing work on Non-Axiomatic Reasoning Systems. He talks a lot about human logic vs. mathematical logic, among many other relevant topics. http://www.cogsci.indiana.edu/farg/peiwang/papers.html
Nerdboy5000 Posted November 4, 2005 It is impossible, given that a computer's intelligence is based on the person who built the computer. Unless it's like 2001: A Space Odyssey, where a computer can learn.
calbiterol Posted November 4, 2005 Excuse me if this sounds mean, but hence the phrase "artificial intelligence." Any and all intelligent beings have an inherent capability to learn. As such, it should be a given that any true AI construct would also be able to learn - thereby surpassing the intelligence of its human creator(s).
ecoli Posted November 4, 2005 Quote: calbiterol "Excuse me if this sounds mean, but hence the phrase "artificial intelligence." Any and all intelligent beings have an inherent capability to learn. As such, it should be a given that any true AI construct would also be able to learn - thereby surpassing the intelligence of its human creator(s)." I think it would have to learn in order to be considered intelligent... otherwise it's merely a calculator. But how would one define learning? Perhaps a machine can be built to research information by itself and integrate knowledge. But that could hardly be considered learning if the machine doesn't apply that knowledge to gain/create new knowledge, n'est-ce pas?
cosine Posted November 6, 2005 Dr. Pei Wang at Indiana University is currently doing a lot of interesting research on artificial intelligence, centered on NARS, Non-Axiomatic Reasoning Systems. Conventional computers use pure axiomatic systems (PAS), where all the knowledge needed to answer any question is assumed to be present; if a PAS can't answer a question, it is the questioner's fault for not asking a good one. Humans use something more like NARS, where knowledge can be insufficient for the problem that has to be solved. PAS is purely deductive; NARS is more inductive. I recommend that you check out Pei Wang's paper: Cognitive Logic vs. Mathematical Logic. Interestingly, in the course of developing NARS, Pei Wang has built a computer demonstration of it, which is also on his website: Pei Wang's Publications And here is a thread currently on the General Mathematics Forum where I talked a little more about it: http://www.scienceforums.net/forums/showthread.php?p=223366#post223366
herpguy Posted December 9, 2005 I think it depends on how we look at intelligence. Creativity will never come in artificial intelligence, but I think that maybe there is a way robots can learn what people and other robots teach them. I saw a program on the Discovery Channel that showed robotic cars telling each other where to go. If we can master robotic communication, then it may be possible for artificial intelligence to become better at logical things than us.
Mart Posted December 9, 2005 Originally Posted by Kedas: "I move using my legs; if you damage your legs, moving is likewise impaired"? No, it would be: "I move using my legs; if you damage your legs, moving with legs is likewise impaired."
Mart Posted December 9, 2005 Originally Posted by bascule: "Consciousness is made out of neurons." If this is taken literally, then consciousness should be physically detectable.
bascule Posted December 9, 2005 "creativity will never come in artificial intelligence" Why? "If this is taken literally then consciousness should be physically detectable." Are you saying it isn't? I think we just don't have the technology yet.
brad89 Posted December 11, 2005 Creativity is only judged by a human; there is no mathematical calculation involved. Who is to say that if a machine developed a theory based on credible evidence, it isn't creative?
Cognition Posted December 21, 2005 Quote: brad89 "Creativity is only judged by a human; there is no mathematical calculation involved. Who is to say that if a machine developed a theory based on credible evidence, it isn't creative?" There is a very good book about creativity by Margaret Boden, an English AI specialist and philosopher. It is a great book, and it shows that creative behavior is not as mystical as most people would like to believe. I think computers can be creative and computers can be artificially intelligent. I know for a fact that I have created artificial intelligence, and it is really not that hard. But of course it is simple AI: a mechanism that learns to generalize from examples, and when it is presented with something "new" it is able to categorize it almost perfectly. That is a definite form of intelligence, and since it is not done by the human brain, it is AI. But I do not think that computers based on the von Neumann architecture will be able to be as intelligent as human beings, even when they are programmed with all the knowledge in the world or given immense processing power. I have another idea as well. In the beginning of this thread it was argued that there will never be machines more intelligent than us, because everything will be based on our brains... But that is exactly the way to make a machine more intelligent than we are: we know that humans have a limited short-term memory, and that thinking (creatively) is usually making associations and seeing similarities between two things. If we understood what it is that makes our short-term memory so limited, and understood much more about the processes of reasoning, analogy and thinking, then we would definitely be able to create extremely intelligent artificial systems.
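The "mechanism that learns to generalize from examples" Cognition describes can be made concrete with a toy program. This is a hypothetical illustration, not Cognition's actual system: a nearest-centroid classifier that averages labeled example vectors and assigns any new input to whichever average it sits closest to.

```python
# Minimal sketch of a learner that generalizes from labeled examples
# and categorizes something "new" (illustrative data, invented labels).

def train(examples):
    """Compute one centroid (mean vector) per label from (vector, label) pairs."""
    sums, counts = {}, {}
    for vec, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(vec)
            counts[label] = 0
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(centroids, vec):
    """Assign vec to the label whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(centroids, key=lambda label: dist(centroids[label]))

# "Learn" from four examples, then categorize an unseen input:
model = train([([0, 0], "small"), ([1, 1], "small"),
               ([8, 9], "big"),   ([9, 8], "big")])
print(classify(model, [7, 7]))  # → big
```

The point of the sketch is only that generalization here is nothing mystical: it is averaging plus a distance comparison.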
raazi Posted January 13, 2006 I said never because I briefly studied AI in my undergrad CS course, and here is my reason: AI development heavily relies on mathematical formulas and programming techniques that we, the humans, discover and create. For that reason, we are setting mathematical and programming limits according to our own intelligence; therefore the AI we create may not exceed our intelligence.
bascule Posted January 13, 2006 Quote: raazi "AI development heavily relies on mathematical formulas and programming techniques that we, the humans, discover and create. For that reason, we are setting mathematical and programming limits according to our own intelligence; therefore the AI we create may not exceed our intelligence." That's horribly flawed reasoning. If nothing else, we can create a molecular model on a computer of human embryological development from egg to adulthood. We'd then be able to observe everything happening in the brain, because it's all just a computer simulation. At that point, you have artificial intelligence. There's no inherent mathematical limitation on a conscious entity understanding how consciousness works. Like any other problem, it can be broken down into understandable bits and pieces; then you just have to know how everything integrates.
ecoli Posted January 13, 2006 Quote: raazi "AI development heavily relies on mathematical formulas and programming techniques that we, the humans, discover and create. For that reason, we are setting mathematical and programming limits according to our own intelligence; therefore the AI we create may not exceed our intelligence." You teach a computer how to learn, and it doesn't need us to program it anymore.
[Tycho?] Posted February 6, 2006 Quote: raazi "AI development heavily relies on mathematical formulas and programming techniques that we, the humans, discover and create. For that reason, we are setting mathematical and programming limits according to our own intelligence; therefore the AI we create may not exceed our intelligence." How can such poor reasoning come after studying AI? It's fairly obvious that intelligence is not something we will just program in; simple stimulus-and-response programs can simulate intelligence, but that is quite different. What we need to figure out is how to program things that can learn. Currently we have no idea how to do this. But (comparatively) simple animals like rats can learn things, so I'd say AI will come about by following in nature's footsteps: examining how animal intelligence evolved, and then writing programs that can mimic that process, i.e. that of mutation and selection. It may take quite a while, but we have tons of examples from the natural world; in time we'll be able to reverse engineer the learning process, if we can't figure it out on our own before that.
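The mutation-and-selection process [Tycho?] points to can be illustrated with a toy program. This is a hedged sketch, not a claim about how real intelligence would be engineered: it evolves random bitstrings toward an arbitrary target using only selection of the fittest and random bit flips, with the target and parameters invented for the example.

```python
import random

# Toy "mutation and selection": evolve random bitstrings toward a target.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    """Count bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        # Selection: keep the fittest half; refill with mutated copies of survivors.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

random.seed(0)
best = evolve()
print(fitness(best), "of", len(TARGET), "bits correct")
```

Nothing here "knows" the target in advance except the fitness function, which plays the role of the environment; that is the sense in which evolution searches without a designer.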
theorein Posted February 13, 2006 Will science ever create A.I. more intelligent than ourselves? When I see a robot walking like a human being, I don't know how to react. Why would you spend millions of dollars building a robot that walks like a human when this is NOT the best method of mobility? Why build or create AI like a human when we know that humans have a lot of shortcomings? We will never accomplish anything when we try to imitate something that is imperfect. We have supercomputers that can do immense calculations in seconds, but this is not AI. To be able to create AI we must first come to a conclusion about what intelligence truly is. Take, for instance, our PC. Even though it is a sophisticated machine capable of multitasking, it is actually a dumb machine. It was designed that way. Leave your 2-year-old unattended for 2 minutes with your PC and see what happens. The PC was made for someone who is well behaved. The operating system itself needs lots of maintenance. Nowadays, making spare parts is itself a multi-million-dollar industry. We build things that fail. We have no idea what intelligence is. How can we create something we can't define? theorein
NLN Posted March 17, 2007 There are two types of AI, and they are not generally distinguished in the media: soft AI and hard AI. Soft AI includes most of the AI you see out there: systems designed to mimic certain kinds of human behavior. Robots like Asimo fall into this category, as do robot vacuum cleaners, internet search engines, speech synthesizers and voice recognition systems, computer vision systems, neural nets and data miners. Hard AI research, on the other hand, strives to actually create systems that can think: first as an insect or an animal might, and later as a human does. Some of these researchers are trying to engineer intelligence from scratch, while others are attempting to understand and model the human brain and reproduce it artificially. For these individuals, building a sentient machine means producing a system that experiences the world as an infant human would, sensing and interacting with its environment, learning from trial and error, and "growing up" over time. Since this goal is far more complex and will take longer to achieve, it is not as well funded as soft AI, and only the smartest and truly dedicated individuals are working on it. I have made it my goal to seek out the people who are working on hard AI and learn as much as I can about what they are doing. For those of you who are truly serious about the subject, I recommend the following articles: Machines Like Us Embodied Cognition Saving Machines From Themselves: The Ethics of Deep Self-Modification
bascule Posted March 17, 2007 Quote: NLN "There are two types of AI, and they are not generally distinguished in the media: soft AI and hard AI." Soft AI and hard AI? I don't know where you got those terms, but they aren't in general usage. The correct terms are strong AI (SAI) and weak AI. Strong AI generally refers to self-modeling systems. There exists a smaller dichotomy: general intelligence versus narrow intelligence. General intelligence systems can be applied to all problems, whereas narrow intelligence systems are fitted to suit a particular problem. Examples would be Jeff Hawkins' Hierarchical Temporal Memory (HTM) versus a Bayesian spam filter; the latter does simple pattern matching using a simplistic learning model. Some people have also suggested the term "synthetic intelligence" (SI).
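The Bayesian spam filter bascule cites as the canonical narrow-intelligence example fits in a few lines. This is an illustrative naive Bayes classifier with invented training data, not any particular filter's implementation:

```python
import math
from collections import Counter

# Sketch of a naive Bayes text classifier, the technique behind Bayesian
# spam filters. Training data is made up for illustration.

def train(labeled_docs):
    """labeled_docs: list of (text, label) pairs. Returns word counts and priors."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in labeled_docs:
        word_counts[label].update(text.lower().split())
        label_counts[label] += 1
    return word_counts, label_counts

def classify(model, text):
    """Pick the label with the highest log-posterior, with add-one smoothing."""
    word_counts, label_counts = model
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        score = math.log(label_counts[label] / total_docs)  # prior
        total_words = sum(counts.values())
        for word in text.lower().split():
            # Smoothing so unseen words don't zero out the whole score.
            score += math.log((counts[word] + 1) / (total_words + len(vocab) + 1))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train([
    ("win free money now", "spam"),
    ("cheap pills free offer", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
])
print(classify(model, "free money offer"))  # → spam
```

This is exactly the "simple pattern matching using a simplistic learning model" described above: word frequencies plus Bayes' rule, with no model of meaning at all.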