Reaper Posted June 7, 2009 I'm just going to ask a couple of simple questions here: what, exactly, would it mean for a future computer A.I. to be better than its human counterparts? What would such a system be like? There is much talk about creating such systems (after we figure out how to make them more human-like), and certainly many sci-fi speculations as to what might happen, but they don't really address the epistemological issues directly. Say we did create a system that did indeed surpass us in intellectual and cognitive abilities. What would that be like, and how would we know?
Cap'n Refsmmat Posted June 7, 2009 The way I've always imagined it is a system either with a vast database of knowledge or the ability to rapidly research information, and with the power to "comprehend" that knowledge and use it to form conclusions and make decisions far more rapidly than a human ever could. So, for example, a superhuman computer could have a great understanding of mathematics and be capable of solving mathematical problems faster than a human can (already possible in certain cases). The key difference between this and, say, Mathematica, is that I would want an artificially intelligent system to be able to teach itself how to solve new problems and develop new abilities -- and then solve them better and faster than a human could.
Xittenn Posted June 7, 2009 I've always wanted to be more capable of feeling, both in terms of sensory perception and in terms of emotional sensation. I've given much thought to the design of the next generation, and most of my thoughts have been focused on developing these aspects of the psyche. ...not to contradict, only to contrast... We would know we have achieved their creation when they make us cry like a little child who has dropped their lolly. And even more so when they wipe our tears away and give us a big warm hug...
cameron marical Posted June 7, 2009 I would want an artificially intelligent system to be able to teach itself how to solve new problems and develop new abilities -- and then solve them better and faster than a human could I concur. Though I am at a loss as to how to make something so.
bascule Posted June 7, 2009 I'm just going to ask a couple of simple questions here: what, exactly, would it mean for a future computer A.I. to be better than its human counterparts? You can read lots of speculation on that subject here: http://en.wikipedia.org/wiki/The_Singularity
Mafia Posted June 8, 2009 Look at this article. I read a few more on New Scientist and a few other pages a few months ago that were better, but can't find the links to them. Sorry. :/
Reaper (Author) Posted June 8, 2009 (edited) You can read lots of speculation on that subject here: http://en.wikipedia.org/wiki/The_Singularity That doesn't really answer my question, though. And I tend not to trust predictions revolving around the so-called technological singularity, because the whole thing seems too much like a non-sequitur argument in general. The way I've always imagined it is a system either with a vast database of knowledge or the ability to rapidly research information, and with the power to "comprehend" that knowledge and use it to form conclusions and make decisions far more rapidly than a human ever could. That's more along the lines of what I was thinking. The problems that revolve around that one are, of course, how we actually get a machine to "comprehend" something, and more importantly how we can actually tell that it's understanding what it's doing rather than just number crunching. But if the system is more intelligent, I wonder how it would "comprehend" something. Would a smarter machine be able to comprehend something faster, or understand subjects more deeply? Also, most people will claim that someone is smart when they notice that they happen to be either very creative or brilliant, and as such I would suppose that an intelligent machine would exhibit both those traits... So, for example, a superhuman computer could have a great understanding of mathematics and be capable of solving mathematical problems faster than a human can (already possible in certain cases). The key difference between this and, say, Mathematica, is that I would want an artificially intelligent system to be able to teach itself how to solve new problems and develop new abilities -- and then solve them better and faster than a human could. I guess it really depends on what type of problems, though.
Computers can already solve problems that are impossible for humans to calculate, for example weather and climate models. But I don't know of any computers that can prove theorems, or come up with conjectures or theories. But yes, I do think this is a good point; any machine that has greater intelligence should, at minimum, be capable of doing these things at a higher level than that of humans. I will say this, though: I'm not so certain that speed is as important a factor. Edited June 8, 2009 by Reaper
Cap'n Refsmmat Posted June 8, 2009 http://en.wikipedia.org/wiki/Automated_theorem_proving What's the difference between a computer that gets intelligence by "number crunching" and a brain that gets intelligence by chemistry? In the end, I think that as long as it acts intelligent, it's close enough.
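For anyone curious what's behind that link: automated theorem proving can be illustrated with a toy. The sketch below is a minimal propositional resolution prover, nothing like a production system such as those described in the article, and all names in it are my own invention. Literals are signed integers (2 means "B is true", -2 means "B is false"); to check whether a knowledge base entails a literal, we add its negation and search for a contradiction (the empty clause).

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses (frozensets of signed-int literals)."""
    resolvents = []
    for lit in c1:
        if -lit in c2:
            # Cancel the complementary pair, merge what's left of both clauses
            resolvents.append((c1 - {lit}) | (c2 - {-lit}))
    return resolvents

def entails(kb, goal):
    """Refutation proof: does the clause set `kb` entail the literal `goal`?"""
    clauses = {frozenset(c) for c in kb} | {frozenset([-goal])}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:              # empty clause: contradiction found
                    return True
                new.add(frozenset(r))
        if new <= clauses:             # nothing new derivable: not entailed
            return False
        clauses |= new
```

For example, with the knowledge base "A implies B" and "A" (clauses `[-1, 2]` and `[1]`), `entails(kb, 2)` derives B, while `entails(kb, 3)` fails for an unrelated literal. Real provers work in full first-order logic with far cleverer search, but the "number crunching" core is the same mechanical rule applied exhaustively.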
Dr. Posted June 24, 2009 It is my opinion that a machine will never be able to perform on the same level as a human being without a human being providing it with some type of instruction. Even if it is possible, artificial intelligence is not needed anyway; what is it that you need a machine to accomplish with its own "mind" that you cannot accomplish with your own mind and some initiative? I think this is one of the fundamental problems with the world today: laziness. Everyone wants artificial intelligence to do things for them, to make life easier, but I tell you that hard work has made man prosper for thousands of years, and hard work is what the world will return to one day, when the sciences finally go beyond their intended objectives.
Mokele Posted June 24, 2009 Even if it is possible, artificial intelligence is not needed anyway; what is it that you need a machine to accomplish with its own "mind" that you cannot accomplish with your own mind and some initiative? You need an AI to navigate a plane through a sustained 10-g turn. The human brain can't do it, mostly because in order to work, the human brain needs blood, which is all in the feet in such a turn. AI would be useful for exactly the same reasons robots are useful: they can do tasks too dangerous for humans, can do tasks more cheaply, can function without rest, and can function in environments beyond human physiological tolerances.
Dr. Posted June 24, 2009 Yes, but this still does not answer the question: why do we NEED to navigate a plane through a 10-g turn? The answer is, we don't. The only things we really need are food, oxygen, and water; everything else is simply circumstantial. Man has created for himself problems that are beyond his own limitations, and rather than accepting those limitations, attempts to overcome them with machines.
Mokele Posted June 24, 2009 Yes, but this still does not answer the question: why do we NEED to navigate a plane through a 10-g turn? The answer is, we don't. We do, in order to kick the ass of the other country whose planes can. Man has created for himself problems that are beyond his own limitations, and rather than accepting those limitations, attempts to overcome them with machines. Yes, we have, and it's the best damn thing we've ever done. Go out into a crowded place. Look at all the people. Now, imagine 95% of them simply drop dead. That's what life was like prior to us "not accepting our limitations" and inventing vaccines and antibiotics. It's about making life better. If you don't want it, unplug your computer and go live in a cave somewhere.
Dr. Posted June 24, 2009 (edited) Haha, a very ignorant response, my friend; people have reverted back after using technology many times throughout history without experiencing anything like 95% of the population dying, or anything even remotely close to that morbid conclusion. In fact, the Meiji Restoration of Japan, one of the happiest and healthiest times, was the result of the reversion from the gun back to "primitive" weapons such as the yari and the sword. Technology has ended more lives than it has saved; it is an undeniable truth. As a side note, have you read the news lately in regards to North Korea threatening to declare war on the United States? Edited June 24, 2009 by Dr.
Mokele Posted June 24, 2009 Haha, a very ignorant response, my friend; people have reverted back after using technology many times throughout history without experiencing 95% of the population dying, or anything even remotely close to that morbid conclusion. Technology has ended more lives than it has saved; it is an undeniable truth. Go to Somalia and tell me that. Even that example is skewed, because we've eradicated previously horrific diseases like smallpox. You want to make these claims? Present evidence, hard numbers. For mine, look into any medical textbook ever written.
Dr. Posted June 24, 2009 I will present you with plenty of evidence when I return from my trip, my friend. I am a history professor; these things I know, just as you know your sciences. I am interested in your "profession" (I do not know if it is your profession, but I am assuming), which is why I am here, although your arguments thus far have been less than stellar.
Mokele Posted June 24, 2009 Bullshit. Show me your faculty page, and post your faculty email address, so we can directly email that address to confirm your identity. If you're a history prof, how come you've never heard of the Black Plague, tuberculosis (aka 'consumption'), smallpox, malaria, typhus, yellow fever, or influenza? If you've studied history and never once run into diseases, your education is worthless.
Dr. Posted June 24, 2009 (edited) The atomic bomb dropped by the Enola Gay killed more people than the entire Black Plague, and as to your technology, what of the firearm? And I'm sure you can find lots of information regarding the fire-bombings of Dresden and other plagues of technology. Edited June 24, 2009 by Dr.
Mokele Posted June 25, 2009
Hiroshima: 140,000
Nagasaki: 80,000
Firebombing of Dresden: 250,000
World War 2 (civilians included): 60,000,000
Malaria: 1,000,000 per year
Black Death: estimates range from 75,000,000 to 200,000,000 deaths
Influenza: 36,000 in the US alone per year
Tuberculosis: 1,577,000 per year
So, according to the numbers, you'd have to drop about 7 nukes on Hiroshima-sized populations per year to equal malaria alone, and about 11 more to equal tuberculosis.
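For what it's worth, the comparison Mokele is making is simple division, and it's easy to sanity-check with a couple of lines of Python (using the rough figures quoted in the post, which are themselves debated by historians):

```python
# Sanity check of the bombs-vs-disease comparison, using the rough
# figures quoted above (all approximate and contested).
hiroshima = 140_000            # deaths in the Hiroshima bombing
malaria_per_year = 1_000_000   # approximate annual malaria deaths
tb_per_year = 1_577_000        # approximate annual tuberculosis deaths

bombs_for_malaria = malaria_per_year / hiroshima
bombs_for_tb = tb_per_year / hiroshima

print(round(bombs_for_malaria, 1))  # 7.1 Hiroshima-scale bombings per year
print(round(bombs_for_tb, 1))       # 11.3 per year for tuberculosis
```

So the order of magnitude of the argument holds regardless of the exact figures chosen: roughly seven Hiroshima-scale bombings per year to match malaria, and about eleven for tuberculosis.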
mooeypoo Posted June 25, 2009 Yes, but this still does not answer the question: why do we NEED to navigate a plane through a 10-g turn? The answer is, we don't. So we can go to space, for one. The only things we really need are food, oxygen, and water; everything else is simply circumstantial. Humans aren't eating-breathing-drinking(-youforgotonemorethinghere) machines; they are curious beings aspiring to figure out how things work and why, as well as to better themselves. We want to do more than eat, breathe and drink (and whatever else). We want to be more. More than, say, machines. That's why we need those things. If you disagree, you're more than welcome to go live in a secluded cave in the middle of a deserted island, hunt your own food with your bare hands and drink the stream water. Will you be satisfied with that, though? Man has created for himself problems that are beyond his own limitations, and rather than accepting those limitations, attempts to overcome them with machines. That's ridiculous. Man created medicine so we can cure disease and be healthier. Man created clothing so we can be warm. Man created mass food production so hunger is (theoretically, hopefully) reduced. Man created ethics so we cooperate in a society. Man has created a lot, and only *some* of that lot caused problems. That doesn't mean that man created problems. Be serious. ~moo
bascule Posted July 4, 2009 People have reverted back after using technology many times throughout history without experiencing anything like 95% of the population dying or anything even remotely close to that morbid conclusion. Population has also been increasing exponentially thanks to technology. We could not abandon our modern way of life without billions of people dying. There wouldn't be enough food or water, or ways of transporting it. In fact, the Meiji Restoration of Japan, one of the happiest and healthiest times, was the result of the reversion from the gun back to "primitive" weapons such as the yari and the sword. Technology has ended more lives than it has saved; it is an undeniable truth. What? The Meiji period brought with it the industrialization of Japan. You're talking about the time when Japan started getting its first railroads, something that has certainly remained an essential part of their lifestyle ever since.