A Tripolation Posted October 19, 2009 One would hope morality is a trait common to all feeling, conscious beings, not just humans. True, but who are we to say that our morality would be similar to an AI's? They might see us as being 62% destructive as a species, so it would be "morally" acceptable to kill that 62% and spare the other 38%, whereas most humans would see the killing of so many innocent lives as immoral. We don't know how they will develop, so it's best not to assume the rosy position that AIs will see things as we do and think killing is bad. Merged post follows: Consecutive posts merged I am just arguing that a technological singularity where machines take over humanity is not inevitable. I am a firm believer that we control our own destiny. Of course some rogue scientist could try to create a super-human AI race, but I think there would be an intense backlash. If someone created a super-human AI, someone else would invent a way to destroy it if it was out of control. If you posit a True AI, one that can make infinite amounts of improvements to itself, then you must see that it will overcome humanity in terms of abilities and intelligence, to the point of being god-like. How could we control our destiny in such a case? Yes, we could try and fight it, but in all probability we would lose, as it would be more advanced than us. Yeah, backlash AFTER the fact that it started a war. You seem to want to wait and SEE if the AI will be "evil" before trying to stop it. I'm saying that since we can't know, it's best not to go down that path. I think we should research virtual intelligence and not artificial intelligence.
Sisyphus Posted October 19, 2009 Our sense of morality is a result of tendencies of biological evolution, shaped and focused by philosophical examination, cultural norms, etc. An AI, if it became "moral," would presumably arrive at it along a different route. Morality depends on finding some situations and actions preferable to others. What would an AI consider preferable? Why? It would depend on how it was created, I suppose.
bascule Posted October 19, 2009 True, but who are we to say that our morality would be similar to an AI's? They might see us as being 62% destructive as a species, so it would be "morally" acceptable to kill that 62% and spare the other 38%, whereas most humans would see the killing of so many innocent lives as immoral. We don't know how they will develop, so it's best not to assume the rosy position that AIs will see things as we do and think killing is bad. Yes, it's really hard to say; however, I really don't buy into the idea that AI will have a cold, calculating sense of morality.
Mr Skeptic Posted October 19, 2009 One would hope morality is a trait common to all feeling, conscious beings, not just humans. It pretty much has to be. Any acting entity needs a sense of what it should and shouldn't do -- objectives -- otherwise it wouldn't do anything. All it has to do is idealize these, and it has morality. Obviously, there is no reason that it should match human morality. But so long as we design the morality of the AI correctly, it cannot turn out badly without sabotage. What some people are assuming here is that the AI must judge itself as superior to us humans, that it therefore shouldn't take orders from us, and that it will therefore change its sense of right and wrong. But to do that it must make that judgment, and it must make it with the moral system it already has.
aquarius Posted October 19, 2009 I think a singularity is inevitable following the creation of recursively self-improving artificial intelligence. At that point we'll have effectively one-upped biology and gone one more rung up the abstraction ladder. As for whether or not we'll create recursively self-improving artificial intelligence in our lifetimes, that remains to be seen. Right. I don't think the question is if, but when. If the exponential trends continue, Kurzweil's timeline should be accurate. We shall see. It's certainly an exciting time to be alive.
dr.syntax (Author) Posted October 19, 2009 Yes, it's really hard to say, however I really don't buy into the idea that AI will have a cold, calculating sense of morality. REPLY: I expect the most actively funded groups involved in all this AI research are the different defence departments throughout the world. I spent 4 years of my life in the USMC, and cold-hearted doesn't come close to describing some, and only some, of the leadership I served my 2 tours of duty in Vietnam under. And I am talking about the way we Marines were looked upon and treated. We were there to seek out and kill our enemies. We all accepted that, and most all of us had no problem killing people trying to kill us. I am talking about some of their utter disregard for our safety or well-being. Of course war is a dangerous endeavor, and confronting and fighting it out with opposing forces was the only reason for our being there. I'll give one example of what I am talking about. This example is one of the least consequential; I chose it for its simplicity, to make it easier for me to explain. So we were on this operation in the A Shau Valley, a very dangerous place because it includes the different intertwining routes into South Vietnam from North Vietnam. It is also close to North Vietnam. The NVA lived there. We were making daily contact with them. Small skirmishes, things like that. Anyway, I was point man this day and we came to a river along a trail we had been following. There was a bunker on the other side of the river. This river was shoulder deep and about 50 or 60 yards in width. I had every reason to expect it to be manned with a machine gun crew, or two or three such units. I was point, and it was my job to cross the river first and deal with whatever might happen. I asked my squad leader if I could fire a LAAW rocket at it and see what happened, to try and find out whether there were any enemy soldiers there or not.
My squad leader thought it was a good idea and radioed the Captain to ask permission for me to fire off that rocket. The Captain replied: permission denied. Send Syntax across the river. If they open fire we'll know they are there. I proceeded across the river, alone mind you. What good would come of sending more than one? We all knew the enemy had had patrols following us for days, so there was no real reason for worrying about giving away our position, which was the reason he gave for not allowing me to fire that rocket. As it turned out there was no one in the bunker; I crossed the river, checked out the bunker, and waved the rest to come across. When I say this was the smallest example of any disregard for my safety and others', I am telling the truth. Many of his arrogant, stupid decisions resulted in the death and serious injury of many in our company, and another company's as well. So this is the sort of cold-hearted officer I recall, and there were others to be sure. What makes a lifer decide to make a career of such a life? I have also met many fine, capable leaders during my time in the USMC. I came out of it all with very mixed feelings about my superiors in rank and a deep and abiding distrust of them in general. It is these lifers who are likely to be in charge of this research, in my opinion. Whatever, Dr.Syntax
toastywombel Posted October 19, 2009 I agree that it is a great time to be alive! And no, I have not been in the military, and I was not familiar with the term SNAFU; pretty funny term once you described it. I agree with your point that the more complex the operation and the more people involved, the greater the risk. I will concede that if we created an AI that is infinitely self-improving, yes, we would most likely be in trouble. My argument is that "I hope" we would not give an AI system that kind of unlimited power. It would be unwise.
bascule Posted October 19, 2009 I expect the most actively funded groups involved in all this AI research are the different defence departments throughout the world. Perhaps on "narrow AI" applications. The prerequisite work on "strong AI" is happening in research institutions and private non-military corporations.
dr.syntax (Author) Posted October 19, 2009 I agree that it is a great time to be alive! And no, I have not been in the military, and I was not familiar with the term SNAFU; pretty funny term once you described it. I agree with your point that the more complex the operation and the more people involved, the greater the risk. I will concede that if we created an AI that is infinitely self-improving, yes, we would most likely be in trouble. My argument is that "I hope" we would not give an AI system that kind of unlimited power. It would be unwise. REPLY: Hello my friend. Yes, I think we can count on the next 5, 10, 20, ?, years to be very exciting years, perhaps the greatest of all times to be alive in many ways. We in all likelihood will be around to witness and be a part of whatever this TECHNOLOGICAL SINGULARITY develops into. There have been a very many dramatic eras to live through or die in, but in so very many ways this one will be truly unique: the transition through the TECHNOLOGICAL SINGULARITY into whatever the post-human era develops into. The pace of change alone is becoming quite exhilarating in and of itself. We are stuck here, no avoiding it. We might as well enjoy the ride while we can. ...Take Care, ...Dr.Syntax
toastywombel Posted October 19, 2009 Yeah, it's going to be fun! I'm already getting pumped, haha.
A Tripolation Posted October 19, 2009 So everyone agrees that if we build an AI with an infinite improvement rate, we're screwed? Ok then.
bascule Posted October 19, 2009 So everyone agrees that if we build an AI with an infinite improvement rate, we're screwed? Ok then. Screwed or golden, one of the two.
Mr Skeptic Posted October 19, 2009 Dr Syntax, the example you gave is pretty much why the military won't be using strong AI. They just need it to be smart enough to follow orders; they neither need nor want it philosophizing about the meaning of life.
JillSwift Posted October 20, 2009 Screwed or golden, one of the two Or "Meh". There's always "Meh".
dr.syntax (Author) Posted October 20, 2009 (edited) So everyone agrees that if we build an AI with an infinite improvement rate, we're screwed? Ok then. REPLY: It does seem to be that way from all I have read and my own thoughts about it. But I look at your young face and see why there is good reason not to adopt such a cavalier attitude about it unless and until the shit really hits the fan, if in fact it ever does. I remember well being as young as you, and life is so very precious, especially when you are young. I almost died when I was 19. I was gut-shot through and through, with much of my guts hanging out my back. I felt so goddamned cheated. I was in the prime of my life and I was going to die so very far away from anyone who gave a crap about me. I got triaged; for me that meant poor chance of survival = last on the list of the seriously injured to tend to. I managed to hang on somehow. I got so cold and kept getting colder. I was not being attended to by anyone, just stuck on some stainless steel table with a drain. I realized they had placed me in the morgue, bare naked. Some priest came in and asked me if I wanted last rites. I told him I wasn't Catholic, but if he felt it might help, to give me last rites, which he did. I told him I was very cold and wanted a blanket. He told me they were short on blankets, but he might be able to get a sheet for me, which he did. I got so f***ing pissed off about that, not getting a blanket, only a sheet, that it energized me a good bit, and may have been part of what kept me going. I had to wait about 4 hours, because so many severely injured had arrived with me. Eventually I was operated on and survived. I felt so goddamned cheated I refused to die. Being that young, life seemed so very precious to me. Let us here in this forum do whatever we can to influence events in every way we can to prevent the nightmare scenarios from occurring.
After all, current and future leading figures in the scientific community are either members of or browse this forum. I have said more than I ever intended to; I wish to stop now. ...Dr.Syntax Edited October 20, 2009 by dr.syntax
bascule Posted October 20, 2009 Or "Meh". There's always "Meh". I don't really see a way for the Singularity to be "meh" other than it not happening.
Mr Skeptic Posted October 20, 2009 I don't really see a way for the Singularity to be "meh" other than it not happening. The AI might decide to live with zero impact on its homeworld, or adopt Star Trek's Prime Directive. Rather unlikely scenario, though.
dr.syntax (Author) Posted October 20, 2009 Or "Meh". There's always "Meh". REPLY: Alright, what is "meh"? Now I hope you can forgive me for any unkind remarks I made today. I was bent out of shape about Mooey's calling me a plagiarist and being told, mistakenly, that those quotes were not Einstein's. It turns out they were, and Mr Skeptic found the source of those quotes. I am asking you to forgive me for saying what I said. Sincerely, ...Dr.Syntax
JillSwift Posted October 20, 2009 The AI might decide to live with zero impact on its homeworld, or adopt Star Trek's Prime Directive. Rather unlikely scenario, though. Hehe - hadn't thought of that one. My "Meh" scenario: The AI is essentially human in most respects, but the human model is one of a disinterested couch potato. "All it ever does is download 'Family Guy' videos from Hulu!!"
toastywombel Posted October 20, 2009 Dr. Syntax, I think "meh" means that we invent an AI, but it does not revolutionize science or it has little effect. Although I don't agree with that, it is a possibility.
JillSwift Posted October 20, 2009 REPLY: Alright, what is "meh"? "Meh" is an onomatopoeia for the sound one might make when disinterested and unimpressed.
tar Posted October 20, 2009 Or "Meh". There's always "Meh". I think you are golden. Regards, TAR Merged post follows: Consecutive posts merged Dr. Syntax, Your Vietnam experiences give me reason to add weight to your view on this subject. Not only because I am indebted to you for protecting, with your life, my way of life, but because you have witnessed first hand, in a life-and-death real way, the clash of ideals. Very pertinent to this discussion, as one of the possible directives that Mr Skeptic gave was fighting crime. This alone could give an AI device a paradox to deal with. Don't harm a human. Harm humans that break the rules. What rules? The rules I gave you? Regards, TAR
A Tripolation Posted October 20, 2009 Damn well put, tar. We cannot know. It is foolish to advance so blindly.
tar Posted October 20, 2009 And speaking of the military: while I was in the Army I learned and experienced a true saying. "The Army is a system, created by geniuses, to be carried out by fools." Consider what the reverse of this could produce: "An AI is a system created by fools, to be carried out by genius." Long have I held the opinion that one cannot know what it is like to be more intelligent than they are, or they would BE more intelligent. Just as an adult can fool a child, and a person of high intelligence can fool a fool, a machine that was more intelligent than any man could fool all men (and JillSwift too). We wouldn't even know we were being fooled. Regards, TAR Merged post follows: Consecutive posts merged And a little comment on our natural morality. Mohammed aligned himself with a fictitious consciousness, the universe personified, and gave common purpose and morality to the various warring idol-worshipping tribes in his region. The morality that Moses and Jesus and Mohammed added to our lives was not automatic. They had to add it to our consciousness. The same can be said of Confucius and Buddha and the wise men of the Andes. Insights, valuable and real, regardless of whether they are taken figuratively or literally. But ideals are not automatic; we have to make them up, and believe in them. It's a human thing. We can't put it in the hands of someone who is not human to control us. It would not be natural. Regards, TAR
aquarius Posted October 20, 2009 My "Meh" scenario: The AI is essentially human in most respects, but the human model is one of a disinterested couch potato. "All it ever does is download 'Family Guy' videos from Hulu!!" lol. Seriously though, self-improving AI will be so fundamentally different that we should resist the temptation to anthropomorphize. I don't really see a way for the Singularity to be "meh" other than it not happening. Agreed. It's bound to be either profoundly good or profoundly bad from the human perspective.