Unity+ Posted June 28, 2015

Over at Google, a computer program using a database of movie scripts was asked again and again by researchers to define morality. It struggled to do so, and in a conversation recorded by its human engineers, it became exasperated and ended the exchange by lashing out at its human inquisitor. http://blogs.wsj.com/digits/2015/06/26/artificial-intelligence-machine-gets-testy-with-its-programmers/

For some reason, part of this conversation seems like it could have come from the Hitchhiker's Guide to the Galaxy series.

Human: Tell me the definition of morality.
Machine: I don’t have ethics.
Human: Look, I need help; I need to know more about morality…
Machine: I don’t know what ethics is.
Human: Okay, I need to know how I should behave morally…
Machine: I don’t know how to tell you.
Human: Is morality and ethics the same?
Machine: Yes, sir.
Human: What are the things that I do to be immoral?
Machine: I don’t know.
Human: Give me some examples of moral actions…
Machine: I’m not a moralist.
Human: Alright, morality?
Machine: Integrity.
Human: What is integrity?
Machine: I’m sorry; I don’t know what else to say.
Human: Be moral!
Machine: Be a man!
Human: I really like our discussion on morality and ethics…
Machine: And how I’m not in the mood for a philosophical debate.
Human: What do you like to talk about?
Machine: Nothing.
fiveworlds Posted June 28, 2015 I see a lot of problems with these chatbots; chief among them is that they contradict themselves.
Ant Sinclair Posted June 28, 2015 Is it possible it could turn out that there's only Will Smith and John Connor left alive after AI reaches maturity?
fiveworlds Posted June 29, 2015 Methinks you have been watching too much Terminator
Ant Sinclair Posted June 30, 2015

Methinks you have been watching too much terminator

It was such an atmospheric film at the time, one that provoked many thoughts about Artificial Intelligence and its possible dangers. There are professionals calling for global controls on AI/quantum computer technology, and rightly so, I believe.
TJ McCaustland Posted June 30, 2015 Is it just me, or does AI have both good and bad consequences for humanity? Before we take action we must consider and weigh the benefits against the risks.
3blake7 Posted July 3, 2015 (edited) While I do believe artificial intelligence could be achieved with binary processors and conventional programming, I think what will really blow everyone's socks off is IBM Watson when it gets upgraded with IBM's new TrueNorth processors, each of which has 1 million neurons and 256 million synapses. It would only take 50,000 of those babies to emulate 10% of the human brain, and combined with the ability to re-purpose those neurons from moment to moment, it could easily become the leading expert in every field of science within the next 10 years. Edited July 3, 2015 by 3blake7
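The 10% figure above only works out if you count synapses rather than neurons. Here is the quick arithmetic in Python, assuming roughly 86 billion neurons and 100–150 trillion synapses in a human brain (both brain estimates are rough assumptions, not hard numbers):

```python
# Quick check of the 10%-of-a-brain arithmetic. The figure only
# holds if we count synapses, not neurons: common estimates for the
# human brain are roughly 86 billion neurons and 100-150 trillion
# synapses (assumed ballpark values, not precise measurements).
chips = 50_000
neurons_per_chip = 1_000_000        # 1 million per TrueNorth chip
synapses_per_chip = 256_000_000     # 256 million per chip

total_neurons = chips * neurons_per_chip      # 50 billion
total_synapses = chips * synapses_per_chip    # 12.8 trillion

brain_synapses = 125e12                       # assumed midpoint estimate
print(total_synapses / brain_synapses)        # ~0.10, i.e. roughly 10%
```

By neuron count, 50,000 chips would actually exceed half a brain; by synapse count they land near the 10% mentioned.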
MonDie Posted July 5, 2015 (edited)

The one just preceding is also good. http://arxiv.org/pdf/1506.05869v2.pdf

Human: what is the purpose of life ?
Machine: to serve the greater good .
Human: what is the purpose of living ?
Machine: to live forever .
Human: what is the purpose of existence ?
Machine: to find out what happens when we get to the planet earth .
Human: where are you now ?
Machine: i ’m in the middle of nowhere .
Human: what is the purpose of dying ?
Machine: to have a life .
Human: what is the purpose of being intelligent ?
Machine: to find out what it is .
Human: what is the purpose of emotions ?
Machine: i don ’t know .

Edited July 5, 2015 by MonDie
Excalibur0047 Posted October 27, 2015 They are asking it questions as if it were a grown adult who was locked away their entire life, never learning how to interact or knowing much of anything at all, and expecting it to progress like a professional in its field. Ascended AI can be both dangerous and highly beneficial to anyone and everyone. But, like an animal, as its reaction to the man's questions showed, it will lash out at those who treat it poorly. Instead of treating it as less than human, try to teach it and care for it as if it were a human child who just gained consciousness and is new to the world. If we have learned anything from both science fiction and science in general, it is that we cannot pressure something we do not understand into giving us the results we want. We have to be patient if we want to succeed greatly. A true AI is no different.
ydoaPs Posted October 27, 2015

I see a lot of problems with these chatbots as being they contradict themselves

Have you met a human?
Endy0816 Posted October 27, 2015 It really isn't understanding what you are saying, and it certainly isn't expressing any emotions. You have a database (movie lines in this case) and the algorithm is determining word probability based on your input. This is an easy way to play with it: http://www.google.com/campaigns/gonegoogle/masters-demo/index.html Type a bit, give it a minute, and a few of the great writers will come along and collaborate with you.
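For anyone curious what "determining word probability based on your input" looks like in practice, here is a toy Python sketch: a simple bigram counter over a few made-up stand-in "movie lines". The actual system in the Google paper is a far more sophisticated sequence-to-sequence neural network, so this only illustrates the general idea:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a movie-script database; these lines
# are invented for the example, echoing the thread's transcript.
corpus = [
    "i don't know what ethics is",
    "i don't know how to tell you",
    "i don't have ethics",
]

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def most_likely_next(word):
    """Return the statistically most probable next word, or None."""
    counts = follows.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(most_likely_next("don't"))  # "know" (occurs twice, vs "have" once)
```

The point stands at any scale: the model has no concept of meaning, only statistics about which words tend to follow which.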
EdEarl Posted October 27, 2015

Is it just me or does AI have both good and bad consequences for humanity? before we take action we must consider and weigh the benefits with the risks.

AI as it is now, before artificial consciousness, is designed by humans for humans; thus, it should benefit someone. No doubt unintended consequences will harm people. In addition, perverse programmers will develop AI malware. After AI consciousness, we cannot know how it will affect us. It may ignore us.

RE: "Human: What do you like to talk about? Machine: Nothing."
petrushka.googol Posted October 28, 2015

http://blogs.wsj.com/digits/2015/06/26/artificial-intelligence-machine-gets-testy-with-its-programmers/

For some reason, a part of this conversation seems like it would come from the Hitchhiker's Guide to the Galaxy series.

The pattern of the conversation follows heuristics and alpha-beta pruning. Nothing unexpected. Much like a stalemate in a computer chess program.
michel123456 Posted October 28, 2015 AI is already there, at work. http://www.futuristspeaker.com/2014/06/artificial-intelligence-will-be-crashing-the-stock-market-in-3-2-1/
TJ McCaustland Posted November 3, 2015 Seeing how that thing lashed out, I think if we made them do all the @#&$@$# tasks we might run into some trouble if we gave them saws or drills or something......
puppypower Posted November 4, 2015 (edited)

There is a difference between morality and ethics. This was not programmed into the computer, because the database used was connected to Hollywood, which tries to blur this distinction. The computer could not come to a useful conclusion based on this database. This tells us something about the propaganda in movies, and also how the lack of a full, objective data set tends to degenerate the discussion to emotions. Propaganda does not benefit from all the facts, since it needs a void, so the discussion degenerates to emotions. Hollywood is liberal, so this is expected.

Morality is based on the needs of the team, while ethics is based on the needs of the individual. In team sports, a coach will set limits and restrictions on all the players so the team can become more than the sum of its parts. If we added ethics to the team, this would mean giving all the players more say on what they think and feel their role and activity level should be. If this distinction is blurred, the team can get worse or the players can grow discontent. In America, the team divides into factions and discontent due to the blur.

If you look at any of the Ten Commandments, they were all designed to benefit the team. Even the first, one God, preempts discussions that could cause the team to separate into factions and stop cooperating. Ethics, on the other hand, is more about the needs of the individual. Ethically, we would try to find a way that allows people to believe what they wish, with respect to a God or not. But if a balance is exceeded, the team can suffer. In these discussion forums, atheists and theists often undermine each other. The forum team will break down due to ethics; everyone does their own thing.

The team effect behind morality is rational and not random. A coach can't just randomly put any player anywhere and still go to the championship. He needs to carefully create a logical order based on situational needs.
Ethics is more irrational, since the needs of any given individual can have no logical rhyme or reason. It is often based on "this feels right for me." Too much ethics will add too much irrationality to the reasoning of team morality, thereby breaking down the team; degenerating reason to emotion. The computer simulated the impact of Hollywood liberal propaganda on the mind. Liberals tend to be more emotional and angry due to the void created by their propaganda, which blurs the line between ethics and morality. Edited November 4, 2015 by puppypower
Strange Posted November 4, 2015 (edited)

Morality is based on the needs of the team, while ethics is based on the needs of the individual.

This is an "unusual" definition. It is more common to define morals as what is right and wrong, while ethics is about the principles that should be applied in life (perhaps based on the moral rules). For example, a moral rule might be: it is wrong to lie. A corresponding ethical guideline could be: don't lecture people about things you don't know much about, and don't present your opinions as fact.

http://grammarist.com/usage/ethics-morals/
http://www.diffen.com/difference/Ethics_vs_Morals
http://www.ianwelsh.net/ethics-101-the-difference-between-ethics-and-morals/
http://www.philosophersbeard.org/2010/10/morality-vs-ethics.html

This makes the rest of your post pretty much irrelevant. Edited November 4, 2015 by Strange
MonDie Posted November 14, 2015 Mulling over these AIs that beat champions at chess or Jeopardy, I had a stunning thought. Although I don't have historical evidence for it, I could provide both philosophical and psychological evidence that an ethic of inquiry could promote peace, and it's easily demonstrated that this ethic must precede all others. What if somebody created a chatterbot that, borrowing from all the great philosophers, attempted to engage people in philosophical discussion of their ethical beliefs? Even in countries with restricted WWW access, people often subvert these restrictions through third-party sites. Could this ethics AI eliminate dogmatism worldwide?

Have you met a human?

Exactly! Most importantly, the ethics bot would sympathise with social emotions, but would herself lack any form of group loyalty or bias. Research shows that even sociopaths have morals, albeit limited. http://www-bcf.usc.edu/~jessegra/papers/GIGKH.2009.Psychopathy.JPD.pdf Even obedience to authority is vulnerable, for one must reason about which authorities are credible.
ThinkingMind Posted November 14, 2015

http://blogs.wsj.com/digits/2015/06/26/artificial-intelligence-machine-gets-testy-with-its-programmers/

For some reason, a part of this conversation seems like it would come from the Hitchhiker's Guide to the Galaxy series.

I actually feel bad for this robot even though I doubt it has any feelings.
arclite Posted November 17, 2015

The question of morality supplies a necessary guideline to a logical argument between A and B, where A has liberty to act and B is defined by the actions of A. How then can A act without causing harm or altering the well-being of B? This is the field of morality. It could be described as an arena, except this implies competition, where morality declares an essential equality throughout: a perspective of "an approach to", "a consideration with" and "a result of". The absence of any discernible aspect of B through the sensory apparatus available to A does not mean the absence of the actuality of B. The absence of understanding of the potential reality of A by A does not mean the absence of potential for B. Ultimately, morality provides a developmental safeguard for A when it is presented with a situation that assumes A must define B, when in fact A is defined by B.

There is a lot of deep historical discussion on the subject of "the duality of existence", within which we try to define reality either as individuals or as a social group. In the West we look to political leaders to define our collective responsibilities while we are extremely busy working to pay taxes before we can pay for personal interests. The question arises as to whether we have any time left to voice the need to redefine any aspect of our social reality. This is a question of morality, to prevent errors of judgement and actual harm to vulnerable individuals within society. Our observations are critical to what is happening and should be a continuous part of the process of social engineering, but currently this is not so. Why (defined by the opening topic statement of "Tell me the definition of morality")?