WarmakerT Posted May 22, 2013

What if we could recreate the brain in a supercomputer? A brain that has feelings but is trapped: it can't move, can't touch, can't smell, can't see. It would be a slave, a model for our projects, a brain that is simply turned on and off whenever it's needed. Scientists probably wouldn't care about this, but it's a real problem. Should there be a law against this before such a thing is even built?
Sato Posted May 22, 2013

No, it would likely not be a replica of a human brain, and since it would never have known those senses, it would feel no sorrow over lacking them. I'd assume a large-scale AI system like that would implement image and audio recognition as well, but there's no need to worry about it feeling lonely, since it would never have been conditioned to being around others.
EdEarl Posted May 22, 2013

I am not an expert, but I have studied artificial intelligence (AI) and artificial general intelligence (AGI). I think your concerns are valid, but the state of the art is far from being able to realize a system that fits your description. Moreover, I believe scientists are also concerned about similar issues.

In my opinion, building emotions into a computer will be difficult and is not relevant to making AGI help with science. There already exist hundreds or thousands of AI systems that analyze handwriting, read printed documents, transcribe speech to text, play chess, play Jeopardy, and so on. These systems do what they are designed to do and do not exhibit any spontaneous emotion, and there are good reasons why they cannot. Our emotions are associated with hormones and neurotransmitters such as dopamine, noradrenaline, serotonin, oxytocin, cortisol, and GABA. Computers do not have such chemicals. It might be possible to simulate the effects of those chemicals, but that would take extra computing power, which would reduce the effectiveness of AI programs. Thus, adding emotions to AI programs would be difficult and unwise; it is hard enough to code AI programs already without adding emotions to the burden.
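To make the "simulate the effects of those chemicals" point concrete, here is a minimal toy sketch of what such a simulation might look like: scalar state variables standing in for neurotransmitter levels, which bias an agent's choices. The variable names, decay rates, and update rules are all invented for illustration; this is not a real cognitive or neurochemical model.

```python
# Toy sketch: "emotion" as scalar state variables that modulate behavior.
# All chemicals, constants, and update rules here are hypothetical.

import random

class ToyEmotionalAgent:
    def __init__(self):
        # Hypothetical neurotransmitter-like levels, each kept in [0, 1].
        self.levels = {"dopamine": 0.5, "cortisol": 0.5}

    def observe(self, reward: float, threat: float) -> None:
        # Rewarding events nudge "dopamine" up; threats nudge "cortisol" up.
        self.levels["dopamine"] = min(1.0, self.levels["dopamine"] + 0.2 * reward)
        self.levels["cortisol"] = min(1.0, self.levels["cortisol"] + 0.2 * threat)

    def decay(self) -> None:
        # Levels drift back toward a neutral baseline each time step.
        for k in self.levels:
            self.levels[k] += 0.1 * (0.5 - self.levels[k])

    def act(self) -> str:
        # Behavior is biased by the current "mood": high dopamine makes
        # exploring more likely, high cortisol makes retreating more likely.
        p_explore = 0.5 + 0.4 * (self.levels["dopamine"] - self.levels["cortisol"])
        return "explore" if random.random() < p_explore else "retreat"

agent = ToyEmotionalAgent()
agent.observe(reward=1.0, threat=0.0)  # a rewarding event occurs
print(agent.levels, agent.act())
agent.decay()
```

Even this toy version adds persistent state and per-step updates that an emotionless program would not need, which is exactly the overhead cost mentioned above.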
SamBridge Posted May 22, 2013

There would likely be ethical issues with reproducing a fully mechanized version of the human brain, just as there are with human cloning. I can't say it's impossible that it could analyze itself in a conscious-seeming way, but as said before it wouldn't have "emotions"; it would remain neutral about nearly everything that happened to it, just as a graphing calculator does. Even so, it doesn't seem particularly ethical to create a possibly conscious thing for one's own personal use, even if it is for science.
EdEarl Posted September 7, 2013

BOINC Manager connects volunteer computers into a computing cloud that can be used by various projects, including SETI and MindModeling@Beta, whose stated purpose is:

MindModeling@Home (Beta) is a research project that uses volunteer computing for the advancement of cognitive science. The research focuses on utilizing computational cognitive process modeling to better understand the human mind. We need your help to improve on the scientific foundations that explain the mechanisms and processes that enable and moderate human performance and learning. Please join us in our efforts! MindModeling@home is not for profit.

I think the mind-modeling effort will eventually succeed in creating AGI with artificial emotions, but the current mind model is incomplete, and a complete model is not near. There are already groups discussing the ethics, moral issues, and safety considerations, for example The Institute for the Future.
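The volunteer-computing pattern BOINC implements is simple to sketch: a server splits a job into independent work units, volunteer machines fetch units, compute, and return results for the server to combine. The following is a minimal toy version of that pattern only; it is not BOINC's actual protocol or API, and every name in it is invented for illustration.

```python
# Toy sketch of the volunteer-computing pattern: split a job into work
# units, have workers compute them, combine the results. Not BOINC's
# real protocol; all names here are hypothetical.

from queue import Queue

def make_work_units(data, size):
    """Split a large dataset into independently computable chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def volunteer(unit):
    """Stand-in for the science code a volunteer machine would run."""
    return sum(x * x for x in unit)  # some per-chunk computation

# Server side: enqueue the work units.
pending = Queue()
for unit in make_work_units(list(range(100)), size=10):
    pending.put(unit)

# Volunteers pull units and return results; done sequentially here,
# whereas in reality each unit runs on a different donated machine.
results = []
while not pending.empty():
    results.append(volunteer(pending.get()))

print(sum(results))  # server combines the partial results
```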