JohnF Posted April 24, 2007

Robot rights seem to have appeared in the news lately...

Robotic age poses ethical dilemma
http://news.bbc.co.uk/1/hi/technology/6425927.stm

Robots could demand legal rights
http://news.bbc.co.uk/1/hi/technology/6200005.stm

Robot future poses hard questions
http://news.bbc.co.uk/1/hi/technology/6583893.stm

Do you think robots should one day be given rights? And is there a need for something like Asimov's Three Laws of Robotics? For those that don't know them, they are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Giving rights to robots is clearly going to be an emotive subject for the world. Businesses may want such rights to be very limited so they can make use of robots in much the same way as they make use of non-autonomous machinery. Individuals may become attached to their household robots and consequently anthropomorphise them, leading to a belief that some robots are being treated as slave labour. Asimov tackled this situation in his story "The Bicentennial Man", and the film didn't do too bad a job of it either. To be granted human rights, Andrew, the robot in the story, had to arrange to die.

I think we are probably a very long way from having to consider human rights for a robot, but rights of some sort may be on the agenda sooner than we think.

As for the "Three Laws": this is a minefield. Not because the laws are not good, but because of the difficulty we will have just defining things like human, harm, existence, etc. Do we include psychological harm? Does a robot's existence mean its current memory, or its physical structure too? When does a human start and end? Are dead humans still human, and if not, when does death occur?
Spyman Posted April 25, 2007

"Do you think robots should one day be given rights?"

If they become conscious, are they still "robots" or nonhuman persons in artificial bodies? Would another intelligent lifeform, like humans, be given rights if found on Earth? How about advanced aliens visiting Earth: should they have rights, and should they give us rights? If we were able to create life, artificially engineered "humans", would they get rights?

IMHO the degree of Consciousness and Intelligence should mirror the degree of rights and responsibility.
JohnF Posted April 25, 2007 (Author)

"IMHO the degree of Consciousness and Intelligence should mirror the degree of rights and responsibility."

I would only consider rights if the robot could experience discomfort, either physiological or psychological. I see rights as dealing with this particular issue in the first place. If someone is daft enough to build a mining robot that's scared of the dark, then it should be given the right to operate in an illuminated environment.

Without feelings or the ability to experience pain, it wouldn't matter how a robot was treated. Even if it was self-aware, enforcing an improvement in its working conditions would have no benefit; the robot's experience wouldn't change.
Spyman Posted April 25, 2007

I think the ability to have feelings is included in Consciousness. Without Consciousness a robot would be like any other piece of machinery (and my dishwasher doesn't need or deserve any rights).

Self-aware would at least mean being able to reflect on the relationship with its surroundings. If the mining robot's reflections tell it that the darkness means greater danger, and it has a Will to survive, I would call it scared. If it only calculates the risks without second thoughts, without a personal Will, it would only be a machine, self-aware or not.
JohnF Posted April 25, 2007 (Author)

"I think the ability to have feelings is included in Consciousness."

What about intelligence? Do you think consciousness and intelligence can exist without each other?
insane_alien Posted April 25, 2007

yes. we already have 'weak' AI: programs that are aware of their surroundings and apply a simple set of rules that are influenced by variables in the environment. i wouldn't call the machines conscious but they do have a limited intelligence.
JohnF Posted April 25, 2007 (Author)

"yes. we already have 'weak' AI: programs that are aware of their surroundings and apply a simple set of rules that are influenced by variables in the environment. i wouldn't call the machines conscious but they do have a limited intelligence."

So that's saying that perhaps we can have intelligence without consciousness, but could we have consciousness without intelligence? Do animals have consciousness? Do they have intelligence?

I have been considering the idea that intelligence and consciousness, as experienced by humans, require the ability to conceptualise time and place events in another time frame. I have been unable to think of an example that shows animals to be anything more than event driven, in a similar way to current efforts in AI. Like AI, animals have a number of programmed responses to events and can adapt those programs to produce better results. But unlike humans, there seems to be no evidence of an ability to forecast the outcome of events where there is no previous experience.
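To make the "simple set of rules influenced by variables in the environment" idea concrete, here is a minimal sketch of such an event-driven agent in Python. The sensor names, thresholds and actions are invented purely for illustration (they are not from any real robotics library), and the scenario echoes the mining-robot example from post #3.

```python
# A minimal rule-based ("weak AI") agent: it reacts to environment variables
# with fixed rules, but has no model of itself and no notion of time or
# consequences beyond the current reading. All names and thresholds are invented.

def mining_robot_step(env: dict) -> str:
    """Pick an action from the current sensor readings alone."""
    if env.get("light_level", 1.0) < 0.2:
        return "switch_on_headlamp"      # reacts to darkness; it is not 'afraid' of it
    if env.get("obstacle_distance_m", 10.0) < 0.5:
        return "stop_and_reverse"
    if env.get("battery_fraction", 1.0) < 0.1:
        return "return_to_charger"
    return "continue_digging"

# The same inputs always produce the same output - event driven, nothing more.
print(mining_robot_step({"light_level": 0.05, "obstacle_distance_m": 3.0}))
# -> switch_on_headlamp
```

The point of the sketch is that such a program is "aware" of its surroundings only in the sense that its next action is a function of sensor variables; there is nothing in it that could be better or worse off, which is the distinction the posts above are circling around.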
insane_alien Posted April 25, 2007 Share Posted April 25, 2007 i'd say animals are both intelligent and conscious. perhaps consciousness is a product of intelligence? i don't know. probably never will. Link to comment Share on other sites More sharing options...
Spyman Posted April 26, 2007

"Do you think consciousness and intelligence can exist without each other?"

Well, both Consciousness and Intelligence are hard to define precisely, and both can be of different degrees, at least partly independent of each other.

"Consciousness is a quality of the mind generally regarded to comprise qualities such as subjectivity, self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment."
http://en.wikipedia.org/wiki/Consciousness

"Intelligence is a property of mind that encompasses many related mental abilities, such as the capacities to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn. Although intelligence is sometimes viewed quite broadly, psychologists typically regard the trait as distinct from creativity, personality, character, knowledge, or wisdom."
http://en.wikipedia.org/wiki/Intelligence

Can humans have different levels of Consciousness and/or Intelligence? As insane_alien says, we already have machines with Artificial Intelligence, and some people argue that computers are conscious. But how do you measure whether they have consciousness, and at what level relative to average humans? Personally, I would not call them conscious, but I am willing to accept a very, very tiny level.

The other way around, I don't think you can "perceive the relationship between oneself and one's environment" without "the capacities to reason", but that doesn't mean you couldn't have a very conscious creature with almost zero intelligence.

"Do animals have consciousness? Do they have intelligence?"

"Animal" has a very broad definition, and biologically humans are included. Excluding humans, as I guess you meant, still leaves plenty of different species. Both insects and mammals are animals, but they are quite different.

Do ants have Consciousness and Intelligence? Clearly they have a little intelligence, since they can solve simpler problems, but I don't think they are conscious; they are more like automatons calculating the risks for efficiency, without any personal desire. (But as said above, I can accept a very, very tiny level.)

Do dogs have Consciousness and Intelligence? With dogs, and mammals in general, my opinion is that we have moved away from the extremes close to zero and can be confident that they have both, but to a lesser degree than humans. Is it possible to have personality, dreams and desire or fear without Consciousness? My dogs have dreams with both desire and fear in them. During sleep, solely by watching their movements and listening to the sounds they make, I can imagine what they are dreaming about: running for joy, chasing something, being chased, fighting and so on. When awake they show more than intelligent behaviour; they have personality, they have feelings and can clearly sense my feelings. If I am happy they want to join in, if I am sad they try to comfort me, if I am angry they avoid me, and they are even able to understand some jokes.
Spyman Posted May 7, 2007

"US researchers have simulated half a virtual mouse brain on a supercomputer. In other smaller simulations the researchers say they have seen characteristics of thought patterns observed in real mouse brains. Now the team is tuning the simulation to make it run faster and to make it more like a real mouse brain."
http://news.bbc.co.uk/2/hi/technology/6600965.stm

If Moore's law holds and they manage to run a simulation of a whole brain in real time in a couple of years... will the simulation be intelligent, conscious, maybe even considered alive?
insane_alien Posted May 7, 2007

i would imagine that unless they manage to load a brain state with the same pattern as a living mouse then it will be brain dead. if they do that then the simulated brain would likely go insane from the lack of sensory input.
JohnF Posted May 7, 2007 (Author)

If they get a whole mouse brain simulated and manage to add a voice synthesiser, just pray the first answer from it isn't 42
Spyman Posted May 8, 2007

insane_alien, that was a sad and boring point of view. I think "they have seen characteristics of thought patterns" at least indicates that something is going on; there is no reason for a simulation twice as large to become "brain dead" or "insane". (Sensory input/output to/from a simulation can be virtual too.)

JohnF, LOL! But after the first shock and several years of hard work analysing all the data from the simulation, the scientists finally find out that it was a practical joke done by a technician during his last night shift.

How about a more serious approach: what are your thoughts and reflections IF they pull it off? I remember reading an article, some ten years ago, saying that they lacked the computer power to simulate a fly brain. It's not impossible to imagine a simulation of a human brain in another ten years. Would it be a person? Should it have rights?
JohnF Posted May 8, 2007 (Author)

It's difficult to imagine what the result will be if they do get a full mouse brain working. Unless they can simulate some I/O for it, they may get some very misleading data. I wonder too if the simulation runs at the same speed as a mouse brain. I suppose it all depends on what they expect and what they are looking for.
Spyman Posted May 8, 2007

"The vast complexity of the simulation meant that it was only run for 10 seconds at a speed ten times slower than real life - the equivalent of one second in a real mouse brain. For future tests the team aims to speed up the simulation, make it more neurobiologically faithful, add structures seen in real mouse brains and make the responses of neurons and synapses more detailed."
(From the link I gave in post #10.)
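Those figures allow a rough back-of-the-envelope reading of "in a couple of years". The assumptions below are illustrative and mine alone, not numbers from the article or the research team: if half a mouse brain runs at a tenth of real time, a whole mouse brain in real time needs roughly twenty times the compute, and I assume compute grows with a Moore's-law doubling every 18 to 24 months and that the simulation scales linearly with it.

```python
import math

# Illustrative arithmetic only (assumptions, not figures from the article):
# half a mouse brain at one tenth of real time -> a whole brain in real time
# needs roughly 2 * 10 = 20x the compute, assuming linear scaling.
size_factor = 2       # half a brain -> whole brain
speed_factor = 10     # ten times slower than real life -> real time
compute_needed = size_factor * speed_factor   # = 20

doublings = math.log2(compute_needed)         # ~4.3 doublings of compute
years_fast = doublings * 1.5                  # assumed 18-month doubling time
years_slow = doublings * 2.0                  # assumed 24-month doubling time

print(f"~{compute_needed}x compute, about {doublings:.1f} doublings,")
print(f"i.e. very roughly {years_fast:.0f}-{years_slow:.0f} years away")
```

Under those assumptions the gap is only a handful of doublings, which is why a real-time whole-mouse-brain run within several years is not an outlandish extrapolation, whatever one thinks it would prove.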
bascule Posted May 8, 2007

I think machines can be people. They aren't yet. When machines become people they should get the same rights as people.
Spyman Posted May 9, 2007

But when machines become mice, should they get the same rights as mice? (Not much, but more than a simulation in a BlueGene L supercomputer has.)
ecoli Posted May 9, 2007

"But when machines become mice, should they get the same rights as mice? (Not much, but more than a simulation in a BlueGene L supercomputer has.)"

Most of the rights mice get (from people) are anti-cruelty laws. But the computer doesn't really have a spinal cord... Anyway, we're still talking about simulations here. Will simulations ever be = real?
Spyman Posted May 11, 2007

"Most of the rights mice get (from people) are anti-cruelty laws. But the computer doesn't really have a spinal cord..."

"The main function of the spinal cord is transmission of neural inputs between the periphery and the brain."
http://en.wikipedia.org/wiki/Spinal_cord

I think computers have a substitute for a spinal cord; the keyboard, for example, is connected to the CPU by an interface. If your spinal cord were replaced with a mechanical/electrical device with the same function and performance, would you not be able to feel pain? If a simulated pain signal were induced in your spinal cord, would that make the feeling of pain any less intense? If the spinal cord is simulated instead of real, does that make the feeling of pain less important to you?

Let's take the example JohnF gave in post #3: "If someone is daft enough to build a mining robot that's scared of the dark then it should be given the right to operate in an illuminated environment." If our technical abilities and knowledge advance to such a level that a mouse-brain computer is small and cheap, then it could be very useful in a mining robot, and it could be more efficient with the ability to feel pain. I strongly suspect the mining company would prefer that it didn't have rights such as anti-cruelty protections...

"Anyway, we're still talking about simulations here. Will simulations ever be = real?"

How "real" are the thoughts in your brain? Are computer programs not "real"? If the autopilots in aircraft are not "real", how do airplanes stay in the air when humans aren't piloting? Don't you think a simulation would be able to navigate a plane?

"A University of Florida scientist has grown a living "brain" that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network. The "brain" -- a collection of 25,000 living neurons, or nerve cells, taken from a rat's brain and cultured inside a glass dish -- gives scientists a unique real-time window into the brain at the cellular level. As living computers, they may someday be used to fly small unmanned airplanes or handle tasks that are dangerous for humans, such as search-and-rescue missions or bomb damage assessments."
http://www.sciencedaily.com/releases/2004/10/041022104658.htm

I don't think there is any difference whether the airplane is guided by a software simulation or by a biologically created computer of living cells. Since their actions are real, the processes responsible for them must be real too.

A simulation of half a mouse brain has already been done, and the human brain is next:

"The cerebral cortex, the convoluted "grey matter" that makes up 80% of the human brain, is responsible for our ability to remember, think, reflect, empathize, communicate, adapt to new situations and plan for the future. The cortex first appeared in mammals, and it has a fundamentally simple repetitive structure that is the same across all mammalian species. The brain is populated with billions of neurons, each connected to thousands of its neighbors by dendrites and axons, a kind of biological "wiring". The brain processes information by sending electrical signals from neuron to neuron along these wires. In the cortex, neurons are organized into basic functional units, cylindrical volumes 0.5 mm wide by 2 mm high, each containing about 10,000 neurons that are connected in an intricate but consistent way. These units operate much like microcircuits in a computer. This microcircuit, known as the neocortical column (NCC), is repeated millions of times across the cortex. The difference between the brain of a mouse and the brain of a human is basically just volume - humans have many more neocortical columns and thus neurons than mice. The Blue Brain Project is an attempt to reverse engineer the brain, to explore how it functions and to serve as a tool for neuroscientists and medical researchers. It is not an attempt to create a brain. It is not an artificial intelligence project. Although we may one day achieve insights into the basic nature of intelligence and consciousness using this tool, the Blue Brain itself is simply a representation of a biological system and thus would never be considered conscious itself."
http://bluebrain.epfl.ch/

As I see it, there are no theoretical limits against making a simulation as complex as a human brain; sometime in the future we will build computers with enough capability. The questions are 1) whether the simulations could eventually be conscious, and 2) whether they should have rights.

I agree with bascule that machines can be conscious and can be persons, but before we create a human-level being we will probably make a mouse-level being, then a dog-level, a lower-primate-level, and so on, up to humans and maybe even higher. (With that said, I don't think the BlueGene L supercomputer or the Blue Brain simulation is conscious.)

"But when machines become mice, should they get the same rights as mice? (Not much, but more than a simulation in a BlueGene L supercomputer has.)"

So, more precisely, my question quoted above was: if/when they are conscious, is there a level they must surpass before they should have rights? bascule gave an easy answer: if machines become people (human-level), they would likely force us to give them rights eventually, whether we like it or not, but machines at a low level can probably be held in slavery and misery forever. Since a lot of people don't consider "animals" to be conscious, it's not likely they would accept that a low-level simulation is either.
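The "basically just volume" point in the quoted Blue Brain text can be turned into a quick illustrative calculation. The neurons-per-column figure comes from the quote; the column counts below are assumed round numbers chosen only to show the scaling, not figures from the project.

```python
# Illustrative only: the column counts are assumed round numbers, not
# Blue Brain figures. The quoted text gives ~10,000 neurons per neocortical
# column and says the human cortex repeats that column "millions of times";
# a mouse cortex is assumed here to have far fewer columns.
NEURONS_PER_COLUMN = 10_000

assumed_columns = {
    "mouse": 1_000,       # assumed order of magnitude
    "human": 2_000_000,   # assumed, consistent with "millions" of columns
}

for species, columns in assumed_columns.items():
    neurons = columns * NEURONS_PER_COLUMN
    print(f"{species}: {columns:,} columns x {NEURONS_PER_COLUMN:,} "
          f"neurons/column = {neurons:,} cortical neurons")
```

On those assumed numbers the mouse and human cortices differ by a factor of a few thousand in column count but share the same repeated microcircuit, which is the sense in which scaling a mouse-brain simulation up toward a human one is "just" a matter of volume and compute.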