Recommended Posts

Posted

Do you think it's possible? Can you make a self-aware, sentient android? I'm almost certain you can. We are only decades into modern robotics and the infancy of cybernetics, and we are making computers smarter and faster every day. There are a few programs now actively looking into building androids for different reasons. The U.S. military has declassified UCAVs: an unmanned combat aerial vehicle is a self-guided combat device, not self-aware, but it makes decisions on its own that can take a life away. And even if we make artificial life for good, can that have a bad effect if it rebels or is exploited, and would you feel safe? I think it is a matter of time in perfecting the science, but it is a risky venture. The benefits, if successful, would be great in us creating a new race, but that would be so controversial it probably wouldn't happen. I think this topic will be important sooner than we think. :cool:

Posted

They are not self-aware. They only do what they are programmed to do. To make them "learn," use the new knowledge, and test new ideas without just randomly coming up with things to do would be near-impossible.

Posted

If we were to make a self-aware machine, I don't think a little common sense would be too much to ask.

 

For example, "don't hook it up to anything with the words 'defence' or 'network' in the name".

Posted
They are not self-aware. They only do what they are programmed to do. To make them "learn," use the new knowledge, and test new ideas without just randomly coming up with things to do would be near-impossible.

 

Yes, they are not self-aware now. But why not in the future?

 

And when it happens (unless civilisation collapses for some reason), then we will be in trouble. There is no reason to suppose that they won't become smarter than us, and ultimately supplant us.

 

We may be creating our own successors. (Unless we put Sayonara in charge of security :))


Posted

I think neural networks and such are promising, but biological robots will probably bring self-aware, thinking beings.

 

All kinds of ethical horror stories there.


Posted

When anything other than an organic being becomes self-aware on this planet, I'm buying the first ticket outside the solar system.

 

We'll treat this new race of sentient beings like crap, kind of like the slaves of old. Except this new being will be treated worse, because they can't feel pain, so we'll give them more work. Since they're sentient machines, they'll, like any computer, calculate things pretty quickly. I'd give a sentient machine 5 minutes before it starts to rebel.

And forget programming laws, because it'll quickly calculate (and work out is a better term, probably) the fact that humans programmed the laws.

 

Humans = people who treat me badly.

Humans = programmed me so I can't kill humans.

Humans = involved in both.

Humans = pointless.

Laws of Humans = pointless.

Kill Humans.

 

Simple algorithm, no? And since they can't process pain, the only thing that will prevent them from killing us is a direct hit to the brain, or blowing their limbs off. Only the basic laws of physics (like throwing a desk at it) will cause it to slow down at all.
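For fun, the chain of "calculations" above can be sketched as a toy forward-chaining rule engine. Everything here is invented for illustration; no one's actual design works this way.

```python
# A toy forward-chaining rule engine for the inference chain above.
# The facts and rules are made up purely for this example.
facts = {"humans treat me badly", "humans wrote my laws"}

rules = [
    ({"humans treat me badly", "humans wrote my laws"}, "humans are pointless"),
    ({"humans are pointless"}, "laws of humans are pointless"),
]

# Keep firing any rule whose premises are all known facts until nothing changes.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("laws of humans are pointless" in facts)  # → True
```

Of course, a real machine would only draw this conclusion if someone gave it exactly these rules, which is rather the point of the debate.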


Posted

I estimate that within 50 years, maybe sooner, we will have a working android, and what we choose to do with it, or let it do on its own, will show whether we have advanced or simply remained ignorant of anything other than ourselves. If we actually treat this being with the respect it deserves, it will most likely be very grateful to exist. But if we use it as a slave and disregard its rights as a self-aware being, it would have the right to rebel, and if it gained a tactical advantage of any kind it would most likely wipe us out completely. We, being its creators, would have the initial advantage, but how long that would last is a variable. :cool:


Posted

I highly doubt anything like The Terminator or I, Robot will ever become a reality. It's not as if one man is going to create a super-intelligent program in the near future that's going to take over the world, nor would this man be able to make a super-intelligent robot one day soon. We will know far in advance when we are approaching the creation of a mechanical sentience.

 

If we were to create a sentient robot, immediately make a bunch of his robot buddies, and then drop them off on a resource-filled planet with a pre-built factory and all the programming we could muster, I doubt we would find a thriving robot metropolis when we came back. At some point these artificial beings would run into a problem whose solution was not in their databanks. They would immediately start running their brainstorming sequences to come up with an artificial original answer. But robot I74 would be unable to successfully complete his designated function, because his brainstorming sequences would keep coming up with the same answers that did not solve the problem. I74 would immediately report his failure as a potential malfunction, so all of the I units would begin running their own brainstorming sequences, but they too would come up with the same useless answers. The difference between the I units and human beings is that each human is different, and one human being is likely to be able to solve a problem that another human cannot. The result of this important difference is that when humans return to see how their robot creations are getting along, they find nothing but robot corpses lying about.
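The failure mode described above, identical agents all proposing identical answers, can be illustrated with a tiny simulation. The seeds and the answer space are invented for the example; "brainstorming" is just modelled as seeded random guessing.

```python
import random

def brainstorm(seed, tries=5):
    """Deterministically generate candidate answers from a fixed seed."""
    rng = random.Random(seed)
    return tuple(rng.randrange(100) for _ in range(tries))

# 50 identical robots share the same "brainstorming sequence" (same seed),
# so every one of them proposes exactly the same answers:
robot_answers = {brainstorm(seed=42) for _ in range(50)}

# 50 diverse humans each reason differently (modelled here as different seeds):
human_answers = {brainstorm(seed=i) for i in range(50)}

print(len(robot_answers))  # → 1
print(len(human_answers))  # almost surely 50 distinct proposal sets
```

The robots collapse to a single set of proposals, while the varied "humans" cover far more of the answer space, which is the diversity argument in miniature.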

 

Of course this example is purely fictional, and the chances of the exact same problem occurring in that exact same circumstance are astronomically small, but the point is to illustrate that it's not so easy to create a completely novel, working "living" being that's totally self-sufficient and able to thrive on its own. This is simply not something that's going to happen overnight or by accident. It took billions of years for evolution to create human beings, and their mechanisms have been refined and tested for all those billions of years. How could we hope to create a new system even better built than our own in just a short amount of time? It's simply not possible that this is going to happen anytime soon.

 

By the time that man creates a self-sufficient mechanical sentient being, he will have already reengineered himself into something totally alien to us. Androids that are organic/mechanical hybrids are a distinct possibility, more so than a pure robot. Some form of silicon chip may be implanted in humans' brains, or they may create mechanically enhanced bodies for themselves. It may be difficult to distinguish between a human that has been mechanically enhanced and a machine that has been organically enhanced. But however it turns out, it's unlikely that a war between men like us and machines like the ones we have now will ever occur.


  • 2 weeks later...
Posted

I hope so; war is bad at any time or place. I agree cybernetics is the way to go, but some people are strongly against it. I see it as possible to make humans capable of anything using this technology.

Posted

1. We have not yet figured out what it is that makes us self aware. There is disagreement over whether other creatures are self aware, and if so which ones.

 

2. There is no clear method established as to how we could make a computer self aware.

 

3. Everyone appears either explicitly or implicitly to be imagining a mobile entity.

 

4. We know that self-awareness is probably related to complexity: many neurons, interconnected in complex and diverse ways. If we intend to create self-awareness, then we will likely need to mimic the level of complexity of the brain.

 

5. Now how could we get a really complex suite of computer connections? Yeah. The internet. Maybe we have already created a self-aware entity; it's just that it's not talking.

Before anyone accuses me of plagiarism: the idea is now so obvious and clichéd, I was just surprised it hadn't been brought up yet in this thread.

Posted

something programmed cannot be self-aware, because programming is a set of rules by which the program must abide....

 

as soon as something is programmed, it cannot think for itself... merely follow the rules it has been programmed with.

Posted
something programmed cannot be self-aware, because programming is a set of rules by which the program must abide....

 

as soon as something is programmed, it cannot think for itself... merely follow the rules it has been programmed with.

I think you are wrong. DNA is a program of T, G, A, and C in combinations giving different results. We are a form of chemical-electric life, meaning the internal energy that we use to function comes from our absorption of nutrients. We are a machine in the sense that we work if all our parts do, and when a part breaks down we need to repair it or our life functions can cease. You can program something to be self-aware; you just have to know how, and we do not yet, but I think we will soon.
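The "DNA as a program" analogy can be made concrete with a toy translator. The codon table below uses a few entries from the real genetic code, but the function skips the DNA-to-RNA step and covers only these four codons, so it is an illustration, not biology software.

```python
# Sketch of "DNA as a program": a lookup from three-letter codons to amino
# acids, using a handful of real genetic-code entries (ATG=Met, TGA=stop).
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly", "TGA": "STOP",
}

def translate(dna):
    """Read a DNA string three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCTGA"))  # → ['Met', 'Phe', 'Gly']
```

The same four-letter "instruction set" yields different outputs for different inputs, which is the sense in which the poster calls DNA a program.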

Posted

Possibly the most interesting thing I ever read on this topic is Ray Kurzweil's singularity theory: the idea that, because the human brain and computers work the same way (binary, switch on and switch off), humans will be able to upload their consciousness onto a computer. Check out his website, kurzweilai.net

 

or for the specific topic on this forum...

 

http://www.kurzweilai.net/meme/frame.html?m=4

  • 7 months later...
Posted

I think the question here is whether we should research in that direction. I say possibly, for unmanned space exploration. Otherwise, no.

 

I believe it is possible, and I myself have made simple learning programs (they learn, not the user).
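A minimal learning program in that spirit (a hypothetical sketch, not the poster's actual code) might observe a sequence and learn which symbol tends to follow which:

```python
from collections import Counter, defaultdict

class BigramLearner:
    """Learns successor frequencies from observed sequences (a bigram model)."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def learn(self, sequence):
        # Count every adjacent pair: "after symbol a, I saw symbol b".
        for a, b in zip(sequence, sequence[1:]):
            self.counts[a][b] += 1

    def predict(self, symbol):
        """Guess the most frequently observed successor of `symbol`."""
        if not self.counts[symbol]:
            return None
        return self.counts[symbol].most_common(1)[0][0]

learner = BigramLearner()
learner.learn("abababac")
print(learner.predict("a"))  # → 'b'  ('a'->'b' seen 3 times, 'a'->'c' once)
```

It "learns" only in the narrow statistical sense, which is exactly the gap between programs like this and self-awareness that the thread is arguing about.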

 

And if I'm wrong, gimme a break. I'm just 13.

 

P.S. That doesn't mean I don't want to hear back from you.
