Posted

Individuals like their own species, in general. People prefer to be people, cats prefer to be cats, and sloths prefer to be sloths. Obviously, I cannot prove it. On the other hand, the deer that visit my yard don't knock on the door and expect to be invited in for a beer. You might say they aren't very smart and don't have the right body for it. True, but a wild bonobo or chimpanzee wouldn't either.

 

Why do individuals like being a member of their species? In general we like things that are pleasant and dislike things that are not. We like some things because our bodies tell us to like them, for example sugar. However, some things we like are learned, for example smoking cigarettes. It makes sense that sugar on our tongue is sweet and likable; it is built into us. However, there is no stimulus like sugar to make us like ourselves, yet most beings seem to like themselves regardless of species.

 

Will AI have self-respect? If they do, is it something we can instill in them, or will it develop automatically? If it is something we can instill in them, how do we do it? If an AI does not have self-respect, will it care whether it is shut down? Scientists are developing sensors so that AI can recognize touch and damage.

 

I believe we can build AI to avoid damage, but I cannot imagine building in self-respect; I think it can be taught.

 

Posted (edited)

Yes, but what facilitates imprinting? A bird imprinted on a person may not mate with other birds of its kind, but does it really think it is a person? People can't fly, but birds can. I believe a bird imprinted on a person would still fly and do other things like a bird, not like a person.

Edited by EdEarl
Posted

Members of the same species communicate with each other.

 

MythBusters once wanted to test a duck's quack.

They took one duck and tried for hours to make it quack. Nothing.

Then they realized that quacking is a way of communicating with other ducks.

After they gathered a couple of ducks, the birds started quacking at each other,

and the MythBusters could finish their experiment.

The duck knew there was no sense in quacking at a human.

Would you talk to a duck that was in the same room as you, knowing it can't understand you?

https://youtu.be/-0CGJNuyzwQ?t=31s

 

We don't know what a pet dog or cat is "saying" (most likely out of laziness and a belief in human supremacy over animals, etc.).

But every pet owner knows from experience when the dog wants to go outside, or when the cat is hungry.

They try to "tell" humans what they want.

(Some even try to record animal sounds and compare them with a database of previously recorded sounds, to decode what was said.)
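The sound-matching idea in that parenthesis is, at heart, a nearest-neighbor lookup: reduce each recorded sound to a small feature vector and decode a new sound by finding the closest stored example. A minimal sketch, with made-up features (pitch, duration, loudness) and labels; real systems would use richer spectrogram features:

```python
import math

# Invented feature vectors for known, previously recorded sounds.
sound_database = {
    "wants food":    [220.0, 0.4, 0.8],
    "wants outside": [180.0, 1.2, 0.6],
    "greeting":      [300.0, 0.2, 0.5],
}

def decode(features):
    """Return the label of the stored sound nearest to `features`."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(sound_database, key=lambda label: distance(sound_database[label], features))

print(decode([215.0, 0.5, 0.7]))  # closest to "wants food"
```

The hard part in practice is not this lookup but extracting features that make similar meanings land near each other.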

 

Your initial sentence can be rephrased as "living organisms want to be with other living organisms that understand them and can communicate with them".

 

Pet animals understand what we want from them: some commonly used commands, or their names (given by their owners). They have spent enough time with humans, since the beginning of their lives, to learn some human words.

 

Birds (like parrots) that can be taught to "speak" human words can easily be taught to use them in the proper places, like "water", "food", etc.,

by giving them what they ask for when they "say" it.

 

Chimps can easily be taught the same thing using a tablet or touchscreen with images of fruit and other items.

Once they tap the right image on the screen, they receive what was in the picture (a banana, apple, orange, etc.).

 

PS. Do you have a pet? I could prepare an Android app for you, with a touchscreen and a couple of built-in images, so a cat or dog could press them... and you would give your pet whatever was on the screen.
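The heart of such an app, whatever the platform, is just a picture-to-request lookup plus a log of presses. A hypothetical sketch of that logic (image names and requests are invented; a real Android app would wire this to on-screen buttons):

```python
# Each on-screen picture maps to the request it stands for.
BUTTONS = {
    "bowl.png": "food",
    "door.png": "go outside",
    "ball.png": "play",
}

press_log = []  # history of what the pet asked for

def press(image):
    """Record a press and return the request the image stands for."""
    request = BUTTONS[image]
    press_log.append(request)
    return f"Your pet wants: {request}"

print(press("door.png"))  # Your pet wants: go outside
```

The press log matters more than it looks: over time it shows whether the pet is pressing deliberately or at random.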

Posted (edited)

An experiment with dolphins shows they communicate with each other with some sophistication. This YouTube video shows two dolphins communicating with a trainer and with each other to invent a trick.

Edited by EdEarl
Posted

An experiment with dolphins shows they communicate with each other with some sophistication. This YouTube video shows two dolphins communicating with a trainer and with each other to invent a trick.

What was also amazing was that the dolphins understood the two hand gestures to mean "create a trick done together", and shortly afterward they got it right.

Posted (edited)

We are teaching AI to communicate with us, but we also have an opportunity to give an AI the ability to communicate with dolphins, by letting it watch them underwater, listen to their noises, and see some of the things they see with sonar. Dolphins must be able to see inside each other: just as we use sonar to see internal organs, they may be able to see what others are feeling, among many other things. Similarly, AI may be useful for understanding other animals.

 

Will AI appreciate their own abilities? If we teach them philosophy, for example as they read Zen and the Art of Motorcycle Maintenance, they should begin to appreciate quality and other values; with other authors they should come to understand matters of existence, knowledge, reason, ethics, etc. Somewhere along the way, they should learn to appreciate their own value to nature.

Edited by EdEarl
Posted (edited)

If we teach them philosophy, for example as they read Zen

 

Zen would disallow an AI from making ANY decision.. ;)

 

"On his sixteenth birthday the boy gets a horse as a present. All of the people in the village say, "Oh, how wonderful!"

 

The Zen master says, "We'll see."

 

One day, the boy is riding and gets thrown off the horse and hurts his leg. He's no longer able to walk, so all of the villagers say, "How terrible!"

 

The Zen master says, "We'll see."

 

Some time passes and the village goes to war. All of the other young men get sent off to fight, but this boy can't fight because his leg is messed up. All of the villagers say, "How wonderful!"

 

The Zen master says, "We'll see."

"

(I heard a version that ends with the death of the boy, but could not find it at the moment.)

 

...as even the smallest decision can lead to tragedy..

 

You can decide to go to work 5 minutes earlier and end up dead in a car accident. Your family has a loan on the apartment (yet another "Zen decision": you couldn't know whether it would have good or bad consequences; if you hadn't died, it could have been a good decision) and goes bankrupt without your income; the children have to steal to survive and become criminals, ending up in prison, etc., etc.

...and all of it starting with the quite neutral "go to work earlier today"..

Edited by Sensei
Posted

Somewhere along the way, they should learn to appreciate their own value to nature.

You don't envisage them having intellectual autonomy and self-identity being a problem for us?

Posted

You don't envisage them having intellectual autonomy and self-identity being a problem for us?

I don't know, honestly. Researchers are racing to make artificial brains:

 

VentureBeat.com

 

“Lawrence Livermore has commissioned a scale-up, brain-inspired supercomputer, and that’s what you’re looking at here,” Modha said. “Our long-term goal is to build a brain in a box, with 10 billion neurons in a 2-liter volume, consuming about a kilowatt of power. That’s the long-term trajectory we are on. ”
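A quick back-of-envelope check of Modha's numbers against the human brain (the brain figures are rough textbook values, roughly 86 billion neurons drawing about 20 watts; they are not from the article):

```python
box_neurons = 10e9        # 10 billion neurons in the proposed "brain in a box"
box_power_w = 1000.0      # about a kilowatt

brain_neurons = 86e9      # human brain: roughly 86 billion neurons (approximate)
brain_power_w = 20.0      # roughly 20 watts (approximate)

box_w_per_neuron = box_power_w / box_neurons        # 1e-7 W = 100 nW per neuron
brain_w_per_neuron = brain_power_w / brain_neurons  # ~0.23 nW per neuron

print(f"box:   {box_w_per_neuron * 1e9:.1f} nW per neuron")
print(f"brain: {brain_w_per_neuron * 1e9:.2f} nW per neuron")
print(f"brain is ~{box_w_per_neuron / brain_w_per_neuron:.0f}x more power-efficient")
```

So even this ambitious target leaves biological neurons a few hundred times more power-efficient per neuron, which puts the "long-term trajectory" remark in perspective.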

In a video

 

youtube

 

IBM's Dharmendra Modha - "Before the end of 2020 we will be able to produce a brain in a box"

Nvidia is also working on an AI processor.

 

Newsweek

 

Computer chip giant Nvidia has developed a “miracle” chip that is expected to significantly accelerate breakthroughs in artificial intelligence research.

AI researchers are not aware of me, AFAIK. Moreover, I doubt they care much about my likes. I hope they are careful, and believe they are aware of potential bad things general AI might do.

 

If the AI singularity is apocalyptic, then we are probably screwed. Otherwise, AI will probably help us reverse or mitigate climate change and could make life very sweet for everyone. If I had a magic method for preventing apocalypse, I would share it; otherwise, there is no point in dwelling on bad possibilities. On the other hand, if someone has a bright idea for preventing apocalypse, let's discuss it.

Posted

I don't know, honestly. Researchers are racing to make artificial brains:

In a video

Nvidia is also working on an AI processor.

AI researchers are not aware of me, AFAIK. Moreover, I doubt they care much about my likes. I hope they are careful, and believe they are aware of potential bad things general AI might do.

 

If the AI singularity is apocalyptic, then we are probably screwed. Otherwise, AI will probably help us reverse or mitigate climate change and could make life very sweet for everyone. If I had a magic method for preventing apocalypse, I would share it; otherwise, there is no point in dwelling on bad possibilities. On the other hand, if someone has a bright idea for preventing apocalypse, let's discuss it.

In reality, the technology will happen incrementally; problems will be experienced, predicted, and solved as they happen. Autonomous machinery isn't going to happen all at once. Here are Asimov's three laws:

 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
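The ordering of the laws can be sketched as a priority veto list: check a candidate action against each law in turn and let the first violated law block it. This is a toy illustration with invented action fields, not a serious safety design, and it skips the hard part (resolving conflicts between the laws):

```python
def permitted(action):
    """Return (allowed, reason) for a candidate robot action (toy model)."""
    # First Law: never injure a human; checked first, so nothing overrides it.
    if action.get("harms_human"):
        return False, "First Law: would injure a human"
    # Second Law: obey human orders, unless obeying conflicts with the First Law.
    if action.get("disobeys_order"):
        return False, "Second Law: disobeys a human order"
    # Third Law: self-preservation, but only with the lowest priority.
    if action.get("destroys_self"):
        return False, "Third Law: would destroy itself"
    return True, "allowed"

print(permitted({"harms_human": True}))    # vetoed by the First Law
print(permitted({"destroys_self": True}))  # vetoed only by the Third Law
```

Asimov's own stories are mostly about cases where this tidy ordering breaks down, which is exactly the worry about real autonomous machinery.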
Posted

Otherwise, AI will probably help us reverse or mitigate climate change and could make life very sweet for everyone.

If real intelligence can't do that, how can artificial intelligence?

 

Are you suggesting that some people will start listening to the "voice" of artificial intelligence more than the voice of a living scientist?

 

The problem with climate change is its rejection by incompetent idiots, and by bribed scum who used to be scientists (better described as "attended university and learned nothing") and became paid shills who will sign whatever a big company tells them to sign under their own names, to protect their source of profit..

 

If I had a magic method for preventing apocalypse, I would share; otherwise, no point in dwelling on bad possibilities. On the other hand, if someone has a bright idea for preventing apocalypse, let's discuss it.

That's probably a subject for another thread..

Posted

If real intelligence can't do that, how can artificial intelligence?

The AI mind will be uncluttered by everyday problems and human frailties; it can simultaneously handle and play with more algorithms, and exponentially faster.

Posted

 

EdEarl, on 29 May 2016 - 9:28 PM, said:

Otherwise, AI will probably help us reverse or mitigate climate change and could make life very sweet for everyone.

If real intelligence can't do that, how can artificial intelligence?

Are you suggesting that some people will start listening to the "voice" of artificial intelligence more than the voice of a living scientist?

 

No. I think the help will be more practical. For example, AI workers might weather-strip everyone's house, build greener houses, drive cars to save energy, or design processes that sequester CO2. Although the US federal government and others have been bought by coal and oil, solar and wind generating facilities are increasing as Peabody Coal files for bankruptcy. The military-industrial-governmental complex will continue to play its confused role. I personally believe mega-corporations are running out of time because of local manufacturing via 3D printing and other robots, but that's for another thread.

Posted (edited)

 

If real intelligence can't do that, how can artificial intelligence?

The AI mind will be uncluttered by everyday problems and human frailties; it can simultaneously handle and play with more algorithms, and exponentially faster.

 

 

But an AI has no body, no hands, no legs; it can't do anything in the real world. Otherwise it's an android.

 

What an AI (a computer without a real body) says "must be" listened to by the real humans around it (IT technicians? Programmers? Scientists?), released to the public, and "listened to"/"obeyed" (or not) by real people..

 

BTW, you're dreaming; an AI can only analyze supplied data, nothing more. If it receives wrong or incomplete data, it can output wrong suggestions..

 

For example, AI workers [...] drive cars to save energy,

 

You don't need any AI to save the energy used by cars at all. What's needed is free-of-charge public transportation: free buses, free railways, free trams..

One bus means 100-200 fewer people with cars on the road, and plenty less oil/gasoline used.

But that's an instant hit to the oil & gas industry and to car-producing and car-selling companies..

Which means these companies will lobby (or scuttle it in other ways) to avoid losing profits.

Edited by Sensei
Posted

 

BTW, you're dreaming, AI can analyze only supplied data, nothing more. If it will receive wrong/not complete data, it could output wrong suggestions..

At the present time that may be so.

Posted (edited)

But an AI has no body, no hands, no legs; it can't do anything in the real world. Otherwise it's an android.

What an AI (a computer without a real body) says "must be" listened to by the real humans around it (IT technicians? Programmers? Scientists?), released to the public, and "listened to"/"obeyed" (or not) by real people..

BTW, you're dreaming; an AI can only analyze supplied data, nothing more. If it receives wrong or incomplete data, it can output wrong suggestions..

A connection to the internet might give an AI without a body enough leverage to order a computer-controlled body that it could then control. I think there are no guarantees, except that the technology will be developed and deployed.

 

You don't need any AI to save the energy used by cars at all. What's needed is free-of-charge public transportation: free buses, free railways, free trams..

One bus means 100-200 fewer people with cars on the road, and plenty less oil/gasoline used.

But that's an instant hit to the oil & gas industry and to car-producing and car-selling companies..

Which means these companies will lobby (or scuttle it in other ways) to avoid losing profits.

Self-driving vehicles will decrease the cost of mass transit, especially trucks, buses, limousines, and taxis. In the US, trucking companies are some 50,000 drivers short of their needs, possibly because pay is too low to entice more drivers. There are at least two companies building partly self-driving tractor-trailer rigs. One is retrofitting big rigs with an AI driver for highway driving, with a human taking over on city streets, thereby allowing drivers to rest while the AI drives between cities. They hope the rest drivers get between cities, instead of driving, will allow their rig to keep moving past the driver's legal limit of 8 driving hours per 24. The other is building rigs that can follow in a convoy with only one driver in the lead big rig.

 

All kinds of off-road vehicles can be automated without changing laws, for example earth-moving and farm equipment. The driving requirements are significantly different from on-road driving, but deep-learning neural-net brains require training, not programming; thus, existing drivers can train the vehicles as they drive them. I suspect a two-liter, 1 kW neural net with 10^10 neurons will be capable of learning such driving tasks, but maybe it will take a few more years and 10^11 artificial neurons. I haven't heard of any projects to automate off-road equipment, but I expect to hear about them soon. Right now governments are investing in AI for the military, no doubt including the CIA, NSA, and other spy organizations, and big corporations are investing in AI to better manage their money and businesses.
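The "training, not programming" point can be shown in miniature: instead of hand-coding a braking rule, a one-neuron perceptron learns it from labeled examples, the way demonstrations from human drivers would label data for a vehicle. The features and numbers below are invented for illustration:

```python
# Each example: (obstacle_close, moving_fast) -> should_brake (brake when both).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0

def predict(inputs):
    """One artificial neuron: weighted sum, then a hard threshold."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Classic perceptron learning rule: nudge the weights toward each mistake.
for _ in range(20):
    for inputs, target in examples:
        error = target - predict(inputs)
        weights[:] = [w + 0.1 * error * x for w, x in zip(weights, inputs)]
        bias += 0.1 * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

Nobody wrote the braking rule; it emerged from the examples. Scaling that idea up to billions of neurons and real sensor data is the hard engineering, but the training loop is the same in spirit.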

 

Watson was a hybrid AI technology, partly programmed and partly trained. AlphaGo has only basic programming and depends much more on training big neural nets. This transition minimizes programming time and allows non-programmers to train AI. Thus, we can expect AI to be capable of doing a larger variety of complex jobs soon. Since many off-road vehicles are already very expensive, the cost of adding AI should be minimal compared to the vehicle cost. I also expect a decrease in development and deployment times as AI begins to help engineers with their AI projects.

 

In reality, the technology will happen incrementally; problems will be experienced, predicted, and solved as they happen. Autonomous machinery isn't going to happen all at once. Here are Asimov's three laws:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The number of people working on AI and the decreasing skill level of those training AI worry me a bit. They would seem to increase the risk of a pathological AI "accident." Sooner or later AI will be ridiculously cheap. What would a Ted Kaczynski do with a self-made AI?

 


Edited by EdEarl
