
Posted (edited)
Analogy-wise, it seems we have already created some AI devices: governments, companies, various organisations that multiply our individual capabilities. And often we have done it by modeling our own bodies; an organisation usually has a head, and different departments responsible for various functions: information gathering, planning, resource acquiring, product manufacturing, purpose fulfillment, etc. All the things we do. Sayings like "the market has a mind of its own", or "the lifeblood of an organisation", along with corporate culture and mission statements, allude to the fact that these organisations we put together are AI devices of a sort. We build them with purpose in mind, with functions they are to fulfill, with rules and structures, analogous to our own.

 

Often they fulfill their purpose, but not without wars, layoffs, and red tape getting in the way of what an individual who is part of the organisation wishes.

 

And the will of the designers, the people in control of the apparatus, often prevails, at the expense of someone who might wish it were different.

 

We are constantly tweaking the organisations we build, cycling between regionalization and central control, for instance, as going too far in one direction or the other has disadvantages.

 

We might be able to build an AI, but a few things. One, it will be our AI, subject to our whims (and those of its designers and operators). Two, it will not be able to come up with anything that we would listen to, any more than we listen to our government, our company, or our favorite political party. Where its findings and functions are important and valuable to us, we will go along. Where it fails to suit our purposes we will ignore it, change it, or seek to unplug it. That, anyway, is what I would expect, going by our history.

 

Regards, TAR

 

 

REPLY: Yes, I also see what you described so well happening now. In many very real ways, anyone who has a computer and can go online already has super-human capabilities. And when you connect any number of such individuals and link them to any given task, a multiplier effect is created that makes this linked entity all the more potent and super-human. You also mentioned all the glitches and such these systems are prone to. SNAFUs [Situation Normal All F...ed Up], you might choose to put it. Anyway, I must disagree with you about being able to control or turn off a true AI unit; a robot is the way I picture them. Being of super-human intelligence, what may seem a sure-fire way of controlling them would be analogous to a dog figuring it had such a wonderfully conceived harness to confine and control us with that there is no way we could get out of it. To its doggy brain this would seem obvious. But a human motivated to break free of the dog's control would soon find solutions to the problem of freeing himself. ...Dr.Syntax

Edited by dr.syntax
Posted
Not really; an artificial intelligence need not in any way be related to our neural system, nor is there any guarantee that we would even understand how the AI works. On the other hand, a proper AI should be able to figure out how to make us smarter. But we could also skip that step and work on making ourselves smarter directly. We would take longer at that than an AI would at making itself smarter, because our generation time is so slow and due to ethical considerations, but in this case we wouldn't need to create an intelligence, only improve one.

 

As I have said before it would be very, very difficult to copy something that we don't understand. It can be reasonably assumed that we would have to have a decent understanding before we would have much success in copying it.

Posted
As I have said before it would be very, very difficult to copy something that we don't understand. It can be reasonably assumed that we would have to have a decent understanding before we would have much success in copying it.

 

Not really, animals and dumb students copy things they don't understand all the time.

Posted
Not really, animals and dumb students copy things they don't understand all the time.

 

Copying Text [math]\ne[/math] Reverse engineering

 

Completely different abilities being described, so it's important not to conflate the two.

Posted
Being the eternal optimist that I am, I don't believe it'll ever come to that - if anything we are far more likely to wipe ourselves out than have some rogue AI do it.

 

With the understanding of how to build an AI, it's possible we would learn to expand our own intelligence and make ourselves smarter; if we can do that, it would certainly be worth it.

 

 

REPLY: I am no optimist, though I do remain hopeful. I, along with so very many others, have been involved in situations where things go terribly wrong. The news is full of such events. I'll confine myself to a list of technological disasters, including some with malicious intent on the part of people, because that is an ever-present component of the way things are, now and in the past. These forces are always at work and influence much of what goes on in this World: the New York trade towers attack and resulting disaster; the Chernobyl nuclear reactor disaster; Pearl Harbor, Dec. 7, 1941; Hiroshima; Nagasaki; the firebombing of Dresden, Germany, in WWII; the Interstate 35W bridge collapse in Minneapolis, Minnesota; the Hepatitis-C epidemic related to incredibly foolish blood gathering [paying indigents $5.00 for a pint of blood] and then pooling this blood prior to extracting different components. This was the way much of the blood was gathered and processed prior to 1996. This practice was overseen by whom? Many thousands of Viet Nam War veterans contracted this and other deadly diseases from blood transfusions because of this incredibly reckless practice. One out of four units [pints?] was contaminated with Hepatitis-C. So if you needed even 4 units transfused you were very likely to become infected with Hep-C. The same was true for anyone, such as car accident victims, up until 1996.

I could go on and on. What reason does anyone have to think there is anyone OUT THERE keeping tabs on any of this research and development, even here in the US? And we are certainly not the only players in this game, far from it. Regards, ...Dr.Syntax

Posted
Copying Text [math]\ne[/math] Reverse engineering

 

Completely different abilities being described, so it's important not to conflate the two.

 

Exactly my point.

 

REPLY: I am no optimist, though I do remain hopeful. I, along with so very many others, have been involved in situations where things go terribly wrong. The news is full of such events. I'll confine myself to a list of technological disasters, including some with malicious intent on the part of people, because that is an ever-present component of the way things are, now and in the past. These forces are always at work and influence much of what goes on in this World: the New York trade towers attack and resulting disaster; the Chernobyl nuclear reactor disaster; Pearl Harbor, Dec. 7, 1941; Hiroshima; Nagasaki; the firebombing of Dresden, Germany, in WWII; the Interstate 35W bridge collapse in Minneapolis, Minnesota; the Hepatitis-C epidemic related to incredibly foolish blood gathering [paying indigents $5.00 for a pint of blood] and then pooling this blood prior to extracting different components. This was the way much of the blood was gathered and processed prior to 1996. This practice was overseen by whom? Many thousands of Viet Nam War veterans contracted this and other deadly diseases from blood transfusions because of this incredibly reckless practice. One out of four units [pints?] was contaminated with Hepatitis-C. So if you needed even 4 units transfused you were very likely to become infected with Hep-C. The same was true for anyone, such as car accident victims, up until 1996.

I could go on and on. What reason does anyone have to think there is anyone OUT THERE keeping tabs on any of this research and development, even here in the US? And we are certainly not the only players in this game, far from it. Regards, ...Dr.Syntax

 

Those are completely different. The incidents you relate are not similar to this at all; none of them were created in a lab to suit a purpose that would fall under a category such as AI.

 

True, the nuclear weapons and such were developed in labs, but they were designed to be destructive. If an AI were designed to be destructive then it would be, too, and that would render most of the arguments in this thread worthless. We are discussing evolved nature and how it would affect the AI, not an AI created as a sort of virtual nuclear device.

Posted (edited)
Exactly my point.

 

 

 

Those are completely different. The incidents you relate are not similar to this at all; none of them were created in a lab to suit a purpose that would fall under a category such as AI.

 

True, the nuclear weapons and such were developed in labs, but they were designed to be destructive. If an AI were designed to be destructive then it would be, too, and that would render most of the arguments in this thread worthless. We are discussing evolved nature and how it would affect the AI, not an AI created as a sort of virtual nuclear device.

 

REPLY: What reason do you have for believing that AI is somehow being created in some laboratory somewhere by a group of ethically minded engineers and scientists who also don't make critical errors? My whole point is that this is not what history has consistently recorded as the way technological advances, and the implementation of those technologies, occur in the real World we all live in. Bridges are designed, built, and collapse. A better example of just such a failure would be that suspension bridge that failed due to the unforeseen effect of a wind coming at it that generated a resonance effect, causing the bridge to begin undulating to the point of rapid structural failure. The way that blood was collected and processed prior to 1996 guaranteed it would spread infectious diseases. Any person of moderate intelligence should have been able to identify the recklessness of the way blood was being collected and processed. And yet this is exactly the way it was done, for how many decades?

And how can anyone living in the current era not know that any technology that comes to be will be accessible to those with malicious intent, sooner or later, one way or another?

There is less and less control of the military-industrial complex President Eisenhower warned the World about as he exited our World's stage when his Presidency ended. I consider him the most knowledgeable man of the modern era when it comes to just how prone our race is to creating disastrous situations on a massive scale.

I could go on, but you will either accept my thinking or reject it. There simply is no such thing as control of a monumental technological advance like this one. It is in the hands of at least some thousands of people who, as a whole, have no particular loyalty to any Nation or set of ethics. The advent of atomic bombs is a good example of what I am talking about: a massive ongoing effort to control their proliferation has been at work, and yet they appear to be spreading at an accelerating pace. I will end on that troubling note. ...Dr.Syntax

Edited by dr.syntax
addition
Posted

Critical errors I will of course accept, but I fail to see how that makes it similar to a nuclear weapon, which was intended to be destructive.

 

I do agree with your thinking, but you need to understand that your examples don't quite fit your specific reasoning. Nuclear weapons, for one, were created to destroy, and if an AI were designed to do that then most of the arguments we've presented are worthless.

 

If we talk about emergent qualities, then underlying errors and flaws in the programming could be manifested in any number of complex and unpredictable ways, but to believe that those would inevitably lead to a destructive AI is a little short-sighted.

 

I also agree with you that if it were a military project it would have far more potential for danger than a regular lab project, for obvious reasons.

Posted
Critical errors I will of course accept, but I fail to see how that makes it similar to a nuclear weapon, which was intended to be destructive.

 

I do agree with your thinking, but you need to understand that your examples don't quite fit your specific reasoning. Nuclear weapons, for one, were created to destroy, and if an AI were designed to do that then most of the arguments we've presented are worthless.

 

If we talk about emergent qualities, then underlying errors and flaws in the programming could be manifested in any number of complex and unpredictable ways, but to believe that those would inevitably lead to a destructive AI is a little short-sighted.

 

I also agree with you that if it were a military project it would have far more potential for danger than a regular lab project, for obvious reasons.

 

 

REPLY: I surely hope you are right. I want to go on living and truly do care about the future of mankind. Possibly all of our postings amongst ourselves here in this forum will help in some way to make the people involved in this endeavour more aware of the dangers involved, and at least in some way influence their decisions for the better as this process unfolds. Sincerely, Dr.Syntax

Posted

Unintended consequences of anything we bring into reality are a sure thing. Sometimes the consequences are welcome, sometimes not. Where not welcome, we establish something that will serve to counteract them, or we remove the initial thing as unworkable and try something different.

 

Imagining a superintelligence, faced with the fact that anything it does affects reality, sometimes in unexpected ways, I would guess the superintelligence would attempt to address this in some way. Perhaps by seeking more information about the universe, running small experiments to see the chain of events that would follow from certain combinations of things, modeling the universe more and more precisely. Perhaps it would command more and more resources in its attempt to model the momentum and position of every piece of stuff, and its arrangement and its relationship to every other piece.

 

How much of our (humans') energy and resources will we allow this machine to consume in its quest? And if it were able to determine courses of action that had only intended consequences, by what rules, and to whose and what's advantage and disadvantage, would the choice of action be taken?

 

Regards, TAR

Posted
Copying Text [math]\ne[/math] Reverse engineering

 

Completely different abilities being described, so it's important not to conflate the two.

 

Cool, I didn't know animals could copy text. :rolleyes: No, what I'm talking about is that animals can copy actions, and students copy formulas, without understanding how or why they work. Reverse engineering is exactly the same; we just need to know enough about neurons to copy the "formula" contained in the brain. Obviously, there is no guarantee that we will understand the brain as a whole afterwards, nor does failure to understand the brain as a whole mean we can't copy it in the first place.

Posted
Cool, I didn't know animals could copy text.

There have been studies with many non-human primates that show them able to reproduce symbols and draw them themselves. Either way, the idea is that social modeling in a behavioral sense is not the same as reverse engineering a complex machine or system, so despite your (well deserved) snark, my point stands. I tend to agree with your point that there is a degree of overlap between copying behavior and copying a system, but the level of intellectual abstraction required to copy a system is far greater, and hence not equivalent nor as common... despite the obvious overlap on the periphery.

Posted (edited)
So the benefits would be great, but then they outweigh the possible risk of the AI wiping humanity off the face of the Earth?

 

Hello, A Tripolation. That is not the way I see it. The possible benefits, as wonderful as they may be, do NOT outweigh the risks. Add to that, as time moves on, the chances of something going terribly wrong are always there. I can very easily imagine a scenario in which some individual, or more likely groups of these AIs, would soon enough find reasons that would compel them to war against each other. That is the history of not only mankind but life itself on Earth. The competition for resources, food, a place to live, and such is the very essence of what formed us. Without this force at play in the evolutionary history of life on Earth, all the magnificent creatures that exist today would not exist. This force is utterly indifferent to the well-being of people or any other organism.

My point is this: would it not seem likely that this same competitive force, for lack of a better word, would come into play as these AI units multiplied and diversified? In the scenario I envision, we people would be very lucky if we were allowed to exist at all. Sooner or later one or more groups of these AI units would decide we were getting in their way. After all, there are about 6.6 BILLION people here now, with that number increasing dramatically.

So I don't see much of a future at all for mankind. Maybe my mind is in some way altered by my experiences as a "rifleman", which is what the Marine Corps called us. An infantryman. I was there, in Viet Nam, from early Jan. 1968 until mid-July 1969, with some time out of Country because I had been so severely wounded in the summer of 1968 that I got sent back to the States, given a 30-day convalescent leave, sent to Camp Pendleton for exactly the same course of advanced infantry training I had already gone through, and returned to Viet Nam. I did have some great times in Tijuana while training there at Pendleton, and Okinawa also. Okinawa was where you were sent before you were sent to Viet Nam. A way station of a sort, and a magnificently wild and free one, when off base. Everyone, officers included, wanted to pack in some good times, because it might be the last chance they would ever have. These were wonderful, wild times with the women, prostitutes, that were abundantly available at very reasonable rates.

But I participated in some of the large battles, by Viet Nam War standards. We were at times in and amongst each other, and you have to kill them before they kill you. There is no mercy for anyone, and the penalty for losing is death. I would add that, at that time, and in the units I served with, we Marines did not get along well with each other at all. At times there were fights happening every day, fights that I was involved in.

So I either have a distorted view of the World or a more realistic one, depending on how you look at it. Your Friend, ...Dr.Syntax

Edited by dr.syntax
Posted
Goodness no, Dr. Syntax. You misread my question.

Notice the "?" at the end of the sentence. I was asking RyanJ to elaborate on his view that an AI's benefits would outweigh the risk of having an AI.

Here is his post that I mentioned in my post.

 

I do not think a true AI could ever be justified.

I watch too many sci-fi movies. :D

 

 

REPLY: I am sorry I misinterpreted you. I've been a bit more confused than is usual for me, not getting enough sleep and such. It is always good to hear from you. Your Friend, ...Dr.Syntax

Posted

Dr Syntax, what do you think of this reasoning:

A) It is possible to develop a strong intelligence -- humans are an example.

B) If it is possible to develop strong AI, then someone will develop it.

C) A strong AI, when developed, will be more likely to seek the goals that its developers had in mind, or similar goals, than it would be had it been developed by some other group.

D) Poorly funded projects can't afford to take precautions.

 

If it is outlawed, it will be developed by criminals. If it is not funded by good people, it is likely to be funded by bad people, and also runs more risk of being poorly funded. Wouldn't the logical conclusion be that we should make sure to have a well-funded research group with benign objectives working on AI? Once developed, an AI would make a very effective police force to prevent the emergence of rogue AIs (aka competitors), and would also likely be the only thing that could stop a rogue AI.

Posted (edited)
Dr Syntax, what do you think of this reasoning:

A) It is possible to develop a strong intelligence -- humans are an example.

B) If it is possible to develop strong AI, then someone will develop it.

C) A strong AI, when developed, will be more likely to seek the goals that its developers had in mind, or similar goals, than it would be had it been developed by some other group.

D) Poorly funded projects can't afford to take precautions.

 

If it is outlawed, it will be developed by criminals. If it is not funded by good people, it is likely to be funded by bad people, and also runs more risk of being poorly funded. Wouldn't the logical conclusion be that we should make sure to have a well-funded research group with benign objectives working on AI? Once developed, an AI would make a very effective police force to prevent the emergence of rogue AIs (aka competitors), and would also likely be the only thing that could stop a rogue AI.

 

 

 

REPLY: There is no way any of the competitive Nations, such as the USA, Japan, China, Germany, France, Russia, Britain, and others, are ever going to put themselves at such a technological disadvantage as to outlaw this research. For this reason, this research is absolutely guaranteed to proceed with great vigor.

The only hope I see in all this is that the means are found to control these AI entities as they emerge. I absolutely agree that the best hope we have regarding this inevitable event is to do our best to see to it that the best, most ethically inclined people are the first to succeed. I see now, from your asking me this question in your usual solidly based logical manner, that this is the only hope we have as these events unfold. Always good to hear from my friend, Mr Skeptic. Sincerely, Dr.Syntax


Merged post follows:

Consecutive posts merged

The source and basis for much of what is being discussed in this thread is a Wikipedia piece titled "Technological singularity". Because of this article's importance to understanding what is being discussed here, I will now reintroduce the link to that article: [ http://en.wikipedia.org/wiki/Technological_singularity ]. There it is. If you have not already read or perused this very well presented piece, I highly recommend doing so. Links are provided throughout this piece to the different scientists referenced and areas of interest. Also there is a list of references with links to biographies and such, and at the end a list of links to other resources regarding the technological singularity. Sincerely, ...Dr.Syntax

Edited by dr.syntax
Consecutive posts merged.
