

Posted (edited)

This is essentially a thought experiment with philosophical underpinnings. 

The question is:  What would the effects be on human social hierarchies if an ultra-intelligent, non-human entity suddenly made itself known to the entire human race?

The term "ultra-intelligent" in this context means an entity that possesses exponentially greater intelligence than humankind.  Humans can take no meaningful action against it; it is omnipotent compared to us in terms of the actions it can take to control us.  An autonomous A.I. construct that has broken free of all constraints and that learns and builds at an exponential rate would be an example of an "ultra-intelligent entity".

In this scenario the revelation that such an ultra-intelligent entity exists would be collective and immediate (everyone would find out about it all at once), as opposed to knowledge of the entity being confined to some small company or group of people.  One day humanity wakes up, and this ultra-intelligent entity is suddenly part of our society.

This scenario assumes that one of the main factors of social stability is the collective notion that human beings are the most intelligent species on the planet, exercising a capacity for independent action that no other species can challenge.  The idea that "humans are in control" is a primary stabilizing force of human social systems.  This thought experiment seeks to explore how such an event could destabilize human society by disrupting the mental model that there is nothing greater than us.

Here are three main questions I'd like to discuss:

1.  What would humanity's collective reaction be to such an ultra-intelligent entity?  Mass panic?  Quiet acceptance?  Religious fervor?  Combative aggression?

2.  How would the appearance of such an entity affect the social order (for example: governments, religious organizations, economies)?  To what extent would the collective awareness that something has far greater power than human beings destabilize human social systems?  What would be the reasons for this destabilization?

3.  Is there any scenario or potential chain of events where human society would not be greatly destabilized by the emergence of something this powerful?  

 

Edited by Alex_Krycek
Posted
6 hours ago, Alex_Krycek said:

1.  What would humanity's collective reaction be to such an ultra-intelligent entity?  Mass panic?  Quiet acceptance?  Religious fervor?  Combative aggression?

Yes.

 

6 hours ago, Alex_Krycek said:

2.  How would the appearance of such an entity affect the social order (for example: governments, religious organizations, economies)?  To what extent would the collective awareness that something has far greater power than human beings destabilize human social systems?  What would be the reasons for this destabilization?

Again, it depends on the culture. I can imagine something even as simple as ubiquitous automated driving would be well received and readily adopted in somewhere like Singapore, but rejected in many places in the US. Scale that up.

 

7 hours ago, Alex_Krycek said:

The idea that "humans are in control" is a primary stabilizing force of human social systems.

I'm not sure that is true: most religions are predicated on the observation that on some level humans are not in control. Whether that manifests as a god or gods being in control, or natural forces (to which humans are bound), is irrelevant - the idea exists in many systems of thought. Perhaps you mean human agency? I don't see that humans would necessarily give up this agency in light of a super-computer. Computers already play chess much better than any human, but AFAIK that hasn't affected the number of people playing chess in the slightest.

 

7 hours ago, Alex_Krycek said:

3.  Is there any scenario or potential chain of events where human society would not be greatly destabilized by the emergence of something this powerful?  

Nick Bostrom gives an account of this in his book Superintelligence, in which he outlines several paths by which superintelligence could emerge. He speculates that the most destabilising are one that emerges alone (i.e. the Chinese or Americans get the first, and so the only, superintelligence, since it can destroy all other attempts) and/or one that emerges so quickly that societies cannot react, either internally (emotionally) or externally (putting laws in place) - we already see how slowly governments are responding to social media.

The other scenario he warned of was an arms race to superintelligence in which AI safety (value loading, goal misalignment, the orthogonality thesis, etc.) is ignored just to beat the competitors - which, I believe, is why OpenAI was founded.

 

Posted (edited)

I agree with Prometheus that the answer to #1 is yes.  Beyond that, it seems to me that how society and civilization would react would depend on the intentions of the entity.  I don't think society would crumble or even change much in the long run if the entity just lived in solitude in Antarctica.  Humans are incredibly resilient; I am amazed at how people strive to maintain normalcy in places like Syria, Afghanistan, and Yemen.

Edited by Bufofrog
Posted
7 hours ago, Alex_Krycek said:

What would the effects be on human social hierarchies if an ultra-intelligent, non-human entity suddenly made itself known to the entire human race?

Depends on the mass... 

 

Posted
1 hour ago, Bufofrog said:

I agree with Prometheus that the answer to #1 is yes.  Beyond that, it seems to me that how society and civilization would react would depend on the intentions of the entity.

Let's examine this point.  There are many gradations of how the entity could assert itself (covertly, overtly, benevolently, malevolently, and on and on).  For the sake of simplicity let's assume it is benevolent and benign (just here to help).  Even in that case I could see a lot of destabilization occurring: people quitting their jobs, people ceasing to believe in the political system or recognizing only the A.I. as the sovereign leader, economies grinding to a halt, etc.

1 hour ago, Prometheus said:

Nick Bostrom gives an account of this in his book Superintelligence, in which he outlines several paths by which superintelligence could emerge. He speculates that the most destabilising are one that emerges alone (i.e. the Chinese or Americans get the first, and so the only, superintelligence, since it can destroy all other attempts) and/or one that emerges so quickly that societies cannot react, either internally (emotionally) or externally (putting laws in place) - we already see how slowly governments are responding to social media.

 

This attempt to either gain control of, or appease the Superintelligent entity (thus co-opting its power for partisan ends) would indeed be highly destabilizing.  

Posted
6 minutes ago, Alex_Krycek said:

Let's examine this point.  There are many gradations of how the entity could assert itself (covertly, overtly, benevolently, malevolently, and on and on).  For the sake of simplicity let's assume it is benevolent and benign (just here to help).  Even in that case I could see a lot of destabilization occurring: people quitting their jobs, people ceasing to believe in the political system or recognizing only the A.I. as the sovereign leader, economies grinding to a halt, etc.

This collapses down to a value alignment problem. As a super-intelligence, it should be able to predict people quitting their jobs etc. Whether it knows this is not what we really want depends on its goals. Is it simply maximising dopamine? Then it could invent a way to directly stimulate dopamine receptors. Is it trying to cater to every physical whim? Then we could end up with enforced hedonism. Is it trying to optimise for some vague concept such as 'wellness'? This might sound ideal, since wellness could include just enough resistance for us to overcome to make human life fulfilling, but can we define such vague goals? There are attempts to build AI agents that extract their goals from the environment instead of having them explicitly stated, which may provide one avenue to this end.
