
Posted
21 hours ago, TheVat said:

He's comparing the pursuit of GAI to Captain Ahab's obsessive pursuit of Moby Dick, the great white whale.  The whale ends up eating him.  Not sure that analogy works well, but I sorta can see it.  

It's the doom-and-gloom version of a future, on an otherwise successful voyage.

My point being, Ahab's obsession meant he cut corners and failed to prepare for the discovery.

We can dismiss GAI right now because it seems so unlikely, while we cut corners on the voyage of AI... 

Just wondering, but how can humanity be aware of post-humanity? It'd be fantastic if we could...

Posted
33 minutes ago, dimreepr said:

how can humanity be aware of post-humanity?

Like this: the Sun will become a red giant, then it will blow most of its material away, then it will become a white dwarf and will stay in that form for many billions of years.

Posted
1 hour ago, Genady said:

Like this: the Sun will become a red giant, then it will blow most of its material away, then it will become a white dwarf and will stay in that form for many billions of years.

Well, if you're sure, I'll tell my family not to worry...

Posted
12 minutes ago, dimreepr said:

Well, if you're sure, I'll tell my family not to worry...

As sure as the Sun will set tonight.

Posted
On 3/15/2023 at 12:27 PM, dimreepr said:

Just wondering, but how can humanity be aware of post-humanity? It'd be fantastic if we could

It's not about awareness. It's about the possibility of human extinction, and whether any remnants of our existence survive and maybe even continue to prosper.

Let's say, for example, we are successful in producing some form of G.A.I that is aware, or at least has the capability of consideration and replication/reproduction...

If we are never successful in leaving our planet and colonising others, then we are inevitably going to become extinct. Unless we find a way of saving our planet from the Sun's expansion, or of preventing large meteorite strikes, I see no alternative other than to move from Earth. All this assuming in the meantime we don't destroy ourselves, outgrow our resources or die from infection...

Posted
21 hours ago, Intoscience said:

It's not about awareness. It's about the possibility of human extinction, and whether any remnants of our existence survive and maybe even continue to prosper.

But if any of us survive and continue to prosper, then it's not post-humanity, it's post-modern.

What I think you mean is, how do we navigate the minefields of the future?

21 hours ago, Intoscience said:

Let's say, for example, we are successful in producing some form of G.A.I that is aware, or at least has the capability of consideration and replication/reproduction...

Why is its awareness important?

If we do manage to develop GAI, it's not its sentience that we should fear, it's the human bias we've unwittingly programmed into it.

21 hours ago, Intoscience said:

If we are never successful in leaving our planet and colonising others, then we are inevitably going to become extinct. Unless we find a way of saving our planet from the Sun's expansion, or of preventing large meteorite strikes, I see no alternative other than to move from Earth. All this assuming in the meantime we don't destroy ourselves, outgrow our resources or die from infection...

Everything dies!!! "The Restaurant at the End of the Universe" is just one of seven impossible things to do before breakfast... 😉

On 3/15/2023 at 2:35 PM, Genady said:

As sure as the Sun will set tonight.

Well, as long as you're sure you'll see it tonight... 

Posted (edited)
On 3/17/2023 at 12:56 PM, dimreepr said:

Why is its awareness important?

If we do manage to develop GAI, it's not its sentience that we should fear, it's the human bias we've unwittingly programmed into it

Self-awareness is important, as is the ability to consider values. If G.A.I becomes self-learning, then whatever we initially program into it will be futile. 

A self-aware, self-learning entity that is potentially smarter and more powerful than we are is a very concerning threat to humanity. This is what I meant by post-humanity. 

If a smarter, more advanced entity doesn't value human life, then it will do one of three things: 1. It will unwittingly eradicate human existence. 2. It will wittingly eradicate human existence. 3. It will just ignore human existence. 

If it does value human life, then it may do one of two things: 1. It will ignore human existence. 2. It will aid in the survival of human existence.

Edited by Intoscience
spelling
Posted
3 hours ago, Intoscience said:

Self-awareness is important, as is the ability to consider values.

The two aren't mutually exclusive; for instance, there is as much chance of self-repairing concrete becoming sentient as AI or GAI.

3 hours ago, Intoscience said:

If G.A.I becomes self-learning, then whatever we initially program into it will be futile. 

We already have self-learning AI, Deep Blue et al., in which the initial program is essential for it to win a game of chess, which is the objective WE gave it.

3 hours ago, Intoscience said:

A self-aware, self-learning entity that is potentially smarter and more powerful than we are is a very concerning threat to humanity.

As I've previously mentioned, AI doesn't think like us; it's like thinking an IQ test is an accurate metric of smart.

The threat to humanity is from humans doing a Captain Ahab, not from the whale doing a Moby.

Posted
17 hours ago, dimreepr said:

We already have self-learning AI, Deep Blue et al., in which the initial program is essential for it to win a game of chess, which is the objective WE gave it

This is where self-awareness may play a role. 

17 hours ago, dimreepr said:

As I've previously mentioned, AI doesn't think like us; it's like thinking an IQ test is an accurate metric of smart

It doesn't have to, to still pose a threat.

We don't think like ants, but we may pose a threat to them, wittingly or unwittingly.

Posted
5 hours ago, Intoscience said:
23 hours ago, dimreepr said:

We already have self-learning AI, Deep Blue et al., in which the initial program is essential for it to win a game of chess, which is the objective WE gave it

This is where self-awareness may play a role. 

How? 

Sentience isn't going to suddenly make it think like us; for instance, what would that consciousness even look like, and what is its motivation to do anything? Another topic, I think.

5 hours ago, Intoscience said:

It doesn't have to, to still pose a threat.

We don't think like ants, but we may pose a threat to them, wittingly or unwittingly.

What's its motivation? 

We don't like ants because they pose a threat to our comfort; if they're not stinging us they're pinching our picnics.

How would we piss off a GAI enough for it to not want us around? 

Posted (edited)
33 minutes ago, dimreepr said:

Sentience isn't going to suddenly make it think like us; for instance, what would that consciousness even look like, and what is its motivation to do anything? Another topic, I think

Why are you claiming that I ever stated "thinking like us"? You are misunderstanding me. Why would they/it have to have motivation in any way that resembles ours? The only one I can think of is the same for all "life": survival.

34 minutes ago, dimreepr said:

What's its motivation?

Survival

34 minutes ago, dimreepr said:

We don't like ants because they pose a threat to our comfort; if they're not stinging us they're pinching our picnics.

How would we piss off a GAI enough for it to not want us around?

It's not even about liking or disliking ants; you missed the point of the analogy. 

It doesn't have to be about whether we pissed them off or not. It's whether they value or notice us enough to care whether we exist or not.

Edited by Intoscience
spelling
Posted
23 hours ago, Intoscience said:

Why are you claiming that I ever stated "thinking like us"? You are misunderstanding me. Why would they/it have to have motivation in any way that resembles ours? The only one I can think of is the same for all "life": survival.

I'm trying to understand what part you think the sentience of a machine has to play in the danger GAI poses to humanity.

It's within our power to control AI by designing its objectives correctly in the first place. For instance, say we design a machine to find a way to eliminate the acidification of the oceans, a noble cause designed to save humanity from itself. We set it going, but the machine's solution is a chemical reaction that uses all the oxygen in the atmosphere. 

We scramble to switch it off before it starts the reaction; it becomes a race for survival.

The solution is, we design the algorithm to ask us for permission before starting the reaction; yes, it's simplistic.
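That permission gate can be sketched in a few lines of Python (a purely hypothetical illustration; the function names are made up, not any real system's API):

```python
def gated_run(objective, plan, execute, approve):
    """Human-in-the-loop gate: the machine may search for a solution
    freely, but nothing is executed without explicit human consent."""
    solution = plan(objective)        # the AI proposes a solution
    if approve(objective, solution):  # a human reviews the proposal
        return execute(solution)      # only runs once approved
    return None                       # vetoed: the plan is discarded

# A human who always says "no" keeps the machine inert.
result = gated_run(
    "de-acidify the oceans",
    plan=lambda obj: "some chemical reaction",
    execute=lambda sol: f"executing {sol}",
    approve=lambda obj, sol: False,   # the human veto
)
```

Simplistic, as said: the whole safety property rests on `approve` staying outside the machine's control.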

The question we have to ask ourselves is, why do we want a fully autonomous machine?

On 3/21/2023 at 12:54 PM, Intoscience said:

It's not even about liking or disliking ants; you missed the point of the analogy. 

It doesn't have to be about whether we pissed them off or not. It's whether they value or notice us enough to care whether we exist or not.

Self-repairing concrete is fine, because the threat of a sentient Forth Bridge only leads to Scotland...

Posted

When I read chats about the threat of AI, and it turns to actual rogue machines bent on harm, the phrase "air gap" pops into my mind.  

  • 2 weeks later...
Posted (edited)

I'm now wondering: there is a call from a number of leading A.I figures for a 6-month pause on A.I development.

If there is no threat, then why the concern?

Films like Terminator sensationalise such threats, making them dramatic etc.; this creates a stigma and invites ridicule.

 

On 3/22/2023 at 1:42 PM, dimreepr said:

The question we have to ask ourselves is, why do we want a fully autonomous machine?

I'm trying to understand what part you think the sentience of a machine has to play in the danger GAI poses to humanity.

It's within our power to control AI by designing its objectives correctly in the first place. For instance, say we design a machine to find a way to eliminate the acidification of the oceans, a noble cause designed to save humanity from itself. We set it going, but the machine's solution is a chemical reaction that uses all the oxygen in the atmosphere. 

 

The potential threat comes when the A.I has the ability of self-awareness along with self-learning. At this juncture we may have no control, and no idea of its thoughts, values, agendas... 

 

Edited by Intoscience
spelling
Posted
1 hour ago, Intoscience said:

I'm now wondering: there is a call from a number of leading A.I figures for a 6-month pause on A.I development.

If there is no threat, then why the concern?

Have you got a link to that story? I can't comment on the concern without context.

1 hour ago, Intoscience said:

The potential threat comes when the A.I has the ability of self-awareness along with self-learning.

How can you tell that your computer/phone isn't already sentient?  

Posted
11 minutes ago, dimreepr said:

Have you got a link to that story? I can't comment on the concern without context.

How can you tell that your computer/phone isn't already sentient?  

https://www.npr.org/2023/03/29/1166896809/tech-leaders-urge-a-pause-in-the-out-of-control-artificial-intelligence-race#:~:text="We call on all AI,in and institute a moratorium."

It appears there is some misinformation, some scepticism and probably some over-dramatisation. So maybe not a big deal, or even just a PR stunt.

I often wonder about my phone; I certainly get advertisements coming up in apps whose context matches subjects I've recently discussed. 🤨🙃 

Posted
56 minutes ago, Intoscience said:

https://www.npr.org/2023/03/29/1166896809/tech-leaders-urge-a-pause-in-the-out-of-control-artificial-intelligence-race#:~:text="We call on all AI,in and institute a moratorium."

It appears there is some misinformation, some scepticism and probably some over-dramatisation. So maybe not a big deal, or even just a PR stunt.

I often wonder about my phone; I certainly get advertisements coming up in apps whose context matches subjects I've recently discussed. 🤨🙃 

My point is, a computer is a machine designed to achieve the objectives we give it, and AI is a way to do it better; if you plug in the wrong programme you'll get the wrong answers, and if you plug it into a machine 10,000 times more powerful, all you'll get is the wrong answers, faster... 😉

Which also covers sentience, because for every mad scientist villain, there's a Batman... 🙏🤞 

Posted
18 minutes ago, dimreepr said:

My point is, a computer is a machine designed to achieve the objectives we give it, and AI is a way to do it better; if you plug in the wrong programme you'll get the wrong answers, and if you plug it into a machine 10,000 times more powerful, all you'll get is the wrong answers, faster... 😉

Which also covers sentience, because for every mad scientist villain, there's a Batman... 🙏🤞 

It's a tool, and it is up to people what to do with it. This tool can make bad things easier, faster and cheaper to do. Imagine scientific journal boards flooded with plagiarised and fake but well-composed manuscripts; SFn flooded with fake science news, machine-generated comments, political propaganda etc.; Google search returning false results from non-existent sources... These are just a few innocent examples.

Posted
1 minute ago, Genady said:

It's a tool, and it is up to people what to do with it. This tool can make bad things easier, faster and cheaper to do. Imagine scientific journal boards flooded with plagiarised and fake but well-composed manuscripts; SFn flooded with fake science news, machine-generated comments, political propaganda etc.; Google search returning false results from non-existent sources... These are just a few innocent examples.

I'm not sure what to make of this, are you for or against my position on this question?

Posted
8 minutes ago, Genady said:

It's a tool, and it is up to people what to do with it. This tool can make bad things easier, faster and cheaper to do. Imagine scientific journal boards flooded with plagiarised and fake but well-composed manuscripts; SFn flooded with fake science news, machine-generated comments, political propaganda etc.; Google search returning false results from non-existent sources... These are just a few innocent examples.

Yes, so the threat may not necessarily be the genocidal Armageddon sensationalised by sci-fi movies. The threat to humans could be far more subtle, but potentially just as devastating in the long run: the potential for multiple failures of the networks that hold key positions in our social structures, and the potential to bug the system enough to get people, and even nations, to turn against each other. 

10 minutes ago, dimreepr said:

I'm not sure what to make of this, are you for or against my position on this question?

I don't think it's a matter of for or against; it's more a matter of agreement on the level of the threat, how real it is, and when.

Posted
7 minutes ago, dimreepr said:

I'm not sure what to make of this, are you for or against my position on this question?

My point is that it is not a question of a wrong program but of a wrong use of it.

Posted
43 minutes ago, dimreepr said:

Which also covers sentience, because for every mad scientist villain, there's a Batman... 🙏🤞

The only trouble is, in the movies Batman wins most of the time; reality may be a different matter. 

Posted
1 minute ago, Genady said:

My point is that it is not a question of a wrong program but of a wrong use of it.

I guess that depends on the spelling and meaning of programme. 
