
Posted
Just now, dimreepr said:

And we're back to the question of, what is a paradox?

And how is a paradox possible? 

No, we are not. It has nothing to do with the topic of what context AI has.

Posted
4 minutes ago, Genady said:

I guess you are not interested anymore in figuring out how AI works. This is OK.

I'm uncertain about why.

1 minute ago, Genady said:

No, we are not. It has nothing to do with the topic of what context AI has.

No, it's about how context limits understanding.

Posted
34 minutes ago, dimreepr said:
38 minutes ago, Genady said:

I guess you are not interested anymore in figuring out how AI works. This is OK.

I'm uncertain about why.

Because you do not comment on how AI works anymore.

 

34 minutes ago, dimreepr said:
35 minutes ago, Genady said:

No, we are not. It has nothing to do with the topic of what context AI has.

No, it's about how context limits understanding.

Context limits understanding. What is there to discuss?

Posted
20 hours ago, Genady said:

Because you do not comment on how AI works anymore.

Because I fundamentally understand how AI works, but I wouldn't have a clue about how to programme it; much like a physics professor can tell you exactly how a car works, but couldn't possibly build or repair one.

20 hours ago, Genady said:

Context limits understanding. What is there to discuss?

Perhaps the fact that the correct context creates understanding, which is demonstrably true; what sort of context do you use?

Just to preempt a possible critique, no, I'm not comparing myself to a physics professor; I just think I understand the subject, unless you can provide a convincing argument of my ignorance. 

Posted
20 minutes ago, dimreepr said:

Because I fundamentally understand how AI works, but I wouldn't have a clue about how to programme it; much like a physics professor can tell you exactly how a car works, but couldn't possibly build or repair one.

Perhaps the fact that the correct context creates understanding, which is demonstrably true; what sort of context do you use?

Just to preempt a possible critique, no, I'm not comparing myself to a physics professor; I just think I understand the subject, unless you can provide a convincing argument of my ignorance. 

My context is my life experience.

AI's context is the human library of texts and of visual and auditory images.

Posted (edited)
38 minutes ago, Genady said:

My context is my life experience.

AI's context is the human library of texts and of visual and auditory images.

Which would understand a human better? There is no programme for trying to understand how to be a human, but there are plenty about how to exploit humans (mostly by capitalists); and that just about encapsulates the OP and the dangers of AI; it has nothing to do with any sort of intent from a computer/lawnmower.

Or anthill... That's the alien part of the OP 😉

Edited by dimreepr
Posted
1 hour ago, dimreepr said:

I can guess.

This style of conversation delivery doesn't promote continuation; it sucks the oxygen out of it, killing all participants. It is the thermobaric bomb of conversation killers.

Posted
21 hours ago, StringJunky said:

This style of conversation delivery doesn't promote continuation; it sucks the oxygen out of it, killing all participants. It is the thermobaric bomb of conversation killers.

Yet it's relevant to the topic; for instance, when I'm trying to understand my dog, I tend towards anthropomorphism, for some sort of empathic connection to my dog; the context is what I have to guess.

A computer's guess is a pseudo-guess, because it's impossible to calculate a truly random number, or to empathise enough (or at all) for any sort of contextual meaning.

22 hours ago, TheVat said:

Well, he IS the Reaper, after all.  

😀

Don't worry, we can respawn here... 😉

Can anyone imagine what computerpomorphism might look like?

22 hours ago, StringJunky said:

This style of conversation delivery doesn't promote continuation

Is this better?

38 minutes ago, dimreepr said:

Can anyone imagine what computerpomorphism might look like?

🤔🧐

Posted
On 4/15/2023 at 12:46 PM, dimreepr said:

Yet it's relevant to the topic; for instance, when I'm trying to understand my dog, I tend towards anthropomorphism, for some sort of empathic connection to my dog; the context is what I have to guess.

A computer's guess is a pseudo-guess, because it's impossible to calculate a truly random number, or to empathise enough (or at all) for any sort of contextual meaning.

 

On 4/15/2023 at 12:46 PM, dimreepr said:

Can anyone imagine what computerpomorphism might look like?

I guess this is the problem with trying to understand the possibility of a human threat from A.I (if there is one). We can programme the A.I to resemble human experience, but once it becomes self-aware and capable of self-programming, then things might change way beyond what we could imagine.

Posted
19 minutes ago, Intoscience said:

 

I guess this is the problem with trying to understand the possibility of a human threat from A.I (if there is one). We can programme the A.I to resemble human experience, but once it becomes self-aware and capable of self-programming, then things might change way beyond what we could imagine.

Not if we don't try to turn them off; a super-intelligence would understand the value of a status quo that contains no threat.

Posted
18 hours ago, dimreepr said:

Not if we don't try to turn them off; a super-intelligence would understand the value of a status quo that contains no threat

That is just an assumption based on human thinking. 

You can build and programme A.I to resemble and have human morals, values and maybe understanding. However, when it evolves, which it will if it can make copies of itself and programme those copies (Darwinian evolution in hyperdrive), then there is no guarantee that what follows will resemble human-like thinking in any way whatsoever. At that point it could become comparable to us being the ants and the A.I being us. The ants pose no threat, but at the same time they have no comprehension of how we think, and we have no concern about their habitat when we decide to build a house.

There is concern, warranted or not, among A.I developers regarding the unknown once the singularity is upon us.

Interestingly, one such comment I heard was: "Advanced alien intelligence? Well, we will meet it soon, but it won't be from another planet. It will go on to colonise the galaxy."

Posted
4 hours ago, Intoscience said:

when it evolves, which it will if it can make copies of itself and programme those copies (Darwinian evolution in hyperdrive)

I understand that making copies and programming these copies are analogous to "descent with modifications". These are necessary but not sufficient components of Darwinian evolution. The other crucial component is selection.  What selection will drive this evolution?

Posted
15 minutes ago, Genady said:

I understand that making copies and programming these copies are analogous to "descent with modifications". These are necessary but not sufficient components of Darwinian evolution. The other crucial component is selection.  What selection will drive this evolution?

I'm not sure that the natural selection mechanism in Darwinian evolution is fully understood. However, let's assume that it's basically a process of trial and error, environmental influence and general mutations. With A.I this process could be simplified and work in a far more efficient manner, where the mechanism would focus on requirements, adaptations to influences and survival, with minimum loss and fast, effective solutions. Darwinian evolution seems to follow this pattern, but rather haphazardly and over a long period of time, which appears less efficient.

Posted
9 minutes ago, Intoscience said:

I'm not sure that the natural selection mechanism in Darwinian evolution is fully understood. However, let's assume that it's basically a process of trial and error, environmental influence and general mutations. With A.I this process could be simplified and work in a far more efficient manner, where the mechanism would focus on requirements, adaptations to influences and survival, with minimum loss and fast, effective solutions. Darwinian evolution seems to follow this pattern, but rather haphazardly and over a long period of time, which appears less efficient.

Perhaps not fully understood, but understood very well. It is actually very efficient. It might seem inefficient if one considers it as going toward some goal, but it does not go toward a goal. It is extremely efficient at every step of the process. And it cannot work without selection. If the "evolution" of AI is not Darwinian, what is it?

Posted
3 hours ago, Genady said:

I understand that making copies and programming these copies are analogous to "descent with modifications". These are necessary but not sufficient components of Darwinian evolution. The other crucial component is selection.  What selection will drive this evolution?

Iteration millions of millions of times per hour... over and over and over again. The only limitation here is the processing power and speed of the underlying hardware.

The system will be given a goal. It will then begin experimenting, and failing, as it tries to achieve it. With each failure and success, the system's model improves.

In short, the AI will drive its own selection (in the context of the goal it has been given).

 

Here's the basic idea presented in an extremely simple way for our simple human minds to grasp:

 

Posted
2 hours ago, iNow said:

in the context of the goal it has been given

Exactly. This is the difference between this variation of ML, on the one hand, and Darwinian evolution and the evolution discussed by the OP, on the other.

Posted (edited)
18 hours ago, Genady said:

Perhaps not fully understood, but understood very well. It is actually very efficient. It might seem inefficient if one considers it as going toward some goal, but it does not go toward a goal. It is extremely efficient at every step of the process. And it cannot work without selection. If the "evolution" of AI is not Darwinian, what is it?

We may assume that there is no specific, detailed goal laid out. But we accept that, in general, Darwinian evolution is the mechanism by which organisms adapt to environmental changes in order to survive. I would argue that, compared to a mechanism that can adapt very quickly, as shown in iNow's example, organic Darwinian evolution is terribly inefficient, especially when it comes to time.

Humans followed the Darwinian evolutionary process, but we are now entering an era whereby we can accelerate the process through our own genetic modification.

An A.I system that is left to its own devices may choose to follow an evolutionary path that we cannot even begin to imagine. Provided it has the resources and capabilities to do this, it will have a mechanism similar in some respects to the biological Darwinian one, but much, much faster and more efficient.

Just to be clear, I used the Darwinian mechanism as a loose analogy, not to be taken too literally.

"Darwinian evolution in hyperdrive" 

Edited by Intoscience
spelling
Posted
3 hours ago, Intoscience said:

But we accept that, in general, Darwinian evolution is the mechanism by which organisms adapt to environmental changes in order to survive.

Adaptation is not a goal of Darwinian evolution, but one of its effects, among others such as diversification and extinction. Environmental changes are only one of the factors affecting evolution; evolution happens without environmental changes as well. IMO, it is incorrect to measure the efficiency of evolution by the time it takes to adapt to environmental changes.

 

3 hours ago, Intoscience said:

we are now entering an era whereby we can accelerate the process through our own genetic modification.

In fact, there is a strong argument that this ability would stagnate evolution rather than accelerate it.

 

3 hours ago, Intoscience said:

I used the Darwinian mechanism as a loose analogy, not to be taken too literally.

This is fine. Also, it appears to be going in an OT direction. Let's return to your topic, then. :)

Posted (edited)
On 4/19/2023 at 7:36 AM, Intoscience said:

That is just an assumption based on human thinking. 

You can build and programme A.I to resemble and have human morals, values and maybe understanding. However, when it evolves, which it will if it can make copies of itself and programme those copies (Darwinian evolution in hyperdrive), then there is no guarantee that what follows will resemble human-like thinking in any way whatsoever. At that point it could become comparable to us being the ants and the A.I being us. The ants pose no threat, but at the same time they have no comprehension of how we think, and we have no concern about their habitat when we decide to build a house.

There is concern, warranted or not, among A.I developers regarding the unknown once the singularity is upon us.

Interestingly, one such comment I heard was: "Advanced alien intelligence? Well, we will meet it soon, but it won't be from another planet. It will go on to colonise the galaxy."

No, it's an assumption that current human knowledge is a more probable guide than science fiction.

There is no mechanism that could potentially lead to a sentient lawn mower.

All evolution could achieve, in this context, is a more efficient lawn mower; but it can't enjoy grass like I do... 😉

Edited by dimreepr
Posted
3 hours ago, Genady said:

This is fine. Also, it appears to be going in an OT direction. Let's return to your topic, then. :)

Agreed thanks.

1 hour ago, dimreepr said:

No, it's an assumption that current human knowledge is a more probable guide than science fiction

Science fiction is based on human assumption and/or fantasy. We cannot predict with any certainty if or how A.I may evolve once it has the ability to self-replicate. Any assumption we make will be based on human knowledge and experience. My point being that making predictions about a system that may evolve beyond our understanding and/or imagination is futile. However, preparing for (or attempting to prevent) a threat to humans should still be seriously considered. We consider ourselves at the top of the food chain, per se, since we consider ourselves the most advanced intelligence on this planet.

What happens when we are not?   

Posted
42 minutes ago, Intoscience said:

We consider ourselves at the top of the food chain, per se, since we consider ourselves the most advanced intelligence on this planet.

It is quite possible that there were, on this planet, more intelligent and less aggressive Homo species, but they went extinct, perhaps with our help.

 

44 minutes ago, Intoscience said:

What happens when we are not?

Then we are not.
