
Posted
On 4/22/2023 at 12:09 AM, Endy0816 said:

China has been using some as virtual 24/7 'news' anchors. Don't think they're hooked up to GPT or similar yet, though.

Maybe not fully connected yet to their news, but work being done by the Beijing Academy of Artificial Intelligence is already rivaling ChatGPT and Google (at least according to a well-positioned president at Microsoft).

Posted (edited)
21 minutes ago, dimreepr said:

We've been going round in circles for nine pages now and I'm running out of ways to say the same thing: a computer doesn't think, it compares, much like an automated loom running a perforated-card program; there's no reason to think that's alive/sentient/conscious in any sense, even though the loom is much better at the job than a human loom operator.

Essentially your argument is: what if that rock suddenly wakes up.

 

Yeah, and again you are imposing limited human understanding and constraints on a completely different complex intelligent system, one that is evolving (currently by human design and under human control). A system that, as it develops, may gain consciousness that may or may not resemble our own.

A rock is an inanimate object with no cognitive capability, so there is no comparison. 

Until we gain an understanding of when and how consciousness and/or sentience is achieved in biological systems, how can we even begin to wonder how, if, or when it will come into being in A.I systems?

The only thing we have is that there appears to be a correlation between consciousness/sentience and complexity/intelligence. If any of these are major factors, then it would be plausible to assume that as A.I evolves it could develop such traits, perhaps even in a way that is unrecognisable to human or even biological experience.

Edited by Intoscience
spelling
Posted
4 hours ago, Ken Fabian said:

but will an AI see copies and upgrades of itself as itself or as rivals?

This touches on a parallel theme that interests me. What happens when multiple AIs that have trained themselves in silos begin interacting with and learning from one another? There’s a lot of meat on this bone to chew.

Posted
2 minutes ago, Intoscience said:

A rock is an inanimate object with no cognitive capability, so there is no comparison.  

So is a loom, but here we are discussing its descendants.

Posted
4 hours ago, Ken Fabian said:

I just don't buy the media's fictional version of the unstoppable super-hacker, where any system can be broken into and taken over

The bigger risk IMO is how much better these systems will be at exploiting the weakest link in any system's security chain: us humans.

A simple click on a link carrying malicious code to infect the user's computer has just become orders of magnitude more likely. It'll be easier to fool us than it already is on the psychological side of hacking.

Posted
Just now, dimreepr said:

So is a loom, but here we are discussing its descendants.

The difference being that rocks don't evolve in complexity or develop any form of cognitive capability. Rocks (though of many varieties) have remained rocks since the earth solidified 4.5 billion years ago.

Posted
2 minutes ago, Intoscience said:

The difference being that rocks don't evolve in complexity or develop any form of cognitive capability. Rocks (though of many varieties) have remained rocks since the earth solidified 4.5 billion years ago.

Way to miss the point...

4 minutes ago, iNow said:

The bigger risk IMO is how much better these systems will be at exploiting the weakest link in any system's security chain: us humans.

A simple click on a link carrying malicious code to infect the user's computer has just become orders of magnitude more likely. It'll be easier to fool us than it already is on the psychological side of hacking.

But it's not the computer that decides to infect us, and while that may emerge from the complexity of human intentions, the computer remains a tool, one that's just as happy to rust in an unused toolbox.

Posted
18 minutes ago, dimreepr said:

Way to miss the point...

Since 99.9% of your posts on this entire forum are short cryptic responses, is it any wonder?

19 minutes ago, dimreepr said:

But it's not the computer that decides to infect us, and while that may emerge from the complexity of human intentions, the computer remains a tool, one that's just as happy to rust in an unused toolbox

For now yes, but... as I have pointed out over the last X pages 

Posted (edited)
12 hours ago, Markus Hanke said:

still more disturbing issue arises if you turn this question on its head - will torturing humans be considered unethical by a (sufficiently advanced) AI?

Hard to imagine this will not be a tool in the locker of vicious criminals or military interrogators, who will turn their victims over to a real or supposed AI machine as a further "turn of the screw", or just as a sadistic distraction.

Certainly in the movies, anyway.

 

Edited by geordief
Posted
15 hours ago, Markus Hanke said:

given the pace at which the field seems to be developing now.

Yes, it is developing at a fast pace, but IMHO the direction of this development is tangential to the evolution we are discussing here. It goes in the direction of efficient emulation of human activities, but there is nothing yet to suggest an emergence of independent intelligence.

Posted
6 hours ago, Genady said:

Yes, it is developing at a fast pace, but IMHO the direction of this development is tangential to the evolution we are discussing here. It goes in the direction of efficient emulation of human activities, but there is nothing yet to suggest an emergence of independent intelligence

I agree, and if the scaremongering were coming from activists or fanatical enthusiasts, I would not take the interest I do. However, there are many experts in the field of A.I who share this concern, which suggests to me that the technology is moving very quickly.

I think also, because we don't really understand the mechanism which brings consciousness and sentience into being within intelligent biological systems, there is that niggling worry that it may happen in A.I systems outside of our control. So the sensible thing is to try and mitigate the risk without stalling the development. A.I systems are, and continue to be, great assets and tools for humankind. Let's hope it stays that way.

Posted
22 hours ago, Intoscience said:

Since 99.9% of your posts on this entire forum are short cryptic responses, is it any wonder?

Just ask for clarification. For instance, you seem to be misunderstanding what cryptic means, because most of my posts are metaphorical/analogical; I'm not smart enough to provide cryptic clues.

5 hours ago, Intoscience said:

So the sensible thing is to try and mitigate the risk without stalling the development.

How can you mitigate the risk of an emergent quality? Especially one that you can never know if or when it has emerged...

Posted (edited)
21 minutes ago, dimreepr said:

Just ask for clarification. For instance, you seem to be misunderstanding what cryptic means, because most of my posts are metaphorical/analogical; I'm not smart enough to provide cryptic clues.

How can you mitigate the risk of an emergent quality? Especially one that you can never know if or when it has emerged...

Well, half of your replies seem cryptic to me! I guess I'm just not smart enough to interpret them; maybe I'll get ChatGPT to assist me. ;)

How indeed...? This is the crux: if, and when. How would you suggest consciousness and/or sentience emerges?

Edited by Intoscience
spelling
Posted
1 minute ago, Intoscience said:

How would you suggest consciousness and/or sentience emerges? 

History would suggest organic evolution; how could we emulate that process mechanically?

 

Posted (edited)
3 minutes ago, dimreepr said:

History would suggest organic evolution; how could we emulate that process mechanically?

 

So consciousness and sentience are intrinsic to organics, especially complex systems?

By what mechanism do these emerge, and when?

Edited by Intoscience
spelling
Posted
1 minute ago, Genady said:

But computers are electronic rather than mechanical devices.

Indeed, virtual emergence has not emerged... 🧐

8 minutes ago, Intoscience said:

So consciousness and sentience are intrinsic to organics, especially complex systems?

By what mechanism do these emerge, and when?

Our best guess is like a weather forecast: we can predict tomorrow's weather with a great deal of accuracy because we have a model based on yesterday; next year is a mystery, because the model isn't real...

Posted
17 hours ago, dimreepr said:

Our best guess is like a weather forecast: we can predict tomorrow's weather with a great deal of accuracy because we have a model based on yesterday; next year is a mystery, because the model isn't real...

I'm confused: how can you compare a weather forecast to emerging consciousness? I asked by what mechanism consciousness emerges: how, when, and why? Our best guess so far is that it has some connection to intelligence, in that the more complex an organism gets, mainly in its brain function/capability, the more consciousness emerges. So, following a similar model and extrapolating it out for A.I, at some point A.I would become conscious, assuming consciousness isn't exclusive to organic materials, of course.

Posted
6 hours ago, Intoscience said:

I'm confused: how can you compare a weather forecast to emerging consciousness?

I think it's a reasonable analogy: an emergent quality is born of complexity, but with the right model we can be reasonably accurate in our prediction of tomorrow's weather; IOW, when we start a program, even a complicated one, provided we've got the syntax right, we can predict it won't be sentient tomorrow.

Before you go thinking that "plays right into my wheelhouse": if we extend the analogy, we can be reasonably accurate when we say that next year's weather will be roughly the same, and so on (and let's not go down the climate-change complication; it has no place in this thread).

Your uncertainty argument, that future events are eternally possible, doesn't hold true until/unless you introduce a new variable (previously unseen) into the equation that changes our current understanding.
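
To make the weather analogy concrete, here's a toy sketch in Python; the logistic map below is a standard stand-in for a chaotic system like weather, and the starting values are illustrative only, not any real model:

```python
# Two runs of the chaotic logistic map, x -> r*x*(1-x), started from
# almost identical initial conditions: a toy stand-in for a weather model.
r = 4.0                      # parameter in the fully chaotic regime
x, y = 0.400000, 0.400001    # "today's" measurement, off by one millionth
for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step in (1, 5, 25, 50):
        print(f"step {step:2d}: |difference| = {abs(x - y):.2e}")
# The first few steps agree closely ("tomorrow's forecast"); after a few
# dozen steps the two trajectories are effectively unrelated ("next year").
```

Same point as above: short-range prediction from a good model works; long-range prediction of a nonlinear, emergent system doesn't.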

Posted
1 minute ago, dimreepr said:

I think it's a reasonable analogy: an emergent quality is born of complexity, but with the right model we can be reasonably accurate in our prediction of tomorrow's weather; IOW, when we start a program, even a complicated one, provided we've got the syntax right, we can predict it won't be sentient tomorrow.

Before you go thinking that "plays right into my wheelhouse": if we extend the analogy, we can be reasonably accurate when we say that next year's weather will be roughly the same, and so on (and let's not go down the climate-change complication; it has no place in this thread).

Your uncertainty argument, that future events are eternally possible, doesn't hold true until/unless you introduce a new variable (previously unseen) into the equation that changes our current understanding.

Though evolving (to some degree), the weather remains fairly consistent and relatively predictable, so it's unlikely (failing a catastrophe) that the weather mechanism will change dramatically over the next 100 years (ignoring human influence). However, A.I capability is changing very rapidly as technological progress continues. Your smartphone now is much smarter than it was 15 years ago. If you extrapolate this progress based on the current rate of development, then within 100 years A.I will be far smarter than any human in history, possibly by an unimaginable margin.

Obviously this doesn't mean it will become conscious or sentient. But the possibility is real if we assume the emergence is directly related to intelligence. Agreed, we don't have enough understanding of consciousness and how it comes into existence; hell, we can't even agree on what "intelligence" is. So until someone comes up with a clear and verifiable way of measuring/mapping the process of consciousness coming into existence, we are stabbing in the dark a bit. I feel, though, that in this instance stabbing in the dark is better than not stabbing at all.
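
As a rough illustration of what that extrapolation looks like (the two-year doubling period here is purely an assumption for the sake of the sketch, not a measured figure):

```python
# Toy extrapolation: capability that doubles every 2 years (an assumed
# figure, for illustration only) compounded over a 100-year horizon.
doubling_period_years = 2
horizon_years = 100
doublings = horizon_years / doubling_period_years
growth_factor = 2 ** doublings
print(f"{doublings:.0f} doublings -> capability x {growth_factor:.3g}")
# Output: 50 doublings -> capability x 1.13e+15
```

Whatever the true doubling period, compounding is what makes "an unimaginable margin" plausible, and also what makes the extrapolation so sensitive to its assumptions.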

Posted
6 minutes ago, dimreepr said:

I think it's a reasonable analogy: an emergent quality is born of complexity, but with the right model we can be reasonably accurate in our prediction of tomorrow's weather; IOW, when we start a program, even a complicated one, provided we've got the syntax right, we can predict it won't be sentient tomorrow.

Before you go thinking that "plays right into my wheelhouse": if we extend the analogy, we can be reasonably accurate when we say that next year's weather will be roughly the same, and so on (and let's not go down the climate-change complication; it has no place in this thread).

Your uncertainty argument, that future events are eternally possible, doesn't hold true until/unless you introduce a new variable (previously unseen) into the equation that changes our current understanding.

I think this argument can be greatly simplified if we remember that a computer, regardless of its "complexity", evolution, or self-evolution, is and will be a Turing Machine. The argument says that a TM can't be sentient.
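
For anyone unfamiliar, a Turing Machine is nothing more than a head reading and writing symbols on a tape according to a fixed rule table. Here is a minimal sketch in Python; the little bit-flipping machine is a made-up example, just to show the shape of the thing:

```python
# A minimal Turing machine simulator: a fixed rule table mapping
# (state, symbol) -> (new state, symbol to write, head move).
def run_tm(rules, tape_input, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape_input))  # sparse tape, blank elsewhere
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:   # no applicable rule: halt
            break
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

flip = {  # toy machine: invert each bit, halting at the first blank
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_tm(flip, "10110"))  # -> 01001
```

Any program, however complex, reduces to a (vastly larger) rule table of the same kind; that is the sense in which the argument says complexity alone changes nothing.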

Posted
1 minute ago, Intoscience said:

However, A.I capability is changing very rapidly as technological progress continues.

Yes, but the parameters remain the same.

Posted
17 hours ago, Genady said:

I think this argument can be greatly simplified if we remember that a computer, regardless of its "complexity", evolution, or self-evolution, is and will be a Turing Machine. The argument says that a TM can't be sentient.

I think the counter to that is: our brains, in basic physical terms, are just complex biological computers. Yet, for some unknown reason, consciousness emerges from this system. If consciousness is exclusive to complex biological systems, then fine, your argument stands. However, we don't understand how, why, or when consciousness emerges in the first place, other than that it appears to have some connection with complex intelligent systems.

Posted
4 hours ago, Intoscience said:

I think the counter to that is: our brains, in basic physical terms, are just complex biological computers. Yet, for some unknown reason, consciousness emerges from this system.

But the level of complexity is orders of magnitude greater than that of a machine.

4 hours ago, Intoscience said:

If consciousness is exclusive to complex biological systems, then fine, your argument stands. However, we don't understand how, why, or when consciousness emerges in the first place, other than that it appears to have some connection with complex intelligent systems.

You're essentially chasing a ghost, because a) currently computers aren't sentient and there is no known way to change that, and b) there is no known way to determine whether they are.

ATM this speculation is fantasy, and while that may change in the future, it's no different from speculating about FTL travel; physics says no.
