
Posted

Read a bit about this. It's very well done, but it's not real. Lemoine's intent was to draw attention to the potential consequences of the Google organisation ignoring the system's sentience when it happens. He's thinking ahead because, being an emergent process, sentience will just happen when sufficient complexity arises in the system.

Posted (edited)
2 hours ago, StringJunky said:

Read a bit about this. It's very well done, but it's not real. Lemoine's intent was to draw attention to the potential consequences of the Google organisation ignoring the system's sentience when it happens. He's thinking ahead because, being an emergent process, sentience will just happen when sufficient complexity arises in the system.

The conversation between him and the AI is real (as far as what he has said goes). The video above just recreates it with text-to-speech software. I agree that it is not clear whether the AI is sentient. Blake Lemoine has stated in another video, though, that he does believe (based on his own beliefs) that it is sentient. I'll post that interview. He's kind of an interesting guy, although I don't agree with some of his beliefs myself.

This is an interview with Blake Lemoine. It's interesting.

 

Edited by moreno7798
  • 1 month later...
Posted (edited)

I can agree with StringJunky. Google is, and should be, thinking ahead about the ethical consequences their algos could have. And Google is not the only one. Mankind should be aware that algos, certainly in the long term, have the capability of misleading humans by giving answers and reactions that cannot possibly be validated. In the future it is certainly possible that algos will outsmart (some or most) people. What if an algo gives an answer a human being does not understand? Should the human just trust the algo and rely on its judgement? At the beginning that will cause struggles, but later generations of humanity will get used to it and obey the algo's judgments. Humanity should be aware of this, and cherish and nurture healthy criticism. Some decisions should always be made by responsible humans.

What is happening, imho, is at least two things:

1. The interviewer asks the wrong questions, in a leading-question format.

2. Technical: the algo is a large language model like GPT-3, a decoder-only Transformer neural network trained to predict the next word. Basically that is a well-trained parrot. Parrots copy and squawk (see the sketch below).
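
To make the "parrot" point concrete, here is a minimal sketch of what such a model does when it "answers": it repeatedly predicts the next token from the statistics it learned during training. This is only an illustration under stated assumptions; it uses the Hugging Face transformers library with GPT-2 as a freely available stand-in (LaMDA itself is not public), and the prompt and the 20-token length are arbitrary choices.

```python
# Sketch of autoregressive next-token prediction ("well-trained parrot").
# Assumption: GPT-2 via Hugging Face transformers as a stand-in for
# proprietary models such as GPT-3 or LaMDA.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Are you sentient?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate one token at a time: at each step the model emits a probability
# distribution over its whole vocabulary, and we sample one token from it.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits[:, -1, :]      # scores for the next token only
    probs = torch.softmax(logits, dim=-1)               # scores -> probabilities
    next_id = torch.multinomial(probs, num_samples=1)   # sample the next token
    input_ids = torch.cat([input_ids, next_id], dim=-1) # append and repeat

print(tokenizer.decode(input_ids[0]))
```

Nothing in this loop consults beliefs, goals, or experiences; the fluency comes entirely from the statistics of the training text, which is the sense in which the model is a well-trained parrot.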

Edited by MyCall
