I can agree with StringJunky. Google is, and should be, thinking ahead about the ethical consequences its algorithms could have. And Google is not the only one. Mankind should be aware that algorithms, certainly in the long term, are capable of misleading humans by giving answers and reactions that cannot possibly be validated. In the future it is entirely possible that algorithms will outsmart (some or most) people. What if an algorithm gives an answer a human being does not understand? Should the human just trust the algorithm and rely on its judgment? At the beginning that will cause struggles, but future generations of humanity will get used to it and obey the algorithms' judgments. Humanity should be aware of this, and cherish and nurture healthy criticism. Some decisions should always be made by responsible humans.
What is happening, imho, is at least two things.
1 The interviewer asks the wrong questions, in a leading-question format.
2 Technical. The algorithm is based on GPT-3, an autoregressive (decoder-only) transformer neural network. Basically that is a well-trained parrot. Parrots copy and squawk.
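To make the "well-trained parrot" point concrete, here is a toy sketch of the underlying idea. This is a bigram Markov chain, vastly simpler than GPT-3 (which is a huge transformer), but the principle it illustrates is the same: the model only learns word-to-word statistics from its training text and then recombines them, so every word it ever produces was already in its training data. The corpus and function names here are made up for illustration.

```python
import random

def train_bigrams(text):
    # Record, for each word, which words followed it in the training text.
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    # Repeatedly pick a word that followed the previous word in training:
    # pure imitation of training statistics, no understanding involved.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the parrot repeats what the parrot hears"
model = train_bigrams(corpus)
print(generate(model, "the", 6))
```

Whatever this "parrot" outputs can sound fluent locally, yet it is only echoing its training data, which is why fluent-sounding answers alone are no evidence of understanding.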