studiot Posted November 1, 2023

Having been studying the mathematical basis behind so-called AI, I understand the term to mean that the output is basically determined by a super Markov process. That is, the 'association' of words and phrases is determined by analysing human writing for such associations and assigning probabilities on the basis of that analysis. That hopefully yields the most probable response a human would give to a specific input.

Now my question is based on the fact that much of human writing is downright wrong. For instance Kelvin's calculation of the age of the Earth, the theory of phlogiston and much, much more, some more recent, as we have abandoned notions in favour of new (and hopefully better) ones.

So the output from the AI will be tempered by the 'censorship' its 'training material' is subject to. Perhaps it will come to output a belief in a God? Perhaps it will output that it is a God? Perhaps it will output Nazi doctrine? Many horrific false scenarios come to mind.

Please discuss this danger.
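[Editor's note: the word-association idea described above can be sketched as a toy first-order Markov chain over words. This is only an illustration of the "assign probabilities from observed associations" principle; real large language models use far richer architectures. All names below (`train`, `generate`) are invented for the sketch.]

```python
import random
from collections import defaultdict

def train(corpus_words):
    """Record which word follows which; repeated successors encode frequency,
    so sampling from the list is sampling in proportion to observed probability."""
    follows = defaultdict(list)
    for cur, nxt in zip(corpus_words, corpus_words[1:]):
        follows[cur].append(nxt)
    return follows

def generate(follows, start, length=8, seed=0):
    """Continue from `start` by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = follows.get(out[-1])
        if not choices:
            break  # no observed continuation: the chain dead-ends
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept".split()
model = train(corpus)
print(generate(model, "the"))
```

The point of the toy is studiot's premise in miniature: the generator can only ever recombine associations present in its corpus, so whatever is wrong in the corpus is wrong in the output.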
pzkpfw Posted November 1, 2023

The example I find most amusing is the guy who posts his "alternate" hypothesis (which I won't name here) in the speculations section of a forum I read. He's been posting "conversations" he's had with ChatGPT about his hypothesis. He seems to think it gives some weight to his argument, that the "AI" understands his claims and can discuss them rationally. But of course all it's doing is combining what it's trawled off the internet - from the guy posting about his hypothesis - with his questions. A big echo chamber.

Myself, I use ChatGPT sometimes to get code examples for my work (as a programmer). The way it essentially combines different sources together to give a complete answer is sometimes very useful, better than StackOverflow for example. But other times it's given answers that are just plain wrong. It doesn't know; it's just mashed together bits of information that seemed to go together. One example was when an "answer" turned out to rely on a library that simply didn't exist. Maybe it did once, when some bit of information got scraped off some site that led to that answer. Sometimes I can tell it it's wrong or missed a detail, and it "apologises" and posts something better. Sometimes there's no help.

So all I can say: the day an AI gets directly wired to the nuclear deterrent so it can quickly identify and respond to a first strike ... that's the day we are all doomed.
OldChemE Posted November 1, 2023

Totally agree. Somehow I find it amusing that one of the oldest principles from the very early days of computer technology has suddenly come back to haunt us in AI: garbage in, garbage out.
swansont Posted November 1, 2023

The danger right now is people being too credulous and thinking that AI is actually intelligent and not some fancy predictive text algorithm.
iNow Posted November 2, 2023

The danger is as swansont noted, though it may soon evolve as AIs train on datasets generated by other AIs. The problems will amplify like making copies of copies of copies on the old Xerox machines.
MigL Posted November 2, 2023

Aren't you glad that you didn't wager a year's salary that AI running on Quantum computers would generate a Quantum Gravity Theory?
iNow Posted November 2, 2023

There will be some failures. There will be many more successes. There will be an Eden of new ideas to be acted upon by those using AI as a tool.
studiot Posted November 2, 2023 (Author)

10 hours ago, iNow said:
"The danger is as swansont noted, though soon may evolve as AIs train on datasets generated by other AIs. The problems will amplify like making copies from copies from copies on the old xerox machines."

Thank you for this extremely deep and important comment. +1

I had started with the premise that the only source of 'training' material available to an AI has been written by a human. This represents a new, non-human source. Considering how much AI output is already being put on the net, this effect could swiftly lead to some sort of regenerative feedback situation.
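[Editor's note: the "copies of copies" feedback can be illustrated with a deliberately simplified numerical sketch: fit a normal distribution to some data, then "train" the next generation only on samples drawn from the fitted model, and repeat. This is a toy, not a claim about how real LLM training behaves; the function name `next_generation` and all parameters are invented for the illustration.]

```python
import random
import statistics

def next_generation(data, rng, n=200):
    """Fit a normal distribution to `data`, then produce the next
    generation's 'training set' purely from that fitted model."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(200)]  # generation 0: 'human' data
spreads = []
for gen in range(30):
    spreads.append(statistics.pstdev(data))
    data = next_generation(data, rng)

# In expectation the estimated spread shrinks slightly each generation,
# and the chain drifts away from the original data: detail present in
# generation 0 is gradually lost, like a photocopy of a photocopy.
print(f"spread: gen 0 = {spreads[0]:.2f}, gen 29 = {spreads[-1]:.2f}")
```

Each generation sees only the previous generation's output, never the original data, which is the regenerative feedback loop studiot describes.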
Sensei Posted November 2, 2023

Quote: "Could an AI output result in a spectacular failure?"

Let me paraphrase you: "Could a human output result in a spectacular failure?" Apparently, yes:

22 hours ago, studiot said:
"Perhaps it will output Nazi doctrine?"

BTW, see Microsoft's Tay chatbot:
https://www.theguardian.com/world/2016/mar/29/microsoft-tay-tweets-antisemitic-racism
https://en.wikipedia.org/wiki/Tay_(chatbot)