Genady Posted April 19, 2022

On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models? (2204.07931.pdf (arxiv.org))

In knowledge-based conversational AI systems, "hallucinations" are responses that are fully or partially factually invalid. It appears that AI does this a lot. This study investigated where these hallucinations come from. As it turns out, a big source is the datasets used to train these systems. On average, the responses the systems are trained on contain only about 20% factual information, while the rest are hallucinations (~65%), uncooperative responses (~5%), or uninformative ones (~10%).

On top of this, it turns out that the systems themselves amplify hallucinations to about 70%, while reducing factual information to about 11%, increasing uncooperative responses to about 12%, and reducing uninformative ones to about 7%.

They are getting really human-like, evidently...
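For concreteness, here is a minimal Python sketch (my own illustration, not from the paper) that puts the reported proportions side by side and works out how much each response category shifts between the training data and the model outputs. The category names and the "amplification factor" framing are assumptions for illustration only.

```python
# Reported proportions of response types, as summarized above.
# These are approximate figures quoted in the post, not exact paper values.
training_data = {"factual": 0.20, "hallucination": 0.65,
                 "uncooperative": 0.05, "uninformative": 0.10}
model_output = {"factual": 0.11, "hallucination": 0.70,
                "uncooperative": 0.12, "uninformative": 0.07}

# Compare each category: training-data share -> model-output share,
# plus a simple ratio showing how much the model amplifies or suppresses it.
for category in training_data:
    factor = model_output[category] / training_data[category]
    print(f"{category:>14}: {training_data[category]:.0%} -> "
          f"{model_output[category]:.0%}  (x{factor:.2f})")
```

Run as written, this just prints, for example, that hallucinations go from 65% to 70% (about 1.08x) while factual content drops from 20% to 11% (about 0.55x), which is the "amplification" the post describes.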
Sensei Posted April 19, 2022 (edited)

26 minutes ago, iNow said:
Do androids dream of electric sheep?

..my Android dreams of replacing the battery and electronics.. and a new screen..

ps. Seriously, damaged electronics responsible for charging the battery, plus a 7+ year old battery (plug in a USB-C cable and nothing, no charging).. I connect the wires to the motherboard to make it work.. If you found it on the street, you would say "completely broken cell phone".. no.. first you have to connect the right wires to the right places on the motherboard to make it work..

Edited April 19, 2022 by Sensei
exchemist Posted April 19, 2022

3 hours ago, iNow said:
Do androids dream of electric sheep?

Now I lay me down to sleep.
Try to count electric sheep.
Sweet dream wishes you can keep.
How I hate the night.

Now the world has gone to bed.
Darkness won't engulf my head.
I can see by infra-red.
How I hate the night.

(Marvin, the paranoid android - as if you didn't know..........)