The current AIs are LLMs, large language models.
In very simplified terms, they soak up data such as text and mathematically record patterns that can be echoed back out.
They know nothing about "truth" or "reality".
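You can get a feel for this "patterns in, patterns out" idea with a toy bigram model. This is a deliberately crude stand-in for a real LLM (the training text and function names below are made up for illustration), but the principle is the same: it records which word follows which, then generates by replaying those statistics, with no notion of truth anywhere.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in the training
# text, then generate by sampling from those counts. It has no concept
# of truth; it can only echo the patterns it was fed.
training_text = "add glue to pizza sauce . eat a small rock a day ."

counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("add"))  # faithfully repeats the nonsense it was trained on
```

Feed it a joke Reddit post and it will happily recommend glue; the model has no way to know the difference.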
Recent cases you can search up: somebody asks how to keep the cheese on their pizza and is told by an AI to add glue to the sauce. Another asks if it's good to eat rocks and is told by the AI that a small rock a day is good. The first was traced to a joke Reddit post, the second to an article in The Onion; both were on the internet and got soaked up in the LLM's training.
What this means is that any crank can put something out there, an AI "reads" it, and it can be repeated back; the output means nothing.
(Edit: was typing this when Mordred's post arrived; not meaning to detract from that post.)