Yes, it is a conjecture, of course. As long as we have not succeeded, we cannot be sure. But as TheVat already said, it is important to keep an open mind. We do not know what belongs to the essential properties of neurons and how they must be connected to generate consciousness.
And I also think that @Genady is right, that an 'AGI' must have its own means of observing and moving.
TheVat already answered it for me:
Deep Learning is modeled after how neurons work. The output that ChatGPT produces is not generated by rules implemented by humans. From Genady's linked article:
Whether these simplified models of neurons suffice to replicate our mental capabilities, and can lead to consciousness, is an open question. But the output can definitely surprise the programmers. This is not ELIZA or SHRDLU: in those AI programs, the rules were explicitly programmed. That is why your examples of your Python program, thermostats, elevator software, etc. are simply a dishonest comparison.
Yep, and you are made of chemicals that you can buy at the chemist's.
I let ChatGPT write a small bash script for me. It did it in an almost human way: the first version was wrong, I described what was wrong, and it came back with a better version, though still not quite correct. In the end, the fifth version did exactly what I wanted.
Yesterday I tried it with an elevator, but it did not succeed. So I think I have to call the elevator repairman...