Everything posted by Genady
-
The Consciousness Question (If such a question really exists)
Genady replied to geordief's topic in General Philosophy
You're right, perhaps consciousness is different. More fundamental and, perhaps, more objective. For example, it may be an ability of the brain to take some of its own processes as input, while brains without consciousness process only inputs that arrive from elsewhere. In that case, we might eventually find out what brain structures provide this ability and then look for similar structures in other creatures.
-
Let me be the first to announce the birth of a new science. Lee Smolin et al. explain it in a new paper, Biocosmology: Towards the birth of a new science.
-
The impact is obvious on this small island: the measures go up - in 1-2 weeks the numbers go down; the measures go down - in 1-2 weeks the numbers go up.
-
Many thanks. +1
-
The Consciousness Question (If such a question really exists)
Genady replied to geordief's topic in General Philosophy
Remember our discussion about free will a couple of months ago? My resolution is the same: just different reference frames.
-
The Consciousness Question (If such a question really exists)
Genady replied to geordief's topic in General Philosophy
This crawling neutrophil appears to be consciously chasing that bacterium:
-
I've thought of a test for understanding human speech by an AI system: give it a short story and ask questions which require an interpretation of the story rather than finding an answer in it. For example*, consider this human conversation:

Carol: Are you coming to the party tonight?
Lara: I've got an exam tomorrow.

On the face of it, Lara's statement is not an answer to Carol's question. Lara doesn't say Yes or No. Yet Carol will interpret the statement as meaning "No" or "Probably not." Carol can work out that "exam tomorrow" involves "study tonight," and "study tonight" precludes "party tonight." Thus, Lara's response is not just a statement about tomorrow's activities; it contains an answer and a reasoning concerning tonight's activities.

To see if an AI system understands it, ask, for example: Is Lara's reply an answer to Carol's question? Is Lara going to the party tonight, Yes or No? Etc.

I haven't seen this kind of test in natural language processing systems. If anyone knows of something similar, please let me know.

*This example is from Yule, George. The Study of Language, 2020.
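A minimal sketch of how such a test could be posed to an off-the-shelf text-generation model, assuming the Hugging Face transformers library is installed; the model name "gpt2" is only a placeholder, and this is just an illustration of the test format, not a claim about any particular system:

```python
from transformers import pipeline

# The short story and the interpretive questions from the post above.
story = (
    "Carol: Are you coming to the party tonight?\n"
    "Lara: I've got an exam tomorrow.\n"
)
questions = [
    "Is Lara's reply an answer to Carol's question?",
    "Is Lara going to the party tonight, yes or no?",
]

# "gpt2" is a placeholder; any text-generation model could be plugged in.
generator = pipeline("text-generation", model="gpt2")

for q in questions:
    prompt = f"{story}\nQuestion: {q}\nAnswer:"
    out = generator(prompt, max_new_tokens=20, do_sample=False)
    # What matters is whether the continuation shows the interpretive step
    # (exam tomorrow -> study tonight -> no party), not whether it copies
    # words from the story.
    print(q, "->", out[0]["generated_text"][len(prompt):].strip())
```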
-
The Consciousness Question (If such a question really exists)
Genady replied to geordief's topic in General Philosophy
It seems to me that the question then shifts to, "What constitutes a thing?"
-
We cannot explain to other humans the meaning of finite numbers either. How do you explain the meaning of "two"?
-
I don't know where it is, but I've heard it many times from the mods: "Rule 2.7 requires the discussion to take place here ("material for discussion must be posted")".
-
The Consciousness Question (If such a question really exists)
Genady replied to geordief's topic in General Philosophy
Is such a test needed? Isn't everything conscious?
-
I remember I had it, too. Except it was called something else. I don't remember what, but it was in Cyrillic. The metal parts looked exactly the same, but the architectural elements that look plastic here were wooden pieces in my case. Even better look and feel that way. My mother was an architect and my father was a construction engineer - they made sure I got such stuff...
-
On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models? (arXiv:2204.07931)

In knowledge-based conversational AI systems, "hallucinations" are responses which are factually invalid, fully or partially. It appears that AI does this a lot. This study investigated where these hallucinations come from. As it turns out, a big source is the datasets used to train these systems. On average, the responses on which the systems are trained contain about 20% factual information, while the rest is hallucination (~65%), uncooperative (~5%), or uninformative (~10%). On top of this, it turns out that the systems themselves amplify hallucinations to about 70%, while reducing factual information to about 11%, increasing uncooperative responses to about 12%, and reducing uninformative ones to about 7%.

They are getting really human-like, evidently...
-
OK, it might constitute a part of the solution. Like hair is a part of a dog.
-
I don't think the substrate matters in principle, although it might matter for implementation. I think intelligence can be artificial. But I think that we are nowhere near it, and that current AI, with its current machine-learning engine, does not bring us any closer to it.
-
Unless all these programs are already installed in the same computer.
-
Yes, this is a known concern.
-
But I didn't say DNA.
-
I think I can program in random replication errors. Maybe I don't understand what you mean here.
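For what it's worth, a minimal sketch of what "programming in random replication errors" could look like; the symbol string, alphabet, and error rate below are arbitrary placeholders, not anything from the thread:

```python
import random

ALPHABET = "01"
ERROR_RATE = 0.02  # probability of a copying error per symbol (arbitrary)

def replicate(pattern: str) -> str:
    """Copy the pattern symbol by symbol, occasionally copying wrongly."""
    copy = []
    for symbol in pattern:
        if random.random() < ERROR_RATE:
            copy.append(random.choice(ALPHABET))  # random replication error
        else:
            copy.append(symbol)
    return "".join(copy)

pattern = "0110100110010110"
for generation in range(5):
    pattern = replicate(pattern)
    print(generation, pattern)
```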
-
(continuing my devil's advocate mission...)

1. The web, clouds, distributed computing, etc. are environments of lots of cooperating computers, aren't they?
2. Hmm, I can't think of a good enough computer analogy for this...
3. Computers can self-replicate, at least in principle (a toy software-level illustration is sketched below).

(BTW, it took me some time to figure out what is paradoxical in your jigsaw example. But I did. I think AI could deal with this kind of language vagueness, given enough examples.)

(I gave the statement "My jigsaw has a missing piece" to Google Translate and it translated it correctly, without any inherent paradoxes, into both Russian and Hebrew.)
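On point 3, a classic toy illustration (not anything from the discussion itself) is a quine, a program whose only job is to print its own source:

```python
# The two lines below form a quine: running them prints exactly those two lines.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

It is a far cry from biological self-replication, of course, but it shows that "a thing producing a copy of itself" is not off-limits to software in principle.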
-
Yes, I know, they started as somewhat different questions, but boiled down to the same subject. I'd like to hear your comparison, regardless of where you post it. Perhaps the thread about what computers can't do is more relevant. I took note of the differences you've referred to before. Thank you. Perhaps, but what are these machines fundamentally missing that leads to this difference? What would prevent a sophisticated system of them from behaving like a system described by iNow earlier:
-
You explain, correctly, why the current artificial intelligence is human-like. However, my question is different: is human intelligence computer-like? It specifically refers to the human intelligence abilities which are not realized in the current AI. The current AI realizes only a very small subset of human intelligence tasks. What about the unrealized tasks? Are they, or some of them, unrealizable in principle? Is there some fundamental limitation in computer abilities that prevents AI from mimicking all of human intelligence?

Following @studiot's clarification, let's stay with classical digital computers, because their functionality is precisely defined, via reducibility to a Turing machine (TM). Anyway, all AI today is realized by this kind. Is human intelligence just a very complicated TM, or does its functionality require some fundamentally different phenomenon, irreducible to a TM in principle? We know at least one such physical phenomenon, quantum entanglement. It is mathematically proven that this phenomenon cannot be mimicked by classical digital computers. Is human intelligence another one like that?

If human intelligence in fact is reducible to a TM, i.e. is realizable by classical digital computers, then perhaps the intelligence of all other animals on Earth is so, too. But if it is not, then another question will be: when and how did evolution switch to this kind of intelligence? Mammals? Vertebrates? CNS? ...
-
Yes, human, because it is more interesting and understandable to us. OTOH, the goal is not necessarily pragmatic. It can be for sport or for research, for example. I don't think they developed an artificial Go champion because it was needed.
-
So, brains do many things that computers don't do, and computers do many things that brains don't do. Maybe the question should be narrowed to a domain where their functions seemingly overlap, namely, intelligence: Is human intelligence a biologically implemented computer?
-
I think today "computation" applies to "whatever computers can do". This is certainly what they mean in the book I've cited in the OP.