Genady Posted December 7, 2022 I've asked ChatGPT a question and got an answer, which is correct, but ... Here it is: Why doesn't it consider B herself?
studiot Posted December 7, 2022 As I see it, there are several possibilities not considered, either in the question or in the answer. My (correct) answer is: Since A's mother has not been declared alive, no one is a possible answer if A's mother is now deceased.
TheVat Posted December 7, 2022 Good question. I don't know why we should assume B is male, or, for that matter, why B could not be a female of reproductive age who had a child on her own using artificial insemination or IVF. As for deceased, my mother is still my mother though she is deceased. I hope the linguistic basis for this is clear: mother is a term that defines a relationship, even if it was in the past.
Genady (Author) Posted December 7, 2022 Also, we should not assume that A and B are necessarily persons. If they are, say, cells or some asexually reproducing organisms, then B is the only answer. Anyway, by Occam's principle, with the data given, B is the best answer, isn't it? Update: with the follow-up questions, this AI becomes ridiculous and self-contradictory:
studiot Posted December 8, 2022 10 hours ago, TheVat said: Good question. I don't know why we should assume B is male, or, for that matter, why B could not be a female of reproductive age who had a child on her own using artificial insemination or IVF. As for deceased, my mother is still my mother though she is deceased. I hope the linguistic basis for this is clear: mother is a term that defines a relationship, even if it was in the past. Well, as I said, I disagree, though it is a fine linguistic point. The correct tense to use would be past, not present. However, we all seem agreed that there are plenty of different possibilities.
joigus Posted December 8, 2022 I agree that there are fine linguistic points to be made, involving, among other things, whether we are allowed to extend the possibilities to cells, or to deceased people, etc. But ordinary language has a lot of context attached to it, which results in the answering party filtering out possible answers that probably are not relevant to what the asking party wants to know. I would therefore address the apparent "bug" that the system ignores the obvious answer, provided B is a woman, which is what I find most interesting. My guess would be that AI systems learn by experience, and we, in our roles of experience-based learning machines --and AI engines try to mimic us in a way-- are rarely fed questions whose answer is implied in the question, so the system has not been fed enough statistics to face a situation in which the answer is implicit in the question itself. Or not often enough. In Spanish we have this joke --that you normally play on kids-- of asking "What colour is St James' white horse?" My father was particularly fond of "Who's Zebedee's daughters' father?" Kids do not expect the answer to be implied by the question, so they sometimes get confused. Maybe AI systems can suffer from some version of this glitch, which seems to be based more on what you expect a question to be about than on a clean logical parsing of said question. And the reason may well be that the AI engine, as kids do too, bases its "expectations" on previous experience, and thus approaches the question based on these "expectations."
Genady (Author) Posted December 8, 2022 I like this guess, @joigus. Here is a little evidence supporting it:
joigus Posted December 8, 2022 23 minutes ago, Genady said: I like this guess, @joigus. Here is a little evidence supporting it: Well done! You've just conducted an experiment to test the hypothesis. The chat engine is clearly assuming something --B's sex-- that's not literally implied by the question. It seems as though the system is assuming the answer must be based on a syllogism, not a "loop," or a truth to be derived from the question itself. It's good to have you back, BTW. I wonder if there's a way to guarantee that's what's going on here. Edited December 8, 2022 by joigus (minor correction)
studiot Posted December 8, 2022 1 hour ago, Genady said: I like this guess, @joigus. Here is a little evidence supporting it: Going back to the original. In this modern day and age of AIs, surely AIs (along with everybody else) should be aware that B may not be deceased, but simply no longer a woman? Furthermore, in many countries the terms husband and wife are now blurred by same-sex marriages. So my comment above still stands: 16 hours ago, Genady said: I've asked ChatGPT a question and got an answer, which is correct, but ... Here it is: Why doesn't it consider B herself? By the way, can somebody enlighten me as to what ChatGPT is, please?
Genady (Author) Posted December 8, 2022 @studiot, there are thousands of articles about ChatGPT; here is one from the horse's mouth: ChatGPT: Optimizing Language Models for Dialogue (openai.com)
Genady (Author) Posted December 8, 2022 @joigus, it doesn't look like a result of logical assumptions, because on one hand, it derives truth from the question itself in this example: and on the other hand, it is incapable of a simple syllogism in this example:
joigus Posted December 8, 2022 2 hours ago, Genady said: @joigus, it doesn't look like a result of logical assumptions, because on one hand, it derives truth from the question itself in this example: and on the other hand, it is incapable of a simple syllogism in this example: But I didn't mean that it derives its conclusions from pure logical assumptions. I meant the opposite: that there's an apparent element of empiricism, as is to be expected from a machine that learns from experience: 5 hours ago, joigus said: My guess would be that AI systems learn by experience, [...]
Genady (Author) Posted December 8, 2022 Yes, @joigus, the emphasis of your hypothesis on experience, i.e., on the training statistics, seems to me the right way to analyze this behavior. It was the following specification that looked unsupported: 5 hours ago, joigus said: It seems as though the system is assuming the answer must be based on a syllogism, not a "loop," or a truth to be derived from the question itself.
studiot Posted December 8, 2022 5 hours ago, Genady said: @studiot, there are thousands of articles about ChatGPT; here is one from the horse's mouth: ChatGPT: Optimizing Language Models for Dialogue (openai.com) Thanks for the info. So am I right in assuming that your red box denotes an input question and your green box denotes the AI response? It seems to me that the AI is conditioned to always give an answer, unlike a human. Isn't this a drawback?
Genady (Author) Posted December 8, 2022 @joigus I guess that the crucial difference between a human's and ChatGPT's experiences is in their context: the latter is an experience of language, while the former is an experience of language-in-real-life. For example, we easily visualize a daughter and her mother, and in this mental picture the mother is clearly older than the daughter. ChatGPT, instead, knows only how age comparisons appear in texts. 27 minutes ago, studiot said: Thanks for the info. So am I right in assuming that your red box denotes an input question and your green box denotes the AI response? It seems to me that the AI is conditioned to always give an answer, unlike a human. Isn't this a drawback? Yes, you're right: the red box denotes what I say and the green one denotes what the AI says. No, sometimes it says that it cannot answer, with some explanation why. Edited December 8, 2022 by Genady
joigus Posted December 8, 2022 1 hour ago, Genady said: Yes, @joigus, the emphasis of your hypothesis on experience, i.e., on the training statistics, seems to me the right way to analyze this behavior. It was the following specification that looked unsupported: Oh, I see. "Assuming a syllogism" was a bad choice of words. With this "assuming a syllogism" I was referring to the illusion it creates, IMO. But the system is not thinking logically, at least not 100% so. The only logic is a logic of "most trodden paths", so to speak. I may be wrong, of course. Perhaps modern AI implements modules of propositional logic in some way. I'm no expert. I liked your "experiments" anyway.
studiot Posted December 8, 2022 1 hour ago, Genady said: Yes, you're right: the red box denotes what I say and the green one denotes what the AI says. No, sometimes it says that it cannot answer, with some explanation why. Noted, thanks. 1 hour ago, joigus said: I liked your "experiments" anyway. Yes, I am watching with interest and learning a lot, as I don't really know much about AI.
Genady (Author) Posted December 9, 2022 Not all is bad in the ChatGPT world. Look at this:
TheVat Posted December 9, 2022 So AI can now compose dull poetry. That said, the line about "discussions and debates that never end" seems uncannily accurate!
Genady (Author) Posted December 9, 2022 11 hours ago, TheVat said: So AI can now compose dull poetry. That said, the line about "discussions and debates that never end" seems uncannily accurate! We can conclude that this is a generic feature of science forums, because this guy doesn't know anything about scienceforums.net specifically:
Prometheus Posted December 9, 2022 Has anyone tried repeating the same question multiple times? If ChatGPT works in a similar manner to GPT-3, it's sampling from a distribution of possible tokens (not quite letters/punctuation) at every step. There's also a temperature parameter, T, which allows the model to preferentially sample from the tails to give less likely answers.
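For readers unfamiliar with the mechanism Prometheus describes, here is a minimal sketch of temperature sampling in Python. The vocabulary, logits, and temperature values are made up for illustration, and this is not ChatGPT's actual decoding code; it only shows how rescaling token scores before sampling makes unlikely answers more or less probable.

import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    # Rescale scores by the temperature, apply softmax, then sample one token index.
    # Lower temperature -> sharper distribution (more deterministic choices);
    # higher temperature -> flatter distribution (tail tokens sampled more often).
    rng = rng if rng is not None else np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # subtract the max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy vocabulary and scores -- purely illustrative numbers, not real model output.
vocab = ["B", "A's mother", "unknown", "nobody"]
logits = [2.0, 1.0, 0.2, -1.0]

for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_next_token(logits, temperature=t)] for _ in range(1000)]
    print("T =", t, {word: picks.count(word) for word in vocab})

At a low temperature the top-scoring token dominates nearly every draw; at a higher temperature the rarer tokens appear noticeably more often, which is why repeating the same prompt can yield answers that differ in wording while keeping the same content.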
Genady (Author) Posted December 9, 2022 53 minutes ago, Prometheus said: Has anyone tried repeating the same question multiple times? If ChatGPT works in a similar manner to GPT-3, it's sampling from a distribution of possible tokens (not quite letters/punctuation) at every step. There's also a temperature parameter, T, which allows the model to preferentially sample from the tails to give less likely answers. Yes, I have. The answers differed in wording, not in content.
iNow Posted December 10, 2022 8 hours ago, Prometheus said: Has anyone tried repeating the same question multiple times? If ChatGPT works in a similar manner to GPT-3, it's sampling from a distribution of possible tokens (not quite letters/punctuation) at every step. There's also a temperature parameter, T, which allows the model to preferentially sample from the tails to give less likely answers. I had it write me a resume today as a test. I just told it what job and level it was for, and that I wanted the resume to get through the AI screening programs recruiters so often use today before actually looking at the submissions. It was solid.
studiot Posted December 10, 2022 9 hours ago, iNow said: I had it write me a resume today as a test. I just told it what job and level it was for, and that I wanted the resume to get through the AI screening programs recruiters so often use today before actually looking at the submissions. It was solid. Interesting. +1
Genady (Author) Posted December 11, 2022 On 12/9/2022 at 10:46 PM, iNow said: I had it write me a resume today as a test. I just told it what job and level it was for, and that I wanted the resume to get through the AI screening programs recruiters so often use today before actually looking at the submissions. It was solid. This article lists the "best" uses for ChatGPT, and the last one is similar to what you did, I think. It also links to another article about ChatGPT's limitations. The 5 Best Uses (So Far) for ChatGPT's AI Chatbot (cnet.com) Edited December 11, 2022 by Genady