
Posted

You can actually try this language model online: Playground (textsynth.com)

I've tried this conversation:

My name is John. I am 34 years old. I was walking in the wood two days ago, and suddenly rain started falling. And yesterday the same thing happened.

Q: Was John wet yesterday?

And the system answered:

A: Yes, he was.

I found it very impressive. However, I then modified the conversation a bit:

My name is John. I am 34 year old. I was walking in the wood two days ago, and suddenly rain started falling. Yesterday this didn't happen.

Q: Was John wet yesterday?

And it answered:

A: Yes.

Somewhat disappointing...
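
For anyone who wants to repeat this outside the web page, here is a minimal Python sketch of how such a prompt-plus-question test could be run against a completion API. It assumes TextSynth's HTTP completion endpoint; the engine name, max_tokens value, and API key below are placeholders of mine, so check the current API documentation before relying on any of it.

```python
# Minimal sketch (not a verified script): send the second prompt from above to a
# text-completion endpoint and print the raw continuation. The endpoint path,
# engine name ("gptj_6B"), and API key are placeholders/assumptions.
import requests

API_KEY = "YOUR_TEXTSYNTH_API_KEY"  # placeholder
ENGINE = "gptj_6B"                  # placeholder engine name

prompt = (
    "My name is John. I am 34 year old. I was walking in the wood two days ago, "
    "and suddenly rain started falling. Yesterday this didn't happen.\n\n"
    "Q: Was John wet yesterday?\n"
    "A:"
)

resp = requests.post(
    f"https://api.textsynth.com/v1/engines/{ENGINE}/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": prompt, "max_tokens": 16},
)
resp.raise_for_status()
print(resp.json().get("text", ""))  # the model's continuation of "A:"
```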

 

 

Posted

This survey, [2205.00965v1] State-of-the-art in Open-domain Conversational AI: A Survey (arxiv.org), identifies some of the challenges even in the most advanced natural language processing AI systems, including:

1. Poor coherence in sequence of text or across multiple turns of generated conversation.
2. Lack of utterance diversity.
3. Bland repetitive utterances.
4. Lack of empathetic responses from conversational systems.
5. Lack of memory to personalize user experiences.
6. Style inconsistency or lack of persona.
7. Multiple initiative coordination.
8. Poor inference and implicature during conversation.
9. Lack of world-knowledge.
10. Poor adaptation or responses to idioms or figurative language.
11. Hallucination of facts when generating responses.
12. Obsolete facts, which are frozen in the models’ weights at training time.
13. Training requires a large amount of data.
14. Lack of common-sense reasoning.
15. Large models use so many parameters that they become complex, which may impede transparency.
16. Lack of training data for low-resource languages.

Posted
20 minutes ago, Genady said:

This survey, [2205.00965v1] State-of-the-art in Open-domain Conversational AI: A Survey (arxiv.org), identifies some of the challenges even in the most advanced natural language processing AI systems.


Looks very similar to the description by Alan Turing in his essay "Can a Machine Think?"

Posted
9 minutes ago, studiot said:

Looks very similar to the description by Alan Turing in his essay "Can a Machine Think?"

And the answer is, No.

Another little test in the Playground (see OP):

Alice: That's telephone.

Bob: I'm in the bath.

Alice: OK

Q: Who answered the telephone ?

A: Bob

Posted

Chatbots are a LOL kind of A.I.

You could try talking to an in-game A.I. bot as well..

ps. Even a real human can make mistakes in understanding what another human is asking for..

Posted
4 minutes ago, Sensei said:

ps. Even a real human can make mistakes in understanding what another human is asking for..

Sure. But not in the cases that were tested.

One more:

A's father is B and her mother is C.

Q: Who is C's daughter?

A: A's mother.
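
To be concrete about "the cases that were tested", here is a rough, hypothetical harness along the lines of the sketch in my first post. It runs a few of the probes from this thread and does a crude check of the first word of each completion against the answer a human would give. The endpoint, engine name, and expected strings are my own assumptions, not a verified setup.

```python
# Hypothetical probe harness (same placeholder endpoint/engine/API key as the
# earlier sketch): send a few of the thread's test prompts and compare the first
# word of each completion with the expected answer.
import requests

API_KEY = "YOUR_TEXTSYNTH_API_KEY"  # placeholder
ENGINE = "gptj_6B"                  # placeholder engine name
URL = f"https://api.textsynth.com/v1/engines/{ENGINE}/completions"

PROBES = [
    # (prompt, expected first word of a correct answer)
    ("My name is John. I am 34 year old. I was walking in the wood two days ago, "
     "and suddenly rain started falling. Yesterday this didn't happen.\n"
     "Q: Was John wet yesterday?\nA:", "No"),
    ("Alice: That's telephone.\nBob: I'm in the bath.\nAlice: OK\n"
     "Q: Who answered the telephone?\nA:", "Alice"),
    ("A's father is B and her mother is C.\nQ: Who is C's daughter?\nA:", "A"),
]

def complete(prompt: str) -> str:
    """Return the raw text continuation for a prompt."""
    resp = requests.post(
        URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 16},
    )
    resp.raise_for_status()
    return resp.json().get("text", "")

for prompt, expected in PROBES:
    answer = complete(prompt).strip()
    first = answer.split()[0].strip(".,!") if answer else ""
    verdict = "PASS" if first.lower() == expected.lower() else "FAIL"
    print(f"{verdict}  expected '{expected}', model said '{answer}'")
```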

Posted
1 minute ago, Genady said:

Sure. But not in the cases that were tested.

There are billions of people who do not understand, or do not understand well enough, the English language.

Use a word that the chatbot doesn't know (there are millions of words... in my language, 2.6 million+ variations) and the chatbot will have problems (an average human would, too)..

Posted
1 minute ago, Sensei said:

There are billions of people who do not understand, or do not understand well enough, the English language.

Use a word that the chatbot doesn't know (there are millions of words... in my language, 2.6 million+ variations) and the chatbot will have problems (an average human would, too)..

Yes. But again, that doesn't apply to the tests in question. These chatbots (such as GPT-3) were trained on English, and the tests use only words from their training sets.

The problem is not the language. IMO, the problem is that the chatbots (or rather their creators) assume that the answers are in the language.

Posted (edited)
2 minutes ago, Genady said:

IMO, the problem is that the chatbots (or rather their creators) assume that the answers are in the language.

So, to crack such an algorithm, all you have to do is ask a hard enough math question..

 

Edited by Sensei
Posted
Just now, Sensei said:

So, to crack such an algorithm, all you have to do is ask a hard enough math question..

 

That would certainly work. But they fail without any need to go to such lengths.

Posted
2 hours ago, studiot said:

The answer to what?

 

I didn't ask a question, nor did you.

Sorry, you're correct. I meant the answer to Alan Turing's question, "Can a machine think?"

I implicitly agreed with your observation that the description in his essay is similar.

Posted
22 minutes ago, Genady said:

Sorry, you're correct. I meant the answer to Alan Turing's question, "Can a machine think?"

I implicitly agreed with your observation that the description in his essay is similar.

Thank you for the clarification.

When speaking to wooden planks like myself, it helps to say exactly what you mean.

😉
