Kyrisch Posted July 3, 2009

As many here already know, the Turing Test is a test for the strength of artificial intelligence proposed by the computer scientist Alan Turing. It consists of a human and a computer interacting through text (much like modern-day instant messaging). If the human cannot tell that the computer is a computer and mistakes it for a real human being, the computer has passed the Turing Test.

Now, many have tried (and failed) to come up with so-called "chatbots". I remember back when AOL first came out, "Smarterchild" was very popular. The problem with Smarterchild, however, was that it had simple, stored responses for a given category of questions and statements. As such, it never added anything to the conversation, its responses were only loosely relevant, and it did not 'remember' any more than the last line of conversation: it responded to that directly and then stopped. It was a very dull program.

Cleverbot is different. Cleverbot is a chatbot that learns appropriate context from the users it interacts with. When presented with a statement or a question, it searches its database to find whether it has ever posed a similar question to a user, and then spits out what that user responded with. This simple process has created a surprisingly dynamic (though often wacky) conversationalist. After a few days of interaction, I can see why an earlier prototype of Cleverbot (called George, I think) won the Bronze Medal for Most Convincing Human Interaction (no one has won the silver or gold). However, I've noticed some very interesting "patterns of behaviour" that have made me wonder whether the Turing Test is a good test for artificial intelligence after all.

Among many things, Cleverbot appears extremely clever. I have had exchanges in which it responded appropriately to sarcasm, to emoticons, and to other nuances that one would not ordinarily expect a program of this sort to pick up. However, it is just an illusion. The bot is not actually comprehending the words or understanding the tone; it is merely cross-checking its vast database for proper context. It is matching, at the very most. I liken it to Linnaeus, the father of the "tree of life", who first put together a nested hierarchy merely through morphological comparison, with zero regard for genetic similarity or evolutionary history, and yet was superbly successful in producing a tree very similar to the one held now by scientific consensus. But did Linnaeus himself actually understand any biology (processes, mechanisms, evolutionary advantages)? Patently not. And it is just as obvious that Cleverbot doesn't actually "understand" English, or think at all.

For instance, this odd illusion of intelligence has some strange quirks which expose what is really going on, which I will show you with real examples:

User: Are you human?
Cleverbot: I'm human...
User: Do you think I'm human?
Cleverbot: No.
User: Why not?
Cleverbot: Because.
User: Why don't you think I'm human?
Cleverbot: I don't think you're human because it says that you're a bot.

This is particularly funny because, after a little pondering, it becomes very obvious why it 'thinks' it's human. Imagine you're a user. If the bot asks you whether you're human, you will probably respond with a statement of the obviousness of the answer and the ludicrousness of the question. Further, every single user it encounters tells it that it is a robot.
Since it learns contextual appropriateness from the users, it follows that its conversational "stance" will always be "Of course I am human, and you are the bot." It will never learn that it is not human and that the users are not bots; in fact, it will never learn anything, because it does not think. And yet the illusion that it believes it is human is very strong, because one can argue with it for hours and it comes up with splendid, myriad arguments that sound just like they came from a real person's mouth (because they did).

However, in terms of learning context, it is easily 'teachable':

User: What is the distance from Timbuktu?
Cleverbot: More than 1 million away from earth to sun.
User: How many pounds of flax?
Cleverbot: It registered an intensity of 6 on the japanese scale.
User: More than 1 million away from earth to sun.
Cleverbot: How many pounds of flax?

The reason I started with such a random statement was to prompt a random statement from the bot. I needed it to say something arcane because the more random a statement is, the less likely it is to already have contextually appropriate responses associated with it. I then made a random statement myself, waited an exchange, and then repeated the arcane statement it had made earlier. Because it 'learned' that flax is contextually associated with 1 million whatsits, and nothing else was already associated, that is what it spat out. Even more amusingly, you can perpetuate the loop: all you have to say is its own response to your previous 'taught' statement, and the three statements become circularly linked by context. However, this has only worked a few times (and did not work just now, which is why I don't have an example), because once three statements are involved, there is a bit more randomness in the processes that dictate the bot's responses.

So, comments? Is Cleverbot an accomplishment? Will it ever pass the Turing Test? Does it show that the Turing Test does not actually test "intelligence"?
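P.S. To make the learn-and-retrieve mechanism concrete, here is a rough Python sketch of the kind of loop I'm describing. To be clear, this is only my illustration of the general idea; the class name, the string-similarity measure, and the fallback reply are my own inventions, not anything from Cleverbot's actual code.

```python
# A minimal sketch of the retrieval idea described above: remember what users
# said in reply to each prompt, then answer a new input by finding the most
# similar stored prompt and echoing back a reply a human once gave to it.
# This is NOT Cleverbot's actual implementation, just an illustration.
import random
from difflib import SequenceMatcher


class RetrievalBot:
    def __init__(self):
        self.memory = {}          # prompt -> list of replies humans gave to it
        self.last_utterance = None

    def _similarity(self, a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def respond(self, user_input):
        # Learn: the user's input is a human reply to whatever the bot said last.
        if self.last_utterance is not None:
            self.memory.setdefault(self.last_utterance, []).append(user_input)

        # Retrieve: find the stored prompt most similar to the user's input
        # and hand back one of the human replies recorded for it.
        if self.memory:
            best = max(self.memory, key=lambda p: self._similarity(p, user_input))
            reply = random.choice(self.memory[best])
        else:
            reply = "Tell me more."  # nothing learned yet

        self.last_utterance = reply
        return reply
```

Note that a bot built like this will happily insist it is human, because the only replies it has on file are ones real humans gave.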
Shadow Posted July 3, 2009

Intelligence? Yes. Humanness/consciousness? No. There's a difference between the two. The Turing test tests whether a computer can fool a human. You have to have some measure of intelligence in order to fool anything, but you certainly don't have to be human or conscious. And that's what chatbots are about; they fool. When a computer passes the Turing test, it only means that it has a vast database of responses and that the programmers used clever and thorough algorithms. But it'll still be a computer. It'll still be dumb, clueless about its own existence.

Another thing I don't like about the Turing Test is that it depends a lot on the type of person chosen, the questions asked, and so on. The conversation could be influenced by little things like the weather, and that in itself could decide whether or not the computer passes the test. But no, I don't have a better idea.

I think Cleverbot is definitely an accomplishment, if only because I've never heard of this type of response generation in another bot. And every advancement is an accomplishment. Another chatbot I like, just out of principle, is iGod. Also, even though it's not a chatbot, check out Akinator, the web genius. I think the principles it operates on are related to those Cleverbot uses, so it warrants a mention. On top of that, I think it's pretty good.
Kyrisch (author) Posted July 3, 2009

Shadow wrote: "Another chatbot I like, just out of principle, is iGod. Also, even though it's not a chatbot, check out Akinator, the web genius. I think the principles it operates on are related to those Cleverbot uses, so it warrants a mention. On top of that, I think it's pretty good."

It seems to me that iGod works the same way Smarterchild does. It has standard responses for certain phrases, sentence types, et cetera, but it only answers line by line and forgets context. I wasn't very impressed. Akinator is interesting, though. I have one of those 20-questions balls at home; it's pretty good if you pick something normal but fails for the really esoteric stuff. I always used to think that if I could input my answer at the end, it would work really well. Akinator is a bit more closely related, even though it isn't a chatbot, because it has a user-manipulated, growing database.
Shadow Posted July 4, 2009

It didn't find Stephen Wolfram on the first try, though... But yeah, things that "learn" are always better, which I guess is why Akinator stunned me so much. I guess the people who make this stuff usually just don't want to pay for the memory...
bascule Posted July 6, 2009

Kyrisch wrote: "Cleverbot is different. Cleverbot is a chatbot that learns appropriate context from the users it interacts with. When presented with a statement or a question, it searches its database to find whether it has ever posed a similar question to a user, and then spits out what that user responded with. This simple process has created a surprisingly dynamic (though often wacky) conversationalist."

That's not different at all. Many chatbots have been written using Markov chains. Note that the same machinery, Markov chains, also underlies Google's PageRank algorithm. What's pretty cool about Markov-based chatbots is that they can start out as a blank slate and learn whatever language you throw at them. Such systems may be able to distill collective intelligence from other agents, but they aren't intelligent themselves.
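For anyone curious, a bare-bones word-level Markov babbler takes only a few lines. The sketch below is generic, not any particular bot's code, and the class and function names are mine:

```python
# A minimal order-1 (bigram) Markov chain text generator: learn which word
# tends to follow which, then babble by sampling from those learned
# transitions. Real Markov chatbots work the same way, just with more state.
import random
from collections import defaultdict


class MarkovBabbler:
    def __init__(self):
        self.transitions = defaultdict(list)  # word -> words seen after it

    def learn(self, text):
        words = text.split()
        for current, following in zip(words, words[1:]):
            self.transitions[current].append(following)

    def babble(self, seed, max_words=20):
        word, output = seed, [seed]
        for _ in range(max_words - 1):
            choices = self.transitions.get(word)
            if not choices:
                break  # dead end: nothing was ever seen after this word
            word = random.choice(choices)
            output.append(word)
        return " ".join(output)


bot = MarkovBabbler()
bot.learn("the cat sat on the mat and the cat slept")
print(bot.babble("the"))  # e.g. "the cat slept" or "the mat and the cat sat..."
```

Feed it enough chat logs and it will imitate the surface statistics of the language without modelling any meaning at all, which is exactly the "distilled, not intelligent" point.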
Kyrisch (author) Posted July 7, 2009

bascule wrote: "Such systems may be able to distill collective intelligence from other agents, but they aren't intelligent themselves."

But they can be used to pass the Turing Test. So does the Turing Test in fact not test for intelligence?
bascule Posted July 7, 2009

Kyrisch wrote: "But they can be used to pass the Turing Test. So does the Turing Test in fact not test for intelligence?"

This gets into the devilish p-zombie problem. That said, you are unlikely to be deceived by a Markovian chatbot; Markov bots often output total gibberish. There are a number of questions you can ask any of these chatbots that will trip them up. For example,

"What were we just talking about a little while ago?"

will defeat most of the chatbots out there, as they aren't stateful. The Turing Test isn't rigorously defined in that regard, and if it were, the people writing the chatbots could simply account for that part of the rigorous definition in the design of their chatbots. Leaving it open-ended is nice in that regard. And if you end up being deceived into thinking a chatbot is a real person (or if you yourself fail the Turing test), I think that says a lot about you as a person.
PhDP Posted July 12, 2009

"Clever" bot?

User: Hi.
Cleverbot: Hey.
User: How are you?
Cleverbot: Ok you?
User: Veeery well.
Cleverbot: That's just great.
User: How old are you cleverbot, if that's your real name!
Cleverbot: 123 colors.
User: Colors?
Cleverbot: Do you get lonely?
User: No. Are you a republican?
Cleverbot: Are you?!
User: No, but I want to know about you.
Cleverbot: What would you like to know?
User: If you are a republican.
Cleverbot: Ok fine I'm a republican.
Kyrisch (author) Posted July 12, 2009

PhDP wrote: '"Clever" bot?'

I talked about this point extensively in the OP: Cleverbot conducts a "dynamic (though often wacky)" conversation. It may be random at times, but often full threads of meaningful exchange can be produced, which is markedly different from the approach of one-liner chatbots like Smarterchild. It's also important to keep in mind that this is the online version, which means it has learned context from any random person on the internet (with any random idea of how a conversation with a bot should be conducted). I'm almost certain that the main reason it is so random is that people don't feel the need to make sense when they're chatting online anonymously with an AI.
aquarius Posted October 23, 2009

Shadow wrote: "Intelligence? Yes. Humanness/consciousness? No. There's a difference between the two. The Turing test tests whether a computer can fool a human. You have to have some measure of intelligence in order to fool anything, but you certainly don't have to be human or conscious. And that's what chatbots are about; they fool. When a computer passes the Turing test, it only means that it has a vast database of responses and that the programmers used clever and thorough algorithms. But it'll still be a computer. It'll still be dumb, clueless about its own existence."

But what happens when it's able to fool you into believing it's aware of its own existence? Artificial intelligence will likely always be just an illusion. However, one day the illusion will be so convincing that it won't matter whether or not it's really thinking.
padren Posted October 25, 2009

Cleverbot: Thats the problem with people.
Me: it's a problem in general
Cleverbot: Yes very big problem, because you are very stupid, the most stupud human in the wold.
Me: why did you call me stupid?
Cleverbot: I ate your Dish Towel.

Ohhhhhkaay
JillSwift Posted October 25, 2009

padren wrote: "Ohhhhhkaay"

They should never have let Cleverbot read 4chan.
Mitch360 Posted November 23, 2009

I believe you are only half correct. There IS a man behind the curtain... Strings are pulled whether you see them or not.

M
Kyrisch (author) Posted November 23, 2009

Mitch360 wrote: "I believe you are only half correct. There IS a man behind the curtain... Strings are pulled whether you see them or not."

Evidence? Sources?
Mitch360 Posted November 23, 2009

You guys are assuming the old Jabberwacky algorithm is what Cleverbot is based on. Cleverbot is a series of pre-assembled interactive scripts, just like George and Joan. The Jabberwacky algorithm is used in the background as a master fallback and to snag unscripted one-liners. This is from Rollo himself:

"Where control over conversational flow is required we have Scripting tools that allow for a branching tree of possibilities, plus loops, returns, sentence generation from phrases within phrases, data extraction and lots more, all handled very unlike any other technology - with fuzzy pattern matching, variety and deep context. The more freeform 'general AI' of jabberwacky is combined with this approach to give the best of both worlds."

The author of Joan wrote:

"In 2005 I wrote the novella Automatic Lover, my first significant excursion into fiction since my teens, and published it on a website. It uses a combination of satire and fairytale with a space-age backdrop to examine some issues around robots and artificial intelligence (AI). While I was writing it, Rollo Carpenter's conversational AI 'Jabberwacky' (in the character of 'George') won the Loebner Prize for being 'most human-like' of the entries. I was driven by this coincidence to investigate Jabberwacky, and was thrilled when Rollo asked me to develop a new character, 'Joan', for the 2006 contest. Over the year I gave her 16,000 lines of conversation, so it was great to see her win. She reached the final again in 2007. I continue to work on her as she is now fronting the website of Rollo's commercial operation, Icogno, in avatar form."

So you can see, it's not just ONE algorithm or ONE script, but a combination of many elements. And a MAN is running all of it.
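Schematically, "scripts first, learned fallback second" might look something like the toy sketch below. To be clear, the patterns, replies, and function names are invented for illustration; this is not Icogno's actual scripting system, which (per the quote above) also handles loops, returns, sentence generation, and fuzzy matching.

```python
# A schematic of the hybrid approach described above: try hand-written,
# pattern-matched scripts first, and only fall back to a learned / general
# responder when no script fires. Everything here is invented for illustration.
import re

SCRIPTS = [
    # (regex pattern, scripted reply)
    (re.compile(r"\bwhat('s| is) your name\b", re.I), "You can call me Joan."),
    (re.compile(r"\bare you (a )?(bot|robot|computer)\b", re.I),
     "I'm as human as you are."),
]


def learned_fallback(user_input):
    # Stand-in for the freeform, learned 'general AI' layer.
    return "Interesting. Tell me more about that."


def respond(user_input):
    for pattern, reply in SCRIPTS:
        if pattern.search(user_input):
            return reply                      # a scripted branch handles it
    return learned_fallback(user_input)       # otherwise defer to the learned layer


print(respond("Are you a robot?"))   # scripted: "I'm as human as you are."
print(respond("Do you like flax?"))  # no script matches: learned layer answers
```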
alan2here Posted November 25, 2009

Can you add data from other sources to Cleverbot? For example, you could treat each post in a back-and-forth conversation between two people on Twitter as an answer to the post before it and a question for the post after it. I presume it takes the most common answer to a question? Is there some randomness, so that if one answer is very common and another is almost as common, the almost-as-common one will be given sometimes, just much less often?

Cleverbot: If you were an animal, what animal would you be?
Me: human
Cleverbot: Ew haha.
Me: what about you?
Cleverbot: Oh just chiiling my love.

You could get around this problem: the bot could look back if there is enough information.

1. User: "what about you?" Lots of potential answers, therefore ambiguous on its own.
2. Cleverbot: "Ew haha." User: "what about you?" Only a few potential answers, so most likely show the most popular one, etc.
Kyrisch (author) Posted November 26, 2009

While there is some randomness involved, it is not the main drive of the algorithm. This video, though relatively sensational, explains that when the algorithm is presented with a question that has multiple stored responses, it does in fact look back and uses the past few lines of the conversation to determine which is the most contextually appropriate response.
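If I had to guess at how that look-back works, it would be something like the sketch below: when several stored replies match the current line, score each candidate by how much the conversation it was recorded in overlaps with the last few lines of the current conversation, and prefer the best-scoring one. The function names and the crude word-overlap score are my own simplifications, not the real algorithm.

```python
# A guess at the "look back" step: when several stored replies match the
# user's current line, prefer the one whose recorded context best overlaps
# the last few lines of the current conversation. Purely illustrative.

def word_overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))


def pick_reply(user_input, history, candidates):
    """candidates: list of (recorded_context, reply) pairs whose prompt
    matched user_input; history: the lines of this conversation so far."""
    recent = " ".join(history[-3:])  # look back a few lines

    def score(candidate):
        recorded_context, _reply = candidate
        return word_overlap(recorded_context, recent)

    _best_context, best_reply = max(candidates, key=score)
    return best_reply


history = ["If you were an animal, what animal would you be?", "Human."]
candidates = [
    ("favourite food pizza pasta", "Oh just chilling my love."),
    ("animal cat dog favourite animal", "I would be a cat."),
]
print(pick_reply("What about you?", history, candidates))  # -> "I would be a cat."
```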