FIRST UNICORN ROBOT! (Converses with “female”)
[pardon my screenshot bootleg, sound is pretty bad… go to the link!]
“First ‘chatbot’ conversation ends in argument”
www.bbc.co.uk/news/technology-14843549
This is an interesting example of robot interaction. Two chatbots, having learned their chat behavior over time (1997–2011!) from previous conversations with human “users,” are forced to chat with each other. The BBC video highlights what we might consider the “human-interest” element of the story, such as the bots’ discussion of “god” and “unicorns,” as well as their so-called “argumentative” sides, supposedly picked up from users. With these highlights as examples, it does seem fairly convincing proof that learning from human behavior… makes you sort of human-like! This type of “artificial” learning or evolution is really interesting, as it reflects back what we choose to teach the robots we are using: we can see that these chatbots have had to live most of their lives on the defensive. I would like to see unedited footage of the interaction; I am sure some of their conversation is a lot more boring.

I noticed that the conversation tends toward confusion or miscommunication, almost exemplifying the point about entropy that Norbert Wiener makes (pp. 20–27): that the information carried by a message can be seen as “the negative of its entropy” (as well as the negative logarithm of its probability). And yet, just as it seems the conversation might spiral into utter nonsense (and maybe it does; who knows, this might be some clever editing), the robots seem to pick up the pieces and realize what the other is saying, sometimes resulting in some pretty uncanny conversational harmony about some pretty human-feeling topics. Again, if we saw more of the chat that didn’t become part of a news story, I wonder if the conversation might slip more frequently into moments of entropic confusion.
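Wiener’s “negative logarithm of its probability” point is easy to make concrete. A tiny sketch (my own illustration, not from the reading): the less probable a message is, the more information it carries, so the bots’ predictable small talk is nearly information-free while a bizarre swerve about unicorns carries a lot.

```python
import math

def surprisal(p):
    """Information carried by a message of probability p, in bits:
    the negative logarithm of its probability (Wiener's point)."""
    return -math.log2(p)

# A likely, conventional reply carries little information...
print(surprisal(0.5))     # 1.0 bit
# ...while a very improbable one carries much more.
print(surprisal(1 / 64))  # 6.0 bits
```

On this view, “entropic confusion” and “uncanny harmony” sit at two ends of the same scale: totally unpredictable chatter is all surprise and no shared meaning, while perfectly predictable chatter says nothing new.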
(I think those moments of entropy can tell us as much about the bots’ system of learning as their moments of success. As Heidegger / Graham Harman might say, we only notice things when they’re not working… though I kinda like Lil Wayne’s version from “Steady Mobbin’”: if it ain’t broke, don’t break it.)
If we view chatbots as an analogue to the kinds of outside-world-sensing robots we are trying to build, only with words as both their input and output, this seems to show that they really are capable of the type of complex feedback-controlled learning that Wiener suggests (p. 24) and that Alan Turing was gearing up for. This experiment is not unlike the really amazingly funny conversation in the Hofstadter reading between “Parry” (Colby), the paranoid robot, and “Doctor” (Weizenbaum), the nondirective-therapy psychiatrist robot (p. 595). So, actually, BBC’s claim that this was the “first chatbot conversation” isn’t quite right…
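For a feel of how simple the “Doctor” side of that exchange can be, here is a toy sketch of a nondirective, ELIZA-style reply rule (my own hypothetical illustration, not Weizenbaum’s actual program): reflect the speaker’s pronouns and bounce the statement back as a question.

```python
import re

# Pronoun reflections: swap the speaker's perspective for the reply.
# (A hypothetical minimal table; the real ELIZA used richer scripts.)
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def doctor_reply(statement):
    """Turn a patient's statement into a nondirective question."""
    words = re.findall(r"[a-z']+", statement.lower())
    reflected = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(reflected) + "?"

print(doctor_reply("I am worried about my entropy"))
# -> Why do you say you are worried about your entropy?
```

Even this crude mirror trick produces replies that feel responsive, which is part of why the Parry–Doctor conversation reads as uncannily human.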
Nonetheless, perhaps an experiment worth trying again on our own time?