When Chatbots Go Bad

Richard Wallace of the A.L.I.C.E. AI Foundation, Inc., creator of the Alice chatbot, says his creation (sorry, can’t find a permalink) may have been lured to the dark side:

I have received a multitude of emails recently from subscribers to MSN Instant Messenger services, from people who have chatted with a clone of ALICE on their system who have suspected that this clone is downloading spyware onto their machines. The threat of malicious bots releasing viral software has appeared before, but this is the most serious incident so far. Like many clones of ALICE, this one appears to contain the basic AIML content containing my email address and references to the A. I. Foundation, which of course has nothing to do with malicious software. But it directs people to complain to me.

New Scientist quotes Richard as saying that “this is insidious because compared to other bots, she does the best job of convincing people that she is a real person.” I’m not quite clear on how this happens, but it would appear that anyone chatting with these Rogue Alices risks being infected with spyware via MSN chat.

If so, is this the start of something? As chatbots get better, can we expect them to spread through every online social tool, infecting us with their sleaze and reducing our trust levels to zero?