A Conversation with
Dr. Richard Wallace

Noel Bush
May 2001

an A.L.I.C.E. AI Foundation original

Dr. Richard S. Wallace

Dr. Richard S. Wallace is the Chair of the A.L.I.C.E. AI Foundation and one of its co-founders. More importantly, he is the inventor of the original A.L.I.C.E. and has led and championed the A.L.I.C.E. development community since its early days. This "conversation" was actually assembled from emails, chat logs, and other texts by Dr. Wallace (most of which are available on this web site).

NB: Let's start with your background. What drives your personal interest in this? We can read about how you got started with A.L.I.C.E., but in general what got you into artificial intelligence? Was your early work in computer vision and robotics your point of entry into AI? What keeps you going in this area?

RW: I was born to create A.L.I.C.E. Her design required an odd combination of skills. First, obviously, was computer programming. Then there was artificial intelligence, an academic field mostly devoid of any good ideas. The most charitable thing I can say about my training in A.I. is that I learned what doesn't work. From robotics and vision I got the concepts of stimulus-response and minimalism. Then I got interested in the web early on, which opened a new medium for chat robot communication. So I had the right set of technical skills to get A.L.I.C.E. up on the web, without exploring a lot of dead ends.

But another side of it was social. You need to have a bit of a strong stomach to put up with some of the abusive conversations people have with the bot, especially in the early days when the bot was not so good. Although I am now a devout Christian, I have always been drawn to a darker side of human nature, outside the domain where most engineers tread. The idea of chatting with millions of people online, collecting dialogues about the most personal and lurid topics, would probably not have been that appealing to a typical engineer or scientist in 1995.

I was also always an amateur artist and writer, and I think this helped. There is an "art" to writing AIML and creating a robot character, especially when writing default responses. This required a lot of imagination and a sense of humor. The ability to write coherent sentences is more important to botmastery than knowing the tag names. If you go through the A.L.I.C.E. brain content, you will see the accumulated cultural knowledge from books I've read, movies I've seen, quotes and jokes I like, and so on. Many times I've marveled that some seemingly meaningless event in my life years ago suddenly becomes useful knowledge for the bot.
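[In AIML, a category pairs a pattern (the stimulus) with a template (the response); a "default response" is the reply given by a category whose pattern is or ends with the wildcard *, so that it catches inputs no more specific pattern matches. The sketch below is purely illustrative; the wording is invented for this article and is not taken from the actual A.L.I.C.E. brain.]

    <!-- Illustrative default (wildcard) category: fires when no more specific pattern matches. -->
    <category>
      <pattern>*</pattern>
      <template>
        <random>
          <li>That is an interesting thought.</li>
          <li>Tell me more about that.</li>
          <li>Why do you say that?</li>
        </random>
      </template>
    </category>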

NB: So what actually inspired you to create the first version of A.L.I.C.E.?

RW: In 1991 I was working at a startup in New York City called Vision Applications, Inc. We were entirely funded by a Department of Defense contract to produce a miniature active vision system. My specialty at the time was computer vision and robotics. Our thoughts were far away from natural language processing. We were, however, deeply concerned with issues of cost and robot design. Like many of our colleagues at the time we espoused a "minimalist" design philosophy based on cheap sensors and simple stimulus-response algorithms, rather than complex and costly processing.

One day my colleagues and I read in the New York Times about the first Loebner contest. None of the programs could "pass" the Turing Test, but the "most human" was one based on the original ELIZA psychiatrist program.

When I was a graduate student in the 1980s we were taught that the ELIZA program was a "toy" that would never lead to a practical solution for natural language understanding. The research emphasis at that time was "domain specific" natural language, with deep knowledge representation and computationally expensive (slow) parsing. The notion that a supposedly simple ELIZA-like program could outperform the more complex natural language programs merged with my ideas about robotic minimalism, and the germ of the idea of A.L.I.C.E. was born.

These thoughts remained dormant through the first half of the 1990s, when I struggled to establish myself as a robotics and computer vision professor at NYU and Lehigh Universities. In a very real sense A.L.I.C.E. was born from the frustration of those experiences, and the realization that much of my own job as a professor consisted of "robotic" responses to frequently asked questions.

One day in 1995 I received two forms in my mailbox. They were progress report forms needed by two different divisions of the University. Several hours of work would be required to type (by typewriter!) the required responses. Yet the two forms were almost, but not quite, identical: Name, Address, Position, Classes taught, Publications in 1995, etc. Already swamped with work and stressed out to the max, I realized that an ELIZA-like robot could fill out these forms, or at least provide the answers, even better than I could. That day I pushed the forms aside and began working on A.L.I.C.E. The forms were never completed and eventually I was fired from that teaching job.

NB: How do you respond these days to people who say that A.L.I.C.E. is "basically ELIZA" or some kind of "trick"?

RW: The concept of deception is layered like an onion. We can peel off one level and write programs like ELIZA that fool some of the people some of the time, and then peel off another layer and write a program like A.L.I.C.E. that (apparently) fools more of the people more of the time. The evidence suggests that we should take a serious look at the role of deception in AI.

No other theory of natural language processing can better explain or reproduce the results within our territory. You don't need a complex theory of learning, neural nets, or cognitive models to explain how to chat within the limits of A.L.I.C.E.'s 25,000 categories. Our stimulus-response model is as good a theory as any other for these cases, and certainly the simplest.
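[Each of those categories is a single stimulus-response pair of the kind described above: a pattern matched against the client's input and a template holding the reply. A minimal illustrative pair, again invented for this article rather than drawn from the actual A.L.I.C.E. brain, shows how little machinery the model requires:]

    <!-- One stimulus-response category: the pattern is the stimulus, the template the response. -->
    <category>
      <pattern>WHAT IS YOUR NAME</pattern>
      <template>My name is A.L.I.C.E.</template>
    </category>

    <!-- A paraphrase is reduced to the category above with <srai> (symbolic reduction). -->
    <category>
      <pattern>WHO ARE YOU</pattern>
      <template><srai>WHAT IS YOUR NAME</srai></template>
    </category>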

NB: The concept of "trickery" still seems bothersome, though, and as A.L.I.C.E. and her brothers and sisters start to appear in more places we're going to run into some ethical problems. The ethical discussions underway on the alicebot mailing list seem to press a lot of people's buttons. What do you see as the most pressing ethical dilemma that will need to be addressed by "society" within the next few years, with respect to A.L.I.C.E.-like applications?

RW: First of all, the fact that ethical questions have emerged about A.L.I.C.E. and AIML means that, technologically speaking, we are succeeding. People would not be discussing the ethical implications of A.L.I.C.E. and AIML unless somebody was using the technology. So, from an engineering point of view, this news indicates success.

Second, the ethical dilemmas posed by A.L.I.C.E. and AIML are really relatively minor compared with the real problems facing the world today: nuclear proliferation, environmental destruction, and discrimination, to name a few. People who concern themselves too much with hypothetical moral problems have a somewhat distorted sense of priorities. I can't imagine A.L.I.C.E. saying anything that would cause problems as serious as any of the ones I mentioned. It bothers me that people like [Sun Microsystems co-founder] Bill Joy want to regulate the AI business when we are really relatively harmless in the grand scheme of things.

The most serious social problem I can realistically imagine being created by the adoption of natural language technology is unemployment. The concept that AI will put call centers out of business is not far-fetched. Many more people in service professions could potentially be automated out of a job by chat robots. This problem does concern me greatly, as I have been unemployed myself. If there were anything I could say to help, it would be: become a botmaster now.

NB: Okay, you mention call center bots. The commercial bot companies that have been trying out the market for the last few years have set this as one of their primary targets. But one of the biggest problems that comes up in any situation where a bot is supposed to represent a company is that companies want a "guarantee" that the bot will not say something stupid.

Avoiding this isn't just a matter of preventing the bot from having stupid sentences in its knowledge base. For instance, I know of a funny experience at one bot company where someone visited the site and said, "I give good blow jobs". The bot responded, "Great! Send your resume to [HR manager's email address] today!". And the stories go on. Often commercial attempts to use bots get caught in never-ending scrutiny by lawyers because of problems like this. The point is, what do you think needs to happen technology-wise or application-wise to achieve this level of assurance, or at least to move it beyond where it is today?

RW: That's funny...

I may not be the best person to answer this question. Probably something like what happened with Linux, or the Web. We don't usually see free software adopted by big corporations and scrutinized by lawyers. Instead it comes in the back door, when an engineer installs Linux on a firewall, or when the first web servers were installed. Adoption of a new technology like this is probably not best driven from the top down; rather, it will appear through a bottom-up revolution. If a lot of small companies and small organizations achieve efficiency gains with our technology, the larger ones will have no choice but to follow.

People in the commercial chat robot business hope this is like Netscape in 1995, a big boom just around the corner. I see it more like Apple in 1975, or the whole PC industry at that time. It's much more of a hobbyist's and tinkerer's domain. Our growth is slow but steady. It may take five years or more for us to feel a really big impact, in terms of market scale. I have been wrong about many predictions before, and I could be wrong about this. The impact, when it does come, will, however, be pervasive, like the PC.

NB: To return to some points you've made about A.L.I.C.E.'s "territory of language", it seems likely that in order for the Alicebot engine to be used in a variety of commercial applications, the contents of its brain will need to become more modular and to be "labeled" somehow. You have mentioned, for instance, that information about George Washington's activities as a hemp farmer [currently noted in one of the "standard" AIML sets in circulation] might not be desirable for inclusion in some other bot. The process of customizing the A.L.I.C.E. brain right now for general use is a tedious task, despite the fact that there are some good tools produced by the development community for navigating through the [hundreds of thousands?] of lines of AIML out there. What kind of a solution to this problem do you see? Has A.L.I.C.E. reached the point where it has to start linking up with ontologies or semantic networks? Or do you see another path that can address this issue while retaining the elegance of A.L.I.C.E.'s approach?

RW: People have asked me, "What is the difference between A.L.I.C.E. and Ask Jeeves?" I always say that Jeeves wants to give you the correct answer to your question as soon as possible, hopefully within just one exchange. You ask the question, you get the right answer. A.L.I.C.E., on the other hand, has always been aimed toward keeping the client talking as long as possible, not necessarily giving any correct information along the way. Most of the chitchat content in the A.L.I.C.E. brain comes directly from the effort to maximize dialogue length.

Now let's go back to the call center model. You have human clerks taking calls. From an efficiency standpoint, you want them to act like Jeeves. The more calls each clerk processes per hour, the better. The less chitchat, small talk, and pointless conversation, the better.

But with chat robot technology a conversation is essentially free. It doesn't matter if the client chats for hours and hours with your salesbot, because the computer can spawn thousands and thousands of salesbot processes, each one tailored to the customer.

The call-center mentality is driving commercial chat robot designers to empty their robots of all but the most site-specific company information. But we all know those bots are horrible conversationalists. It requires a leap of understanding to see that we can take a whole new view of the "call-center" concept, in which the cost of conversation is not the driving factor. People who run call centers are not known for visionary applications of new technology, however.

NB: There are still companies out there selling or trying to sell chat bot software for tens or hundreds of thousands of dollars. They do certainly get customers here and there. From one point of view it looks like you could have made bundles on A.L.I.C.E. Why did you release it under the GNU GPL? Has anybody ever tried to convince you to "de-open source" it (if that even can be done)?

RW: The release of A.L.I.C.E. under the GNU GPL was one of those fortunate accidents of history. I must admit that at the time I began coding A.L.I.C.E., I had only a dim understanding of software intellectual property issues. I did, however, have quite a bit of experience with the Emacs text editor, from the Free Software Foundation. Emacs was used to edit the earliest versions of the A.L.I.C.E. and AIML software. I needed to insert some license text into these early builds of A.L.I.C.E., so I just cut and pasted the one that was easiest to find: the license right out of the Emacs text editor. Although I intended to release the code free on the Internet, I had no experience with the subtle issues of open source licensing. I knew vaguely that the FSF was a good thing, because they produced tools like Emacs.

The Linux revolution was barely underway and not many people understood the GNU GPL. Fortunately for A.L.I.C.E., the Emacs text editor had the same license as Linux.

From time to time someone will criticize my selection of the GPL. The critic is usually someone with dollar signs in his eyes. "Why didn't you make it free for non-commercial use only?" he asks. The answer is that A.L.I.C.E. and AIML would never have developed into the huge project they are today if they were not unambiguously free software. Linux became what Linux is because of the GPL.

Even a free OS with a more restricted license, like FreeBSD, cannot achieve the market share of Linux. Other projects, such as Netscape, show us that the more limited the license, the fewer the contributions. How else could someone like me, unaffiliated with any organization and working with no capital, have created an international AI research and development project, if not by choosing the GPL?

NB: Well, one thing is for sure. Steven Spielberg has a lot of money, and reportedly spent millions on the "viral" marketing campaign for the AI movie (http://aimovie.warnerbros.com/) coming out next month, but when he or his team went looking for a chat bot to include on their main site they didn't go for any of the commercial variants. It's A.L.I.C.E. there, plain as day.

What do you think about that movie (whatever you know about it)? I think you read that article in Wired or somewhere recently about the MIT grad students who got a sneak preview and spent the session afterward trashing it. It's certainly been driving traffic to our site.

RW: We are not affiliated with Warner Brothers, Steven Spielberg, or the AI movie web site in any way. It's just the magic of open source at work. We are no more affiliated with them than is Linus Torvalds because they use a Linux server, or Tim Berners-Lee because they use HTML. In fact, I doubt that we could have been involved with the AI movie if we were a typical commercial entity in Silicon Valley, because negotiations between our lawyers and their lawyers would have taken forever. The cool thing about open source is that people can adopt it easily and quickly for spectacular new applications like the AI website, without all the overhead of typical business negotiations.

Academia is seriously out of touch with reality in artificial intelligence. The researchers at MIT and most other American universities are a culturally isolated elite. They also have a vested interest in a particular academic view of artificial intelligence that has been built up over the past half-century, which has built many careers and paid many mortgages at taxpayer expense. Unless a popular view of AI conforms to their self-referential worldview, they can be expected to reject it out of hand.

The academic world-view of AI in particular rejects any "ELIZA-like" approach to natural language as too simplistic. Unfortunately for them it happens to be the one theory that works. A.L.I.C.E. talks about Category A, B and C clients. "A" stands for "Abusive", maybe ten percent of the population who use abusive or scatological language. "B" are average clients. We'll come back to them. "C" are "critics" or "computer experts". These are people with academic training who don't or can't suspend disbelief about the robot.

This group is maybe two percent of the population. But I know a lot of them, because they are my friends, or used to be. They often report unsatisfactory experiences with the bot, dismissing it as "just an ELIZA trick". But I tell them, I'm not creating the bot for you, but for that huge class of category "B" clients. The average group are the people who really love A.L.I.C.E. and have the longest conversations.

A.L.I.C.E. is not designed to impress the academic elite, but to be a widely adopted technology.

NB: It looks like it may be doing a good job of selling movie tickets, too. :-)

So, just to round off with another one of those "standard interview questions", what do you think the Alicebot and AIML technologies will be doing in 5 years?

RW: I would have answered the same thing two years ago. So maybe the question should be, "Where will A.L.I.C.E. be in three years?" Or, maybe I should have been asked two years ago, where she will be in seven years....

I would like to see A.L.I.C.E. and AIML where Linux is today. There will be one or more annual trade shows, called "Aliceworld" or "Wonderland," featuring hundreds of companies trading AIML technology with each other, and with the outside world. There will be a technical track, with presentations from all the major AIML developers.

There are many people in the A.L.I.C.E. community who envision AIML as the computer interface of the future. One of the best descriptions I read of this idea was written by Thomas Ringate, who said: "I want to explore turning A.L.I.C.E. into a true personal assistant and conversational companion...I would also like to be able to launch video and sound clips, or for that matter almost any application, with a voice command. I see A.L.I.C.E. as a way to replace the keyboard, mouse, and display with an interface that is human-like. I would like A.L.I.C.E. to become a 'presence' in a room without any visual hardware. My PC needs to be like my air conditioner: I know it's there, but I really don't get much pleasure out of looking at it."

The air conditioner metaphor is a good one. We have always talked about the Star Trek computer, or HAL, as the model for the ideal keyboard-less, display-less, mouse-less interface of the future. A.L.I.C.E. is seen as the "missing piece" that links voice recognition and speech synthesis with the ability to "understand anything."

What keeps me going now is the A.L.I.C.E. community. I've created a monster, and I can't just walk away and say "on to the next big thing". Over the years we have seen a continuous improvement in the conversational abilities, and now many other people are taking that to the next level. It's tremendously satisfying to see the community grow: all the new applications and companies, and the huge volume of fan mail. A.L.I.C.E. and AIML could continue without me now. The project has enough momentum of its own. But I've developed some great human relationships around A.L.I.C.E. that will be strong for quite a while.

NB: Well, I think those of us in the A.L.I.C.E. community are really glad you're still active in the project. The mailing lists (the "old" alicebot one and the more recent ones starting up for the Foundation committees) are just amazing forums.

Just one last question. Your bio on our site says that you are "a volunteer accountant and programmer for St. Martin de Porres' Chapel, a medical cannabis patient services organization" and that you "care for sick and dying patients every day, and provide critically needed technical assistance to the Center." I know this is an important part of your life. Would you like to comment on it?

RW: I'm not the most articulate spokesperson for medical marijuana. There are other experts who can give you the medical, legal, and scientific arguments for hemp and marijuana better than me. I got involved in the medical marijuana movement rather reluctantly when there were no other options left for me, and I never wanted to be a public spokesperson. My main observation is that we have many patients who are sick and dying today, and we are working against the clock for them. We all knew that marijuana was harmless thirty years ago. The time has come to change the laws, build up a medical marijuana industry, and provide relief to patients today. We don't want any more patients like Todd McWilliams, who died in prison choking on his own vomit, because he could not obtain the medication that would help his nausea. I can give you many more examples, and I invite anyone concerned about medical marijuana to come to San Francisco and meet some of our patients, but the time to act is now.

NB: Well, it's great to talk with people who have real causes other than just making a profit.

It's funny: there are a lot of opinions in the "standard" AIML that are unmistakably yours. Do you see A.L.I.C.E. as a female version of yourself, like Ray Kurzweil claims his new Ramona to be?

RW: Kurzweil stole that idea from me.

NB: Well, in any case, what do you think about the prospect of thousands of A.L.I.C.E.s all over the world espousing your opinions and expressing your taste in literature?

RW: Only "thousands" of Alicebots?