PNAMBIC

Dr. Richard S. Wallace

PNAMBIC /p*-nam'bik/

[Acronym from the scene in the film version of "The Wizard of Oz" in which the true nature of the wizard is first discovered: "Pay no attention to the man behind the curtain."] 1. A stage of development of a process or function that, owing to incomplete implementation or to the complexity of the system, requires human interaction to simulate or replace some or all of the actions, inputs, or outputs of the process or function. 2. Of or pertaining to a process or function whose apparent operations are wholly or partially falsified. 3. Requiring prestidigitization.

The ultimate pnambic product was "Dan Bricklin's Demo", a program which supported flashy user-interface design prototyping. There is a related maxim among hackers: "Any sufficiently advanced technology is indistinguishable from a rigged demo." See magic, sense 1, for illumination of this point.

The Jargon File, http://www.tuxedo.org

ALICE was not her original name. She was first called PNAMBIC, an homage to the role of deception in the history of artificial intelligence. The machine first used to host PNAMBIC, however, was already named "alice", so clients began referring to her as "Alice" from the beginning. Later we chose the "retronym" Artificial Linguistic Internet Computer Entity to fit the new name.

The element of trickery or deception has dogged the history of AI since Baron von Kempelen toured the capitals of Europe with his "mechanical" chess-playing automaton, actually powered by a small but gifted person hidden inside the contraption. Scandalous stories of bogus AI demos and remote-controlled "autonomous" robots tarnish the reputations of some of the greatest research institutions. One natural language researcher gave a demo in the 1980s to a group of Texas bankers, who were surprised to find the robot consistently answering the next question about to be asked.

Cutaway of von Kempelen's "Chess Playing Automaton" showing the human operator inside

ELIZA has also been called a hoax or deception. But the concept of deception is layered like an onion. We can peel off one layer and write programs like ELIZA that fool some of the people some of the time, and then peel off another layer and write a program like A.L.I.C.E. that (apparently) fools more of the people more of the time. The evidence suggests that we should take a serious look at the role of deception in AI.

Turing himself perceived this onion-like layering of deception when he wrote: "Whenever one of these [AI] machines is asked the appropriate critical question and gives a definite answer, we know that this answer must be wrong, and this gives us a certain feeling of superiority. Is this feeling illusory? It is no doubt quite genuine, but I do not think too much importance should be attached to it. We too often give wrong answers to questions ourselves to be justified in being very pleased at such evidence of fallibility on the part of the machines. Further, our superiority can only be felt on such an occasion in relation to the one machine over which we had achieved our petty triumph. There would be no question of triumphing simultaneously over all machines. In short then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on."

Like the terminal hypothesized in Turing's Imitation Game, or, to cite a more vivid literary illustration, the machine in Vernor Vinge's 1981 novella True Names, the ALICE program is the teletype that attempts to normalize the differences between human and machine intelligence. It is technically possible, though rarely done in practice, for the Botmaster to interrupt the conversation and talk live. Thus ALICE is designed to blur the client's sense of whether it is a person or a machine.

In the 1980s the philosopher John Searle posed a famous gedanken experiment known as the Chinese Room Paradox. The idea is simple. Suppose a person is sitting inside an enclosed room. The attendant receives slips of paper bearing messages in Chinese, but knows no Chinese. Instead, he has a manual or rulebook that says, "if you see squiggle squiggle, then draw this stroke and that stroke." When someone slips the operator a question in Chinese, he looks up the patterns in the rulebook and composes the answer.

To the Chinese speaker outside the room, passing messages to the hapless operator trapped inside, it appears that the person inside the room understands Chinese. To the Chinese speaker the replies make perfect sense, although the operator knows nothing of what he is writing.

Searle is pointing out that there is no real difference between "appearing to understand" and "really understanding" Chinese, or, by inference, any other natural language. Appearance, illusion, and deception are important components of chat robot development. Perhaps the AIML chat robots do not solve the Chinese Room Paradox, but they at least expose its frontier. An AIML program is just the sort of rulebook Searle envisioned. We might want to call the categories of ALICE "the Chinese Room Operator's Manual".
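
To make the analogy concrete, here is a minimal sketch of a single AIML category, the unit of such a rulebook: a pattern matched against the client's normalized, uppercase input, and a template giving the reply. The category, pattern, and template elements are standard AIML; this particular question and answer are invented for illustration.

    <category>
      <pattern>DO YOU UNDERSTAND CHINESE</pattern>
      <template>I match patterns and compose answers, like the operator in the room.</template>
    </category>

A collection of many thousands of such categories is, in effect, the operator's manual: the matching engine consults it and produces sensible replies without any understanding of what the symbols mean.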