October 14, 2003
Breaching the code zone
We've been talking lately about the similarities and differences between the organic, fallible human world and the cold, logical realm of the machine. We've also been discussing attempts to bridge the gap between the two spheres, usually in the form of Ullman or one of her fellow programmers tackling machine code or something similar. But I've come upon an emissary from the world of machines, a system with the basic goal of learning about human cognition by asking us questions. It's actually a pretty simple idea - the computer has to guess what word you're thinking of in 20 questions or less - but it fascinates me as a kind of inverse Turing Test. Significantly, the answers available to each question are not simply "yes" and "no", but include "probably", "depends" and "partly" among other options - much more complicated than the "either OK or Cancel" responses we're used to feeding into computers. Once the system has guessed your word, it then internalizes any incongruities between your answers and those it expected, in order to formulate its own complicated definitions of the English language. I have to wonder if a program like this might eventually be able to perfectly replicate human speech, mirroring every subtle nuance of our language learned from millions of interactions like this simple guessing game. Or maybe I'm just paranoid.
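The mechanism described above - graded answers, a best-match guess, then "internalizing incongruities" afterward - can be sketched in a few lines. This is purely a toy illustration under my own assumptions (the real 20Q engine's internals aren't public): each candidate word stores an expected answer value per question, candidates are scored by how closely the player's answers match, and after a game the stored values are nudged toward the player's answers.

```python
# Graded answers mapped to numbers, so "probably" and "partly"
# carry more information than a bare yes/no.
ANSWER_VALUES = {"no": 0.0, "partly": 0.25, "depends": 0.5,
                 "probably": 0.75, "yes": 1.0}

# Toy knowledge base: word -> {question: expected answer value}.
# These entries are illustrative, not real 20Q data.
KNOWLEDGE = {
    "mango":  {"Is it edible?": 1.0, "Is it an animal?": 0.0,
               "Is it orange inside?": 0.75},
    "cat":    {"Is it edible?": 0.0, "Is it an animal?": 1.0,
               "Does it have fangs?": 1.0},
    "bottle": {"Is it edible?": 0.0, "Is it an animal?": 0.0,
               "Is it man-made?": 1.0},
}

def score(word, answers):
    """Sum of squared disagreements between the player's answers and the
    stored expectations; lower means a better match. Unasked questions
    default to 0.5 ("depends")."""
    expected = KNOWLEDGE[word]
    return sum((ANSWER_VALUES[a] - expected.get(q, 0.5)) ** 2
               for q, a in answers.items())

def guess(answers):
    """Pick the candidate whose stored profile best matches the answers."""
    return min(KNOWLEDGE, key=lambda w: score(w, answers))

def learn(word, answers, rate=0.2):
    """After the game, nudge the stored expectations toward this player's
    answers - the 'internalizing incongruities' step."""
    profile = KNOWLEDGE.setdefault(word, {})
    for q, a in answers.items():
        old = profile.get(q, 0.5)
        profile[q] = old + rate * (ANSWER_VALUES[a] - old)

answers = {"Is it edible?": "yes", "Is it an animal?": "no",
           "Is it orange inside?": "probably"}
print(guess(answers))  # -> mango
learn("mango", answers)
```

One consequence of the learning step is visible right away: if enough players insist a cat has fangs, the stored expectation drifts toward "yes" - or away from it, which may explain the kind of inaccuracy described in the comments below.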
Anyway, try it out, it's fun (and a little scary). Here's the link: http://y.20q.net:8095/btest
Posted by cmeyer at October 14, 2003 01:09 AM
I went to the site in question and tried about five or six different words. Most of the time, it took the program 19 tries to guess the right answer. When I looked at the results of the game we played together, I couldn't help but notice the inaccuracies in the program's answers when they were incongruous with my own. I know a cat has fangs. The program said otherwise. I laughed at the computer screen and felt proud of myself. However, after that feeling faded, I realized that I was thinking of the program in terms of its computer-ness (sorry, but the word just sounds so right). It was as if a calculator had told me that 2+2=5. The fact that the program made an inexplicable error like that made me stop laughing and worry about the computers that will some day make up our ruling class.
Yeah, Chris, it is a little scary, I agree. How did it know I was thinking of a mango after such a seemingly simple round of unexceptional questions? Why didn't it guess orange, or papaya? Was it just because "mango" is the most likely guess, given the high number of people who have played the game thinking of mangos versus papayas? In the end, did it just come down to probability?
It's a great example of how computers are capable of "learning" things and then adjusting/adapting themselves to factor in the new knowledge they have gained.
The more I think about it, and compare it to how my own process of learning and understanding language seems to work, maybe we really are on the path to creating computers that "perfectly replicate human speech, mirroring every subtle nuance of our language learned from millions of interactions like this simple guessing game."
Sure, a computer could never "know" exactly what I was thinking when I used a given sentence. It would just have to choose to understand my sentence as the most probable intended meaning, given its understanding of human language and of how language is typically used, based on its own experiences.
But that is what we always do when we try to understand what someone else is saying, isn't it? (Or is it?)
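That "most probable intended meaning" idea can be made concrete with a toy sketch (my own illustration, with made-up counts, not anything the game actually does): keep a tally of what an ambiguous phrase has meant in past interactions, and interpret new uses as whichever meaning has been most frequent.

```python
from collections import Counter

# Toy record of past interactions: phrase -> how often each meaning
# was confirmed. The numbers are invented for illustration.
observed_meanings = {
    "bank": Counter({"financial institution": 120, "river edge": 30}),
    "bat":  Counter({"baseball bat": 55, "flying mammal": 40}),
}

def interpret(phrase):
    """Return the most frequently observed meaning, or None if unseen."""
    counts = observed_meanings.get(phrase)
    if not counts:
        return None
    meaning, _ = counts.most_common(1)[0]
    return meaning

def observe(phrase, meaning):
    """Learning step: each confirmed usage shifts future interpretations."""
    observed_meanings.setdefault(phrase, Counter())[meaning] += 1

print(interpret("bat"))  # -> baseball bat, under the toy counts
```

If enough new observations of "bank" meaning "river edge" came in, interpret() would start choosing that instead - which is exactly the kind of experience-driven guessing being described above.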
Very cool program, thanks for the link Chris.
I was thinking of a "bottle of whiskey" and it guessed "whiskey." Damn. Like Audre, I looked over the list of seemingly simple questions, and I couldn't quite distinguish what made the difference - why not a bottle of wine? A bottle of cola? Hmmm.