Thursday, July 3, 2008

What do I mean, "dumb is smarter"?

I previously mentioned Google's PageRank as an example of "dumb is smarter" -- doing better by not trying to understand anything deep or do anything clever. Some other examples:
  • On balance, chess programs that bash out large numbers of positions do better than ones that try to emulate human judgment (the first sketch after this list shows the brute-force idea).
  • Spam filtering is an interesting arms-race-in-progress, but the most effective technique I've run across is a simple whitelist. Bayesian filtering worked well for a while, and all it does is crunch word counts (the second sketch after this list shows the idea). Any filter based on rules is vulnerable to gaming once people figure out the rules.
  • One of the many long-raging unresolved debates in the financial world concerns whether you can do better by carefully picking which stocks you buy and sell and when, or whether you should just make regular purchases of an "index fund" that tracks, say, the Russell 3000 and forget about it. I'm not going to take sides on that one. Rather, the point is that it's a serious debate at all. (Ironically, one of the best-known proponents of the "dumb money is smarter" approach, Warren Buffett, is also considered one of the best stock pickers ever.)
  • Every so often, it seems, someone puts out a program that appears able to write free-form prose or even take part in a natural-language conversation with a person. Its understanding and grasp of humanity seem almost uncanny. Sooner or later, though, it comes out that the putative AI is really just doing some simple manipulation and the reader is assigning meaning to what closer inspection reveals to be gibberish. Everyone goes back to what they were doing, and real progress in natural language processing continues slowly and painstakingly. The classic examples are Eliza and Parry. Racter and Mark V. Shaney -- which did nothing deeper than walk a Markov chain over Usenet posts (the third sketch after this list) -- come to mind as well. These days, people write "chatterbots", some of which are one-trick ponies and some of which are considerably more sophisticated. [See this post on the other blog for a more recent example that grabbed headlines -- D.H. Dec 2018]
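
To make the chess point concrete, here's roughly what "bashing out positions" means: plain minimax search over every line of play to a fixed depth. This is only a sketch -- the Position class and its methods are hypothetical stand-ins, and real engines add alpha-beta pruning and carefully tuned evaluation on top.

    def minimax(position, depth, maximizing):
        """Score a position by trying every legal move `depth` plies deep."""
        if depth == 0 or position.is_game_over():
            return position.evaluate()  # e.g. a crude material count
        scores = [minimax(position.make_move(m), depth - 1, not maximizing)
                  for m in position.legal_moves()]
        return max(scores) if maximizing else min(scores)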
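
The Bayesian point is similar: a filter in this style never parses a sentence; it just compares word counts in spam and non-spam and adds up the odds. Here's a minimal sketch in Python -- the function names and the add-one smoothing are my own illustrative choices, not anyone's production filter.

    import math
    from collections import Counter

    def train(messages, labels):
        """Count words per class; `labels` are "spam" or "ham"."""
        counts = {"spam": Counter(), "ham": Counter()}
        priors = Counter(labels)
        for text, label in zip(messages, labels):
            counts[label].update(text.lower().split())
        return counts, priors

    def spam_score(text, counts, priors):
        """Log-odds that `text` is spam; positive means probably spam."""
        vocab = len(set(counts["spam"]) | set(counts["ham"]))
        score = math.log(priors["spam"] / priors["ham"])
        for w in text.lower().split():
            # add-one smoothing so unseen words don't zero out the product
            p_spam = (counts["spam"][w] + 1) / (sum(counts["spam"].values()) + vocab)
            p_ham = (counts["ham"][w] + 1) / (sum(counts["ham"].values()) + vocab)
            score += math.log(p_spam / p_ham)
        return score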
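
And Mark V. Shaney -- the name is a pun on "Markov chain" -- really was that simple: record which words follow which, then spit out random walks through the table. A minimal sketch (the order-2 prefix and function names are my choices for illustration):

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        """Map each `order`-word prefix to the words seen following it."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def babble(chain, length=50):
        """Generate text by sampling successors -- no grammar, no meaning."""
        prefix = random.choice(list(chain))
        out = list(prefix)
        for _ in range(length):
            successors = chain.get(tuple(out[-len(prefix):]))
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

The output scans locally -- every two-word window occurred somewhere in the source -- while meaning nothing globally, which is exactly why readers kept projecting intelligence onto it.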

I'm not particularly knowledgeable about AI research, but my understanding is that there is a basic division between "strong" and "weak" AI. Strong AI has the rather broad goal of being able to do anything a human mind can do, ideally better. The notion of strong AI can be traced back to the Turing test, though there is ongoing debate over how well the Turing test actually measures intelligence.

Weak AI aims to solve a particular engineering problem, say, beating a human master at chess or finding relevant web pages containing a particular set of terms. A weak AI does not need to claim to be conscious, or to have any particular understanding of the problem at hand. It just has to produce useful results. At which point we decide that it can't possibly have any real intelligence, since we understand what it's doing and how.

In terms of the challenges of the 20th century, "producing a strong AI" compares to "curing cancer", while producing a weak AI is more like sending a rocket to the moon.

When I say "dumb is smarter", I'm not saying that strong AI is a useless goal, only that it's a very difficult, long-range goal that to this day is not particularly well-defined (though it is better defined than it was forty years ago). As such, progress is more likely to come in small steps than in one great leap.

Like anything else, "dumb is smarter" can be taken too far. The best chess programs tend to incorporate the advice of human grandmasters. When Google answers "What is the time zone for Afghanistan?" with the time zone called out on its own line at the top of the results, it's clearly doing some sort of special-casing. The absolute dumbest approach is not always the absolute best, but the best trade-off often lies much closer to it than one might expect, and as a corollary, the dumbest approach is often a good place to start.
