Thursday, December 27, 2007

100 and (I hope) counting

According to the "Blog archive" heading, this will be my 100th post to this blog. Stephen Jay Gould took a similar opportunity to tell us, finally, about his field work on Bahamian land snails. I'm more with Eubie Blake, who celebrated a 100th birthday and who said "If I'd known I was going to live this long, I would have taken better care of myself."

I won't be writing about my equivalent of Bahamian land snails -- I wish I had something so interesting to draw on -- but about something more apropos of Eubie Blake. Blake, as it turns out, really only lived to be 96 (most of us should be so lucky), and that seems as good a point as any to pick up a thread that's been running through this blog more or less from the beginning: imperfection.

When electronic computers first entered the popular consciousness sometime after World War II, their defining property was perfection. If the hero needed the answer to an intractable problem, the computer was always there, ticking away impassively. On the darker side, the flawless, emotionless and relentless android, aware of its own perfection and our human inferiority, was a stock villain.

The computer was the ultimate in modernism. Its rise coincides, perhaps not coincidentally, with the shift from modernism to whatever we're in now, variously postmodernism or late modernism, depending on whether you want to emphasize change (how modern) or continuity.

The notion of the all-knowing perfect computer dissolves rapidly on contact with actual computers. One of my early experiences in computing was meeting my dad's friend Herb Harris, who ran a computing facility in the building. I vaguely recall watching cards being punched and read, but I definitely recall suggesting that you could use a computer to store everything in the encyclopedia (and therefore, all of human knowledge).

Herb loaned me a book I still have, somewhere, on programming the IBM 360. He also gently prodded me to consider what putting an encyclopedia in a computer would mean, particularly the question of how you would find the information once you got it there. To give you an idea of the hardware of the time, the book contained a recipe for doing decimal multiplication by use of a multiplication table you could read in from external storage. I concluded that the problem was harder than it looked, but still ought to be at least partially solvable, with somewhat better hardware. Maybe I'd get back to it later ...

Now we have vast collections of textual material available via computer, and we have at least one usable way of finding the information that's there. We even have encyclopedias on line. All of this information, together with its storage and retrieval, deals intimately in imperfection. Some examples:
  • Dangling links are explicitly allowed on the web. This is not an accident but a basic tenet of web architecture. Allowing links to point to nothing means that you don't have to build a whole site at once, or even know that it will ever get built. Among other things, dangling links are a key part of the wiki editing experience (not as much fun if you just want the information, though).
  • The underlying protocols the web is built on assume that messages routinely get dropped or duplicated in transit (TCP), that the information you are looking for may in fact be somewhere else (HTTP), or that the server you're ultimately trying to reach may be down (HTTP again). There's a small sketch of this after the list.
  • Documents are given logical addresses, not physical addresses, on the assumption that information may be physically moved, without notice, at any time. For that matter, computers themselves also generally go by logical names. There is no one perfect physical realization of the web.
  • The web inherently doesn't assume that any given document is the last word on a given subject. Search engines generally give you some idea of how well-connected a page is, but this can change over time and in any case it's only a hint. Anyone can comment on a page and incorporate that page by reference.
  • You can't take anything on the web at face value, or at least you shouldn't invest too much faith in a page without considering where it came from (which you don't always know) and how well it jibes with other sources of information. This sort of on-the-fly evaluation quickly becomes a reflex.
  • From a purely graphical point of view, there is no definitive format for a given web page. If you try to lock everything down to the last pixel it will generally look bad on displays you didn't have in mind. If you don't, it's up to the browser at the other end to decide what it looks like, and with CSS and other tools, the viewer can have almost unlimited leeway. Nothing is perfect for everyone, so we try to get close and allow for tweaks after the fact.
  • A key part of running a successful web site is managing details like backup, maintaining uptime in the face of hardware failures and (one hopes) dealing gracefully with large numbers of people pushing the limits of your bandwidth. This is hard enough that you generally want to farm it out.
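To make the protocol point a bit more concrete, here is a minimal sketch in Python (standard library only; the URL is just a placeholder) of how an HTTP client is expected to cope with all this imperfection: the page may have moved, the link may dangle, or the server may not answer at all.

```python
import urllib.request
import urllib.error

def fetch(url, timeout=5):
    """Try to fetch a URL, reporting rather than crashing on the usual failures."""
    try:
        # urlopen follows 3xx redirects automatically -- "the information
        # you are looking for may in fact be somewhere else."
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status, response.read()
    except urllib.error.HTTPError as e:
        # The server answered, but not with the page: 404 for a dangling
        # link, 503 for a server that's up but overloaded, and so on.
        return e.code, None
    except urllib.error.URLError:
        # No answer at all: the name doesn't resolve, the host is
        # unreachable, or the connection timed out.
        return None, None

# Placeholder URL, purely for illustration.
status, body = fetch("http://example.com/might-not-exist")
print("status:", status, "got body:", body is not None)
```

None of these cases is treated as exceptional in any dire sense; moved pages, missing pages and unreachable servers are just part of the normal flow of things.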
There are many more examples, and probably much better ones as well. The point here is that even when it looks like the system is working just fine, imperfection is everywhere. The web tolerates this rather than trying to stamp out every last flaw, and in some fundamental ways even builds on imperfection. The result is far more powerful and useful than a computer that never loses at chess or never makes an arithmetical error.


Postscript: Herb Harris is no longer with us, but the University of Kansas student computing lab bears his name [... or it did for a while.  Somewhere around 2010 the trail seems to go cold.  Now that everyone has a phone or laptop and storage is in the cloud, there's no longer such a need to go to a place in order to compute.  The building is still there, but it now houses "Information Technology".  Sic transit ...  -- D.H. Dec 2018]
