Monday, November 23, 2009

Rupert and the interwebs, again

If I understand recent reports correctly, Rupert Murdoch's latest attempt to wring money out of online content rests on a simple, intriguing concept: if you can't control what people are reading, control whether they can find it. So News Corp will be partnering with Microsoft and against Google by blocking access from Google's web crawlers and charging Microsoft for the privilege of indexing News Corp content on Bing. With a 10% or so share of search volume (to Google's 60%), Microsoft is eager to give people a reason to switch. News Corp gets paid, so their end is pretty easy to understand.
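The crawler-blocking half of this is mechanically trivial, by the way: a robots.txt file can tell Googlebot to stay out while leaving Bing's crawler free to index away. Here's a minimal sketch using Python's standard-library robotparser, with a hypothetical robots.txt standing in for whatever a News Corp site would actually serve:

    # Sketch of crawler-selective blocking via robots.txt.
    # The robots.txt lines below are hypothetical, not any real site's file.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.parse([
        "User-agent: Googlebot",   # shut Google's crawler out entirely
        "Disallow: /",
        "",
        "User-agent: bingbot",     # let Bing's crawler index everything
        "Disallow:",
    ])

    url = "http://online.wsj.com/some-article"  # arbitrary example URL
    print(rp.can_fetch("Googlebot", url))  # False: off-limits to Google
    print(rp.can_fetch("bingbot", url))    # True: fair game for Bing

Of course, robots.txt is purely advisory, which is exactly why the interesting questions here are the business and legal ones rather than the technical ones.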

On the face of it this seems like a pretty interesting test case of free vs. paid content, and the search angle is clever, but just how is this going to work? First, there's the pure business angle: if I want to read free content from the Wall Street Journal online, it's not exactly hard to find. If I just want to find out about XyzCorp, am I really going to notice that there's nothing there from the Journal? And if I am the sort to notice, it seems there's a pretty good chance I'm a Journal reader anyway.

For Microsoft, becoming known as the News Corp search engine could be a double-edged sword. It raises obvious issues of bias and could reignite the Microsoft-as-Evil-Empire fire which seemed to have died down of late (or maybe I'm just older now and have worked with enough perfectly reasonable Microsofties).

But more than that, there's the technical angle. I'm guessing it's going to take, oh, five minutes for some enterprising free-content advocate to put up a site in some News Corp-unfriendly jurisdiction that presents essentially the same profile to search engines as the Journal or whatever, with or without actually violating copyright laws. At which point, without any involvement from Google, News Corp's content is back on Google.

I'd think it would be difficult for Google to stop this sort of thing even if it wanted to, and I'm not convinced they'd want to. They could explicitly blacklist sites, I suppose, in the usual endless cat-and-mouse, but as for automatically figuring out whether a site was just a front for someone else's? Technically enforcing "you can read it but you can't search it" smells like another example of modern-day perpetual motion. Enter the lawyers.

One way or another, there will be lawsuits. My guess -- and I suppose this would be a good time to dust off the old "I am not a lawyer" disclaimer -- is that News Corp will try to assert some right to control searchability, perhaps drawing on existing case law involving reference works and such, but I doubt it will get very far. If it did, it would be game-changing, and not necessarily in a good way.
