Nicholas Carr writes the following in his Atlantic article titled "Is Google Making Us Stupid?":
Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. [Note: there is no citation for the Harvard Business Review article, nor can I find it online.]
The big surprise is that Google still uses the manually crafted formula for its search results. They haven't cut over to the machine-learned model yet. Peter suggests two reasons for this. The first is hubris: the human experts who created the algorithm believe they can do better than a machine-learned model. The second reason is more interesting. Google's search team worries that machine-learned models may be susceptible to catastrophic errors on searches that look very different from the training data. They believe the manually crafted model is less susceptible to such catastrophic errors on unforeseen query types.
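To make that worry concrete, here is a minimal sketch. It is not Google's ranking system: the single "query feature," the hand-crafted formula, and the degree-9 polynomial standing in for a machine-learned model are all invented for illustration. It just shows how a flexible learned model can fail badly on inputs far from its training data, while a simple manually crafted formula degrades predictably:

```python
import numpy as np

# Toy setup, invented for illustration: a single "query feature" in [0, 1]
# (say, a term-match score) and a noisy, roughly linear relevance label.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=200)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, size=200)

def handcrafted_score(x):
    # A manually crafted, monotone formula: predictable on any input.
    return 2.0 * x

# A flexible "machine-learned" stand-in: a degree-9 polynomial fit
# to the training data.
learned_score = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# In-distribution input: the two rankers agree closely.
print(f"x=0.5  handcrafted={handcrafted_score(0.5):.2f}  "
      f"learned={learned_score(0.5):.2f}")

# Out-of-distribution input, far outside the training range: the learned
# model can extrapolate wildly (a catastrophic error), while the
# hand-crafted formula degrades gracefully.
print(f"x=5.0  handcrafted={handcrafted_score(5.0):.2f}  "
      f"learned={learned_score(5.0):.2f}")
```

On the in-range input the two scores agree; on the far out-of-range input the polynomial extrapolates wildly, which is the flavor of catastrophic error on unforeseen query types that the search team reportedly worries about.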
Update: As John Battelle says, in response to Carr's piece:
I am "feeling" like I'm getting smarter.
Actually, I think John is wrong on this one: he mistakes quantity for quality. I also think he hasn't grokked the article. Battelle complains that Carr is afraid of thinking in 'different' ways, when in fact the article is very much about the inability to focus attention, given the randomization the net injects into our thinking. And any business that is monetized by frequency of visits must attempt to increase that frequency, an incentive that cuts against sustained attention.