A couple of posts provoke an interesting discussion: William Cohen points to the popularity-contest approach to ranking, which may have undesirable consequences, and The Measurement Standard provides an interesting angle on the fallibility of human judgments. In the area of sentiment analysis, one often hears skepticism about the quality of results (or even the possibility of automating the task at all). It is always informative to see how well humans do at these tasks (most inter-labeler agreement figures reported in the literature for sentiment are pretty poor). My feeling is that while an automated approach can never be error-free, the systematic nature of its errors leads to a more manageable result than the randomness of human error generated by poor methodology.
As for the automated ranking of web pages: the problem cited above exposes the frailty of addressing a content problem (finding a document whose text is appropriate) with an orthogonal structural solution. The structural solution (counting links and propagating the results) may do well in domains where it serves as a proxy for 'authority'; however, the ambiguity inherent in the structure cannot be resolved, leading to the kind of problem William cites. This is where solutions like Powerset come in.
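To make the "counting links and propagating results" idea concrete, here is a minimal sketch in the spirit of PageRank-style link analysis. The graph, damping factor, and iteration count are illustrative assumptions, not any search engine's actual implementation; the point is that the scores depend only on link structure, never on whether a page's text answers a query.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy link-based ranking sketch (PageRank-style).

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> score; scores sum to 1.
    """
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline share, the rest is
        # propagated along outgoing links.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:
                # Dangling page: spread its mass evenly (one common choice).
                for t in pages:
                    new_rank[t] += damping * rank[page] / n
        rank = new_rank
    return rank

# Hypothetical four-page web: c is the most linked-to page.
ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]})
```

Note that nothing in this computation inspects page content; that orthogonality is exactly the frailty discussed above.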
Note: I'm really happy to see that William is writing - grab his feed!