
December 10, 2007

Comments

Leon

I have to agree wholeheartedly with what you have said about human classification versus automated classification of sentiment.

In this field the human vs. automated debate comes up often. With the volumes of data involved, human classification can be a very expensive solution, and, as you point out, the accuracy of different labellers varies greatly.

Using our automated approach we achieve about 80% accuracy, which will improve with time as we retrain our machine learning algorithms. The incorrect classifications we see are generally due to sarcasm or misspellings.
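[Editor's note: for readers unfamiliar with this kind of system, a minimal sketch of the general technique, bag-of-words features feeding a Naive Bayes classifier. It assumes scikit-learn and made-up training examples; it is not the Sentiment Metrics pipeline, which is not described in detail here.]

```python
# A minimal sketch of an automated polarity classifier: bag-of-words features
# feeding Naive Bayes. The training examples are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "love this phone, great battery", "fantastic customer service",
    "really happy with the upgrade", "best purchase this year",
    "terrible battery life", "worst support I have ever had",
    "the update broke everything", "very disappointed with this brand",
]
train_labels = ["positive"] * 4 + ["negative"] * 4

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(train_texts, train_labels)

# Retraining on newly labelled data is simply another call to fit();
# sarcasm and misspellings are exactly the cases a model like this misreads.
print(model.predict(["happy with the battery", "support was terrible"]))
```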

We always let the end user view the classification and the underlying text too, so they can very quickly see the raw data. This lets us present a good high-level view of overall sentiment towards their brand while also allowing clients to drill into individual comments.

We don't feel that the accuracy of this automated approach is much different from a human-based approach, and the cost savings allow us to bring www.sentimentmetrics.com to market at a price well below our competitors'.

Thanks

Leon

Tom O'Brien

Hi Matthew:

Great summary - and I couldn't agree more. Accurate, useful and meaningful sentiment scoring is hard - and complex.

Doing a good job of automating it requires a hybrid approach - man + machine.

More here:

http://humanvoice.wordpress.com/2007/12/10/sentiment-detection-mining/

TO'B

KD Paine

If Microsoft doesn't trust computers to read for tone and sentiment, why should anyone else? They require Cymfony to use human coders for all their analysis. The answer is not automated vs. human, but a combination of the two.

the constant skeptic

I agree with the hybrid approach... or am I being sarcastic right now? :)

Peter Kowalski

I would suggest that there is a distinction between expressed authorial sentiment (the manifest measure outlined here) and the latent measure of the sentiment that individual texts help to build among members of various social media communities. Any automated process would be limited to either the manifest conceptualization or a latent patterned measure (which would require well-tested operations), while a latent projective measure, used widely to approach constructs that rest in a community's or person's experience, is still best approached through raters trained to an acceptable level of intercoder reliability.

I certainly agree that crude agreement of .4 is blatantly unacceptable (as all crude agreement is; I prefer Cohen's kappa), but the mistakes of the few, in that regard, cannot be considered a stain on content analysis methodologies that are used throughout the social sciences. I tend to agree with Neuendorf when it comes to latent measures: when instructing coders to count chairs, we can rely on their existing notions of what a chair is, rather than risk measurements invalidated by coders following the letter, rather than the spirit, of a patterned rule.
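[Editor's note: for readers unfamiliar with the distinction, Cohen's kappa corrects crude percent agreement for the agreement two coders would reach by chance. A small sketch, assuming scikit-learn and made-up coder labels:]

```python
# Sketch: crude (percent) agreement vs Cohen's kappa for two coders.
# The labels below are made up for illustration.
from sklearn.metrics import cohen_kappa_score

coder_a = ["pos", "neg", "neg", "pos", "neu", "pos", "neg", "neu"]
coder_b = ["pos", "neg", "pos", "pos", "neu", "neg", "neg", "neu"]

crude = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)  # (p_o - p_e) / (1 - p_e)

print(f"crude agreement: {crude:.2f}, Cohen's kappa: {kappa:.2f}")
```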

The question of volume is a different matter entirely, but it too can be addressed through representative sampling, if general brand sentiment is the research question, as long as findings are generalizable within an acceptable confidence interval.
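[Editor's note: a rough illustration of that sampling point. Under the standard normal approximation, the sample size needed to estimate a proportion within a given margin of error barely depends on the total volume of posts; the figures below are illustrative, not from any of the studies discussed.]

```python
# Sketch: sample size for estimating a proportion within a margin of error,
# using the normal approximation n = z^2 * p * (1 - p) / e^2 (worst case p = 0.5).
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    return math.ceil(confidence_z ** 2 * p * (1 - p) / margin_of_error ** 2)

# About 385 posts suffice for +/-5% at 95% confidence, whether the full data
# set holds ten thousand or ten million posts (ignoring the finite population
# correction, which only shrinks this number).
print(sample_size(0.05))
```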

A definite hat tip to everyone on the forefront of computer-aided text analysis. Its incorporation into content analysis is welcome, and I for one am looking forward to the release of Diction 6.0.

anham

A very interesting post (along with its comments, of course) on the long-running debate over sentiment analysis of media articles and blog posts. The debate did not wait for the advent of Web 2.0 to begin.

As has been mentioned above, human sentiment scoring of large data sets depends on the different mental approaches of the individuals doing the analysis, hence different interpretations of similar content. This difficulty can, however, be partially overcome with the help of rigorous analysis grids, like questionnaires, established by the study manager for the case at hand, together with their clients for instance. With such grids, giving guidelines on how to interpret an article filled with sarcasm, quotations, understatements and so on, the analyst is not left sitting alone in front of the articles awaiting analysis. Furthermore, media analysis (and monitoring) having become a true profession with dedicated experts, media analysts tend to incrementally fine-tune the way they interpret articles, based for instance on the language and the culture of the country where they are operating; as a corollary, outsourcing media analysis abroad should be done with great caution.

As far as the Internet is concerned, I actually think sampling is rather a good approach to CGM analysis. Indeed, in the incredibly vast sea of articles and posts on a given topic, only a few of them actually rise to the surface, becoming visible to the human eye and producing an impact on the human brain. The question is not whether sampling should be done, but rather how it should be done: how do you determine which posts are visible, or have authority if you will, on a given topic? At the end of the day, that's what brands will care about: what is being said that could have an impact on my target audiences? Of course, the surface of opinions rests upon the various depths of less visible opinions sitting right below, hence the interest in analysing them as well. That's where our hybrid approach, human + machine, really makes sense.

sdey

I was looking into this a few months ago, but wasn't able to find good literature on it. Any recommendations for NLP/ML algorithms in this field?

indo

Interesting post.

Shouldn't (author, polarity, object) be (author, time, polarity, object), since sentiment is not necessarily static? Perhaps it depends on the resolution of capture with respect to time; however, for individuals, sentiment can change drastically in a short space of time, and being able to map that could definitely be useful. Add to that correlation of timelines, and it may indicate who is talking to or reading whom (not all bloggers link to sources).
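[Editor's note: a tiny sketch of what adding time to that tuple might look like in practice; the type and field names are made up for illustration, not from the original post.]

```python
# Sketch: a sentiment observation keyed by author, time, polarity and object,
# so the same author's shifting sentiment can be tracked over time.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class SentimentObservation:
    author: str
    time: datetime      # when the opinion was expressed
    polarity: float     # e.g. -1.0 (negative) .. +1.0 (positive)
    obj: str            # the brand/product the opinion is about

# Two observations from the same author show sentiment changing over time.
timeline = [
    SentimentObservation("alice", datetime(2007, 12, 1), -0.8, "BrandX"),
    SentimentObservation("alice", datetime(2007, 12, 9), +0.6, "BrandX"),
]
timeline.sort(key=lambda o: o.time)
```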

Thoughts?

charlie salem

Could any readers send me info or links to rivals of, or alternative and better service providers than, Sentiment Metrics that offer a downloadable tool for our account management people to work with?
My email is [email protected]

The comments to this entry are closed.
