Novels are full of new characters, new locations and new expressions. The discourse between characters involves new ideas being exchanged. We can get a hint of this by tracking the introduction of new terms in a novel. In the visualizations below (in which each column represents a chapter and each small block a paragraph of text), I maintain a variable which represents novelty. When a paragraph contains more than 25% new terms (i.e. words that have not been observed thus far), this variable is set to its maximum of 1.0. Otherwise, the variable decays. The variable is used to colour the paragraph, with red being 1.0 and blue being 0.0. The result is that we can get an idea of the introduction of new ideas in novels.
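The pass described above can be sketched roughly as follows. This is my reconstruction, not the actual code behind the visualizations; in particular the decay factor of 0.8 is an assumed parameter (the post only says the variable "decays"), and tokenization here is a naive whitespace split.

```python
def novelty_scores(paragraphs, threshold=0.25, decay=0.8):
    """Score each paragraph with the decaying novelty variable."""
    seen = set()      # terms observed so far in the book
    novelty = 0.0     # the decaying novelty variable
    scores = []
    for para in paragraphs:
        terms = para.lower().split()
        if terms:
            new = [t for t in terms if t not in seen]
            if len(new) / len(terms) > threshold:
                novelty = 1.0      # reset to maximum on a novel paragraph
            else:
                novelty *= decay   # otherwise let the variable decay
            seen.update(terms)
        scores.append(novelty)     # 1.0 -> red, 0.0 -> blue
    return scores
```

Each score would then be mapped to a colour on a red-blue scale and drawn as one small block in its chapter's column.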
In the first book - Austen's Sense and Sensibility - we can see two things. Firstly, the start of the book keeps a pretty good degree of novelty for the first few chapters. Secondly, each chapter introduces something new.
The second book - Stevenson's Kidnapped - shows a different pattern. While it starts off with reasonable novelty, this then dies out for most of the book with spurts of interest here and there.
What is surprising to me (if we take any real meaning from this approach) is that Austen's Emma - the third book - is strong out of the gate (the first 18 chapters) but fails to break the 25% novelty ceiling thereafter.
[Note that these results are preliminary and I'm going to do more validation and testing.]
Update: see below the original visualization for an updated version with more accurate results.
Update: After looking at the above results I drilled down on the strange behaviour in Emma. It turns out that Emma has multiple volumes, within each of which the chapter counter resets to I. Consequently I was picking up chapter titles (I, II, III, IV, V, etc.) as novel terms the first time round and this was driving the visualization. I've since made two modifications: firstly, the algorithm now ignores text blocks (paragraphs) with fewer than 5 words; secondly, I've given it a more dynamic colour scheme.
This improvement still highlights some key differences (again, in as much as the algorithm is correct). However, these differences are now somewhat changed from the first set of observations. Note also that the threshold for novelty has been decreased to 0.1.
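The two algorithmic changes from the update (skipping blocks of fewer than 5 words, and the lowered 0.1 threshold) might look something like the sketch below. Again this is an illustrative reconstruction with an assumed decay factor, not the author's code.

```python
def novelty_scores_v2(paragraphs, threshold=0.1, decay=0.8, min_words=5):
    """Score paragraphs, ignoring very short blocks such as chapter headings."""
    seen = set()
    novelty = 0.0
    scores = []
    for para in paragraphs:
        terms = para.lower().split()
        if len(terms) < min_words:
            # Blocks like "II" or "CHAPTER V" are skipped entirely, so the
            # roman-numeral chapter titles in Emma's volumes no longer
            # register as bursts of novel terms.
            scores.append(None)
            continue
        new_frac = sum(t not in seen for t in terms) / len(terms)
        novelty = 1.0 if new_frac > threshold else novelty * decay
        seen.update(terms)
        scores.append(novelty)
    return scores
```

Skipped blocks are left unscored here (None); an alternative would be to carry the previous colour forward so the chapter column has no gaps.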
The idea of visualizing lexical novelty is very interesting. Could you possibly also visualize word uniqueness, i.e. which paragraphs contain words that are used the least in the whole book, or maybe even in the author's whole body of work? Perhaps that could provide a more visually balanced view of where the author was at her most creative.
Posted by: Michał Tatarynowicz | September 18, 2011 at 07:40 AM
Matthew, we had a very similar problem. We needed to find the top-k prominent words in tweets during specified (6- or 12-hour) time intervals, but we also needed to keep in mind how these words are used in future intervals (book sections in your case). If a word is used frequently, the first interval in which it appeared becomes more important.
We modified tf-idf to work with time: first we compute a prominence value for words within each interval, then we update that prominence over subsequent intervals. You can find out more about it in the paper: http://www.cse.ohio-state.edu/~hakan/publications/soma2010.pdf
Posted by: Cuneytgurcan | September 18, 2011 at 11:56 AM
Very interesting visualization.
It immediately made me wonder what the map of Gravity's Rainbow would be. I'm guessing solid red.
Posted by: ChdrGingrClogne | September 18, 2011 at 12:08 PM
I think my digital humanities friends will love this. I've only been visualizing term occurrences in my vis. tool, but you just made me realize there are so many other metrics that can benefit from this kind of visualization.
Posted by: Silverasm | September 21, 2011 at 11:46 AM
This is an interesting set of visualizations of novels, but I have some reservations about your initial question and the results. I know the work is in a very preliminary stage, so my comments are more provocations than criticisms. First, it's not entirely clear what you mean by 'novelty'. It seems like your methodology is entirely about novelty within each text, in which case the results you get for the two Austen novels aren't very surprising. Almost by definition, the early parts of a novel will consist largely of 'new' words because they haven't been used before in that work, and as you go through the novel it stands to reason that the number of novel words would decrease. The one strange thing that seems to happen is that when you change the threshold to .1, Stevenson looks much more anomalous and doesn't follow the pattern, either of the other two novels or of itself when the threshold is .25. Without some detail about what is actually driving this (the text itself), it's difficult to say what it means. It would be great if pointing to a given color block in the visualization would let the user see what the novel words were in that passage.
Posted by: Seth_denbo | September 21, 2011 at 02:50 PM
Great comments, Seth. The notion of novelty here is simply whether the word has been seen already. There are other things we could do (e.g. novel combinations of words). I agree with you about the arbitrary nature of the .25 threshold. I'll see if I can spend some time on addressing both these issues.
The basic goal is to provide accurate and interesting visualizations of the texture of literature. A long way to go!
Posted by: Matthew Hurst | September 21, 2011 at 03:55 PM
Very fascinating. Is the code open source somewhere? I am curious to try it out.
Posted by: Weston Platter | November 10, 2011 at 01:45 AM