The fallibility of introspection as a means of understanding consciousness is well known. Tempting as it may be to refer to the 'voice' inside our head - the inner monologue, or thought process - one can never win any arguments by playing this card. While we may, at most, take that common experience as an indication of the separation between conscious and subconscious thought, we can't claim that intelligence works this way or that way by summarizing a thought process.
If we could do that, we would simply declare that intelligence involves inference, self-awareness, symbolic reasoning, and so on. This argument can be brushed aside by noting that we have no evidence that our inner monologue, or stream of consciousness, is a prime mover - it may well be a post hoc phenomenon.
However, when it comes to communicating with other agents in what we perceive to be the real world, we have created an interface that does appear to have all of these nice qualities: symbols, structure, stereotypes and so on are all used to externalize our thoughts, and as an input mechanism to grasp the inner workings of our fellow beings. And while it is attractive to believe in the emergence of intelligence from huge data sets and massive but simple processing power - that intelligence will arise from the simplest machines if only we throw enough data at them - the fact of the matter is that much of what we learn as humans, we learn by consuming structured symbols of various kinds.
Fernando asks: "How do you know [that] the power to generalize [doesn't come from massive scale]?"
Fernando's post is somewhat confusing. Scientific discovery is perhaps the most celebrated example of the qualities of intelligence that I require for AI; science is perhaps the most formal, structured, symbolic and hierarchical form of communication that society has created. Yet Fernando offers the example of scientists creating machines to mine genomic data for repeated structures as one that supports the use of scale for AI. But how did we get from the genomic data - represented as a simple sequence - to the problem of finding patterns in it? That step requires all the symbolic, hierarchically structured knowledge: the genetic model.
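To make the point concrete, consider how even the most naive 'pattern mining' over a genome presupposes symbolic choices. The sketch below is a hypothetical illustration (not anything from Fernando's post): it finds repeated k-mers in a DNA string, and the alphabet, the window length k, and the very definition of a 'repeat' are all fixed before a single byte of data is seen.

```python
from collections import Counter

def repeated_kmers(seq: str, k: int = 6, min_count: int = 2) -> dict:
    """Count every length-k substring and keep those that recur.

    Note the symbolic commitments baked in before any data arrives:
    the alphabet (A/C/G/T), the window length k, and the notion that
    a 'repeat' means an exact substring match.
    """
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return {kmer: n for kmer, n in counts.items() if n >= min_count}

if __name__ == "__main__":
    # Toy sequence; a real pipeline would read FASTA files.
    dna = "ACGTACGTGGACGTACGTTT"
    print(repeated_kmers(dna, k=4))  # e.g. {'ACGT': 4, 'CGTA': 2, ...}
```

Scale can make such a search fast; it does not supply the genetic model that tells us exact substring repeats are worth looking for in the first place.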
In (partial) answer to Fernando's question: clearly the parallelism of the brain is considerable, but that is not the type of scale that Larry Page is talking about (that is to say, the symbols - the units and mechanisms of representation - and the operations involved are quite different).