On the way back from Boulder, I picked up Business Week and read therein yet another story about Google, one which perhaps raises the bar for mainstream coverage of Google paranoia. Here's the link to the story.
With all the Google coverage of late, regardless of the author's affection for or aversion to the company, artificial intelligence (AI) is nearly always mentioned. The central theme of Google's AI is that massive scale, vast data sets and planet-sized computers will, eventually - almost naturally - result in AI.
This is a weak 'vision'. It upsets me because driving for scale of this type sidesteps the fundamental power to generalize. Human intelligence excels at establishing and exploiting generalizations; generalization is fundamental to language, reasoning, logic, philosophy, music, and thought itself.
Artificial intelligence as a term has, for many reasons, been diluted over the last decade. While the behaviour of such an intelligence as envisioned by Larry Page may not be that different from what I envision, evidence of the term's maltreatment can be found in some of the additional content. Here, when Schmidt (Google's CEO) is asked about AI, he notes:
Our spelling correction...is an example of AI.
Nice.
To be honest, when I talk about AI, I really mean systems that exhibit human-like intelligence (which could be far more powerful in some dimensions than a human, but ultimately with a capacity to reason, conjecture, plan and execute). AI, as used by Eric Schmidt, clearly means something more like a useful tool.
Actually, if you look at the attempt to define AI in Russell & Norvig's book (Artificial Intelligence: A Modern Approach, http://aima.cs.berkeley.edu/), they draw a clear distinction between systems that:
i) Think and act like humans
ii) Think and act rationally
I tend to stick to the second definition because I suppose we are not interested in creating unstable intelligences (scary ;) ), but in emulating perfectly rational behaviour, which is not what we always get when dealing with humans.
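To make that second definition concrete: in the Russell & Norvig framing, a rational agent just maps its percept history to the action with the highest expected utility. Here is a minimal Python sketch of that idea; the umbrella example, the outcome probabilities and the utility numbers are all invented for illustration:

# Minimal sketch of a Russell & Norvig style rational agent: it picks the
# action that maximizes expected utility given what it has perceived so far.
from typing import Callable, Dict, List

class RationalAgent:
    def __init__(self,
                 actions: List[str],
                 outcome_model: Callable[[List[str], str], Dict[str, float]],
                 utility: Callable[[str], float]):
        self.actions = actions               # actions available to the agent
        self.outcome_model = outcome_model   # P(outcome | percepts, action)
        self.utility = utility               # value the agent places on outcomes
        self.percepts: List[str] = []        # everything perceived so far

    def perceive(self, percept: str) -> None:
        self.percepts.append(percept)

    def act(self) -> str:
        """Choose the action with the highest expected utility."""
        def expected_utility(action: str) -> float:
            dist = self.outcome_model(self.percepts, action)
            return sum(p * self.utility(o) for o, p in dist.items())
        return max(self.actions, key=expected_utility)

# Toy usage: deciding whether to carry an umbrella.
def outcomes(percepts, action):
    rain_p = 0.8 if "dark clouds" in percepts else 0.1
    if action == "take umbrella":
        return {"dry": 1.0}
    return {"wet": rain_p, "dry": 1.0 - rain_p}

agent = RationalAgent(["take umbrella", "leave umbrella"],
                      outcomes, {"dry": 1.0, "wet": -1.0}.get)
agent.perceive("dark clouds")
print(agent.act())  # -> "take umbrella"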
Posted by: Jairson Vitorino | April 01, 2007 at 09:20 AM
Thinking that AI will emerge simply if there is enough data in its database is like thinking you can make a brick fly if only you glue enough feathers to it.
Posted by: Alan | April 01, 2007 at 02:20 PM
If it works, it's not AI anymore, eh?
No, seriously, you make a good point, but this does sound a lot like the debate between strong and weak AI
http://en.wikipedia.org/wiki/Strong_AI_vs._Weak_AI
where you appear to be coming down hard on the strong AI side.
That's fine, but, given how far we are from making anything even vaguely resembling strong AI a reality, it might mean you would have to answer my first question with a "yes" (at least for the next few decades).
Posted by: Greg Linden | April 01, 2007 at 05:15 PM
But isn't the idea that simple machines/systems compound to form complex systems? If I had a million intelligent agents working for me, couldn't I just write a meta agent that wraps all their capacity up into a "human behaving" interface? It's all the same to me. We too are just a collection of an infinite number of simple systems, like our own ability to spell check. As for thinking and acting rationally and like humans, that's subjective. I don't know too many humans that act human or think rationally. If we create a machine that can teach or enforce critical thinking, that would be a start. Artificial sentience will emerge; it's inevitable, since our own sentience is the product of emergence.
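That wrapping-up is easy to sketch in code. Assuming each narrow agent exposes some can_handle/handle protocol (a made-up convention for illustration), the meta agent is little more than a dispatcher:

# Sketch of a meta agent wrapping many simple agents behind one interface.
# The agents and the can_handle/handle protocol are hypothetical.

class SpellAgent:
    def can_handle(self, request: str) -> bool:
        return request.startswith("spell:")
    def handle(self, request: str) -> str:
        word = request.split(":", 1)[1]
        return f"best guess for '{word}'"  # a real agent would correct it

class MathAgent:
    def can_handle(self, request: str) -> bool:
        return request.startswith("sum:")
    def handle(self, request: str) -> str:
        numbers = [int(n) for n in request.split(":", 1)[1].split(",")]
        return str(sum(numbers))

class MetaAgent:
    """One "human behaving" interface over a pile of narrow agents."""
    def __init__(self, agents):
        self.agents = agents
    def ask(self, request: str) -> str:
        for agent in self.agents:
            if agent.can_handle(request):
                return agent.handle(request)
        return "I don't know."  # the gap: no narrow agent generalizes

meta = MetaAgent([SpellAgent(), MathAgent()])
print(meta.ask("sum:1,2,3"))  # -> 6
print(meta.ask("why?"))       # -> I don't know.

Whether such a dispatcher ever amounts to the generalization the post is asking for is, of course, exactly the point in dispute.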
Read Kevin Kelly's Out of Control: The New Biology of Machines, Social Systems, and the Economic World; it's online here:
http://www.kk.org/outofcontrol/contents.php
One of the best books I've ever read.
Principia Cybernetica also has some great downloadable resources on the dynamics of systems theory, which is all-important because it is universally relevant:
http://pespmc1.vub.ac.be/
Posted by: Jake Lockley | April 01, 2007 at 08:22 PM
There is pretty good evidence that there is a large measure of automaticity (or, for CS people, precompilation) in human abilities in reasoning, planning, language and so on. AI people tend to get excited by the limit cases in which fancy levels of flexibility are needed, but these might be marginal for the purpose of reproducing what people usually do. My guess is that Google's spell checker does as well as or better than an average copy editor on a typical day, but doesn't approach the performance of even the average copy editor who is paying full attention, much less a great copy editor on a good day.
Whether it is AI to approach the performance of tired people doing a rather boring job is something that I can live without trying to decide. But whatever you call it, it seems worthwhile.
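For what it's worth, the "precompiled" flavour of spelling correction is easy to see in code. Below is a stripped-down corrector in the spirit of Peter Norvig's well-known sketch: generate every string within one edit of the input, keep those that are known words, and return the most frequent. The tiny word-count table is invented for illustration; a production system would use counts from an enormous corpus or query log:

# A stripped-down spelling corrector (in the spirit of Peter Norvig's
# well-known sketch): propose every string within one edit of the input,
# keep the ones that are known words, and return the most frequent.
# WORD_COUNTS is a toy stand-in for counts from a large corpus.

WORD_COUNTS = {"the": 500, "they": 120, "then": 90, "hello": 30, "help": 25}
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one delete, transpose, replace, or insert away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in ALPHABET]
    inserts = [a + c + b for a, b in splits for c in ALPHABET]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    if word in WORD_COUNTS:                          # already a known word
        return word
    candidates = edits1(word) & set(WORD_COUNTS)     # known words one edit away
    if candidates:
        return max(candidates, key=WORD_COUNTS.get)  # most frequent wins
    return word                                      # nothing better: leave it

print(correct("helo"))  # -> "hello"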
Posted by: Chris Brew | April 02, 2007 at 10:24 PM
While I'm glad about the responses that my short post has provoked (as well as Fernando's longer post and his posse's comments), I suspect that I have been misunderstood on one point. I don't mean to use AI (as I understand it) as a distinction between utility and the lack of it. Clearly, a spelling correction algorithm is useful (unless you want your kids to learn how to spell, that is - another story). The post was more about the perception of AI and the way in which the term is used for anything with non-trivial complexity.
Posted by: Matthew Hurst | April 02, 2007 at 10:53 PM
I think Google's vision is wrong, but not for the reasons others have given. Because there are petabytes of data on the Web, the prime AI question is how to organize them. In fact, the Google vision is a kind of weak/strong hybrid: the AI is weak, but petabytes (the fact that there is always a pat response to any question) make it a strong debater.
No, the reason Google is wrong is that they have no method of either indexing the data or providing the right keys to retrieve it.
Any retrieval system must feature natural language strongly. With petabytes, NL is the most important question of cognitive AI.
Petabytes + buen español (i.e. good translation) = Turing. Do we get buen español from Google? Do we hell! "¿Quieres dormir con fósforo?" ("Do you want to sleep with a matchstick?") Google does not understand the different senses of "match". If it did, I could say that I did not have a partner and was going to a dance. It would then invoke matching software (there is a lot of it around). After the dance I might sleep with my "correspondento".
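The "match" problem is word-sense disambiguation, and even a crude context check beats none at all. Here is a toy Python sketch; the sense table and cue words are invented for illustration, and a real system would score senses statistically over far richer context:

# Toy word-sense disambiguation for translating English "match" into Spanish.
# The sense inventory and cue words are invented for illustration.

SENSES = {
    "match": [
        ("fósforo", {"light", "fire", "smoke", "strike"}),    # matchstick
        ("partido", {"football", "team", "score", "win"}),    # sports match
        ("pareja",  {"dance", "date", "partner", "romance"}), # romantic match
    ],
}

def translate_word(word, sentence):
    """Pick the sense whose cue words overlap the sentence the most."""
    context = set(sentence.lower().split())
    senses = SENSES.get(word)
    if not senses:
        return word
    best, _cues = max(senses, key=lambda s: len(s[1] & context))
    return best

print(translate_word("match", "i need a match for the dance on friday"))
# -> "pareja", not "fósforo"
# With no overlapping cues, the first sense ("fósforo") wins by default -
# which is exactly the failure mode described above.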
Google's vision is right but they show no sign of getting there.
Posted by: Ian Parker | April 06, 2007 at 02:31 PM