

April 01, 2007

Comments

Jairson Vitorino

Actually, if you look at the attempt to define AI in Russell & Norvig's book (Artificial Intelligence: A Modern Approach, http://aima.cs.berkeley.edu/), they draw a clear distinction between systems that:

i) Think and act like humans
ii) Think and act rationally

I tend to stick to the second definition because I suppose we are not interested in creating unstable intelligences (scary ;) ), but in emulating perfectly rational behaviour, which is not always what we get when dealing with humans.

Alan

Thinking that AI will emerge simply if there is enough data in its database is like thinking you can make a brick fly if only you glue enough feathers to it.

Greg Linden

If it works, it's not AI anymore, eh?

No, seriously, you make a good point, but this does sound a lot like the debate between strong and weak AI

http://en.wikipedia.org/wiki/Strong_AI_vs._Weak_AI

where you appear to be coming down hard on the strong AI side.

That's fine, but, given how far we are from making anything even vaguely resembling strong AI a reality, it might mean you would have to answer my first question with a "yes" (at least for the next few decades).

Jake Lockley

But isn't the idea that simple machines/systems compound to form complex systems? If I had a million intelligent agents working for me, couldn't I just write a meta agent that wraps all their capacity up into a "human behaving" interface? It's all the same to me. We too are just a collection of an infinite number of simple systems, like our own ability to spell check. As for thinking and acting rationally and like humans, that's subjective. I don't know too many humans who act human or think rationally. If we create a machine that can teach or enforce critical thinking, that would be a start. Artificial sentience will emerge; it's inevitable, since our own sentience is the product of emergence.
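The "meta agent" idea above can be sketched concretely: many narrow agents, each with one capability, behind a single dispatching interface. This is a minimal illustrative sketch; all class and method names here (SpellAgent, ArithmeticAgent, MetaAgent, can_handle) are hypothetical, not from any particular agent framework.

```python
class SpellAgent:
    """Narrow agent: corrects a few known misspellings."""
    FIXES = {"teh": "the", "recieve": "receive"}

    def can_handle(self, task):
        return task[0] == "spell"

    def run(self, task):
        word = task[1]
        return self.FIXES.get(word, word)


class ArithmeticAgent:
    """Narrow agent: evaluates simple additions."""
    def can_handle(self, task):
        return task[0] == "add"

    def run(self, task):
        return task[1] + task[2]


class MetaAgent:
    """Wraps a collection of simple agents behind one uniform interface."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, task):
        # Dispatch to the first agent that claims the task.
        for agent in self.agents:
            if agent.can_handle(task):
                return agent.run(task)
        raise ValueError("no agent can handle task: %r" % (task,))


meta = MetaAgent([SpellAgent(), ArithmeticAgent()])
print(meta.run(("spell", "teh")))  # the
print(meta.run(("add", 2, 3)))     # 5
```

The wrapper looks "smarter" than any one agent only in breadth, which is exactly the point under debate: composition adds coverage, not necessarily understanding.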

Read Kevin Kelly's Out of Control: The New Biology of Machines, Social Systems, and the Economic World; it's online here:

http://www.kk.org/outofcontrol/contents.php

One of the best books I've ever read.

Principia Cybernetica also has some great downloadable resources on the dynamics of systems theory, which matters because it is universally relevant:

http://pespmc1.vub.ac.be/

Chris Brew

There is pretty good evidence that there is a large measure of automaticity (or, for CS people, precompilation) in human abilities in reasoning, planning, language and so on. AI people tend to get excited by the limit cases in which fancy levels of flexibility are needed, but these might be marginal for the purpose of reproducing what people usually do. My guess is that Google's spell checker does as well as or better than an average copy editor on a typical day, but doesn't approach the performance of even the average copy editor when they are paying full attention, much less a great copy editor on a good day.
Whether it is AI to approach the performance of tired people doing a rather boring job is something that I can live without trying to decide. But whatever you call it, it seems worthwhile.
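For readers curious what a statistical spell checker of the kind discussed above actually does, here is a toy sketch in the noisy-channel style: generate candidates one edit away from the input and pick the one most frequent in a corpus. The word counts here are tiny and invented for illustration; a production system would use web-scale counts and a real error model.

```python
# Hypothetical miniature corpus frequencies (illustrative only).
WORD_COUNTS = {"spelling": 50, "spilling": 3, "checker": 40, "the": 1000}
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one delete/transpose/replace/insert away from word."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in ALPHABET]
    inserts = [a + c + b for a, b in splits for c in ALPHABET]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the known word nearest to `word`, ranked by corpus frequency."""
    if word in WORD_COUNTS:
        return word
    candidates = [w for w in edits1(word) if w in WORD_COUNTS]
    if not candidates:
        return word  # nothing better known; leave it alone
    return max(candidates, key=WORD_COUNTS.get)

print(correct("speling"))  # spelling
```

Note the "tired copy editor" character of this approach: it is fast and usually right, but it has no notion of meaning, so it cannot catch a correctly spelled word used in the wrong place.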

Matthew Hurst

While I'm glad about the responses that my short post has provoked (as well as Fernando's longer post and his posse's comments), I suspect I have been misunderstood on one point. I don't mean to use AI (as I understand it) as a distinction between utility and the lack of it. Clearly, a spelling correction algorithm is useful (unless you want your kids to learn how to spell, that is - another story). The post was more about the perception of AI and the way the term is used for anything with non-trivial complexity.

Ian Parker

I think Google's vision is wrong, but not for the reasons other people have given. Because there are petabytes out there on the Web, the prime AI question is how to organize them. In fact the Google vision is a kind of weak/strong vision: the AI is weak, but petabytes (the fact that there is always a pat response to any question) make it a strong debater.

No, the reason Google is wrong is that they have no method of either indexing this material or providing the right keys to retrieve it.

Any retrieval system must feature natural language strongly. With petabytes, NL is the most important question of cognitive AI.

Petabytes + "bueno espagnol" (i.e., good translation) = Turing. Do we have good Spanish from Google? Do we hell! "¿Quieres dormir con fósforo?" ("Do you want to sleep with a matchstick?") - Google does not understand the different senses of "match". If it did, I could say that I did not have a partner and was going to a dance. It would then invoke matching software (there is a lot of it around). After the dance I might sleep with my "correspondento".

Google's vision is right but they show no sign of getting there.
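The "match" complaint above is a word-sense disambiguation problem: a word-for-word lookup always emits one translation regardless of context. A minimal sketch of context-sensitive sense selection follows; the sense inventory and cue words are hand-made and purely illustrative, not how Google's translator works.

```python
# Hypothetical sense inventory: each sense of an English word carries a
# Spanish translation and a set of context cue words (illustrative only).
SENSES = {
    "match": [
        {"translation": "fosforo", "cues": {"light", "fire", "smoke", "strike"}},
        {"translation": "pareja",  "cues": {"dance", "partner", "date", "couple"}},
        {"translation": "partido", "cues": {"football", "score", "team", "play"}},
    ],
}

def translate_word(word, context_words):
    """Pick the sense whose cue words overlap the surrounding context most."""
    senses = SENSES.get(word)
    if not senses:
        return word  # unknown word: pass through unchanged
    best = max(senses, key=lambda s: len(s["cues"] & set(context_words)))
    return best["translation"]

print(translate_word("match", {"going", "to", "a", "dance", "partner"}))  # pareja
```

With an empty or unhelpful context the sketch falls back to the first listed sense, which reproduces exactly the "fosforo" failure the comment describes.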

