There has been a lot of commentary recently on issues relating to an experimental chat bot that Microsoft has (or had) launched, named (after, perhaps, a river in Scotland) Tay. After a brief existence online, the bot was removed after it was persuaded to engage in behaviour widely perceived as offensive. Peter Lee of MSR has this to say about it. While there is much to learn from what transpired, the thing that irks me the most is the continued use of the term Artificial Intelligence to describe these systems - Lee actually calls it an 'artificial intelligence application'. Experimenting with these interactive agents is, no doubt, a useful activity that will teach us much about how humans will interact with actual AI entities in the future, but calling a chat bot of this nature an artificial intelligence application is like calling the icing on a cake a cake. Communicating with humans is essential to artificial intelligence; communicating as a peer in human language, with not much else going on 'upstairs', is not, however, a demonstration of artificial intelligence.
Where does AI actually begin? There was clearly some learning going on (which was, amusingly, abused). At what level does learning + analysis + adaptation + goal become "AI"? While this attempt was most likely more marketing than computer science research, why, specifically, isn't it a form of AI?
Posted by: Dave Steckler | March 30, 2016 at 08:30 PM