The field of AI has always been a big draw for me. Understanding the processes that yield intelligent behavior has to be up near the top of the list of the BIG problems (up there with the origins of life and the origins of the universe).
However, the underlying premises of mainstream AI research during the 80's (and even today) have always struck me as wrong-headed. First Order Logic is useful if you want to create software that behaves as if it has intelligence in a limited domain, but is that behavior really intelligence? Turing thought it was (hence the Turing Test), but many philosophers disagreed, and this led to much heated but ultimately worthless debate over whether Chinese rooms can be intelligent, and the like.
Back in my graduate school days I wrote a thesis arguing that intelligence needs to be built on a foundation more akin to computer simulation than to logic-based inference. My thinking was that simulation drives the ability to make predictions about what will happen in the world, and predicting what will happen is a prerequisite to applying rules to act intelligently. Prediction precedes inference.
It was thus very refreshing to listen to Jeff Hawkins' talk at TED. He too argues for a definition of intelligence based on prediction. His research is focused more on new types of memory architectures than on software, but I think his work will be the foundation for the kinds of software solutions that will one day give us a HAL.
Tuesday, April 15, 2008