In a recent answer to The Edge Annual Question Roger Schank said:
When reporters interviewed me in the 70's and 80's about the possibilities for Artificial Intelligence I would always say that we would have machines that are as smart as we are within my lifetime. It seemed a safe answer since no one could ever tell me I was wrong. But I no longer believe that will happen.

Of course, GOFAI won't do it!
The wrong-headed directions promoted by the biggest stars of AI have done much, much harm to AI, even leading (as early as 1984!) to a panel discussion at AAAI-84, The Dark Ages of AI, where Schank himself acknowledged the problem. Later, elaborating on his own view of the difficulties, he said:
It is not that AI needs definition, it is more that AI needs substance

Wrong!
AI does need a definition!
No wonder this didn't spur any improvement in the field; the culprit was the obsessive focus on logic that he now recognizes in the Edge answer:
Early AI workers sought out intelligent behaviors to focus on, like chess or problem solving, and tried to build machines that could equal human beings in those same endeavors. While this was an understandable approach it was, in retrospect, wrong-headed.

Yet he doesn't really grasp why this was wrong-headed; it's not because "Chess playing is not really a typical intelligent human activity".
He hints at the real cause ("How can we imitate what humans are doing when humans don't know what they are doing when they do it?") but doesn't quite come through to the full conclusion:
We need to know what we want to do, not just tinker around with "promising" research projects.
The various current crazes about natural language processing, robotics, and even statistical learning will prove as disappointing as GOFAI, perhaps bringing a few high-tech gimmicks on the side, but nothing of more decisive import than expert systems, which were the main outcome of GOFAI.
The common trap into which all AI researchers seem to fall is that they always work on concepts humans have already elaborated; whatever "good tricks" they find for processing and "massaging" those concepts more efficiently, they remain oblivious to the fact that they, not the computer, made up the concepts in the first place.
The most basic question therefore seems to be: what do we put in a concept? When and how do we come up with the words we use to talk and think: apple, water, red, line, cosine, mountain, etc.?
In the rest of the Edge answer Roger Schank basically gives up.
Well, good riddance sir...
One less nearly useless budget eater!
Submitted by Kevembuangga