(In this age of lightning-fast communication I suspect you all already know that Judea Pearl won the Turing Award. Even so, it is worth posting about.)
Judea Pearl has won the 2011 Turing Award (given in 2012). (See here for the announcement.)
He is not a theorist; however, he did champion the use
of probability and graph theory in AI.
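Pearl's best-known contribution in this vein is the Bayesian network. As a minimal sketch (my own illustration, not Pearl's code, with made-up numbers), here is exact inference by enumeration in a two-node network, Rain → WetGrass:

```python
# A toy Bayesian network: Rain -> WetGrass.
# All probabilities below are invented for illustration.

P_RAIN = 0.2                      # prior P(Rain = true)
P_WET_GIVEN_RAIN = {True: 0.9,    # P(WetGrass = true | Rain = true)
                    False: 0.1}   # P(WetGrass = true | Rain = false)

def posterior_rain_given_wet():
    """P(Rain = true | WetGrass = true) via Bayes' rule."""
    joint_rain = P_RAIN * P_WET_GIVEN_RAIN[True]            # P(rain, wet)
    joint_no_rain = (1 - P_RAIN) * P_WET_GIVEN_RAIN[False]  # P(no rain, wet)
    return joint_rain / (joint_rain + joint_no_rain)

print(round(posterior_rain_given_wet(), 3))  # -> 0.692
```

Real networks have many nodes, and enumeration blows up exponentially; Pearl's belief-propagation algorithm exploits the graph structure to do this kind of update efficiently.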
Strong AI is the attempt to make computers match or exceed human intelligence.
People in Strong AI might see how humans do things
and try to get a program to mimic that.
Putting aside philosophy (are we all just rather complicated DFAs?), the goal of Strong AI seems to be hard to achieve even if it is possible.
Weak AI has more modest goals (and a much shorter Wikipedia page):
let's get a computer that
can do a well-defined task (e.g., medical diagnosis) well.
They want to get things to work and may very well NOT
take how humans do it as an inspiration.
Do humans do the kinds of probabilistic calculations that
Judea Pearl works with? I tend to doubt it.
Once Strong AI produces real results, it is called Weak AI.
Once Weak AI produces real-world products, it's called something else
(robotics, natural language processing, vision; there are other examples).
I ask all of the following non-rhetorically.
NONE of the questions are meant to question the award.
Did Judea Pearl work in Strong AI or Weak AI?
Was Judea Pearl one of the first people to put AI on a more rigorous mathematical foundation?
Does AI use serious math? (I know that Control theory does, though is that AI?)
Did Judea Pearl (or his group) build software that is actually being used someplace?
What are the criteria for good work in AI?
Who might be the next AI Turing Award Winner?