Besides the Turing machine, Alan Turing also developed what we now call the Turing Test, introduced in his seminal 1950 paper "Computing Machinery and Intelligence". Turing describes his imitation game and argues that if a judge cannot distinguish between communicating with a computer and communicating with a human, then this is an indication that the computer "thinks".
It's not clear exactly when a computer will pass the Turing Test, but with systems like Watson and Siri we are getting very close. One can imagine that if we run the right machine learning algorithms on large data sets like Facebook chats or text messages, we could create a system that would fool most people. But would it really be intelligent?
Think about how Google Translate works. There is no understanding of the meaning of the words in either language, just a statistical approach that develops a function mapping one language into another. Is this method really intelligence? Are Google's computers "thinking"? Not really.
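To make that concrete, here is a deliberately crude sketch of phrase-based statistical translation. The phrase table, probabilities, and segmentation below are all invented for illustration; real systems learn such tables from huge parallel corpora, but the principle is the same: probability lookups, not understanding.

    # Toy statistical translation: pick the most probable target phrase
    # for each source phrase. No semantics anywhere, just a lookup.
    PHRASE_TABLE = {
        "the cat": [("le chat", 0.7), ("la chatte", 0.3)],
        "sleeps": [("dort", 0.9), ("sommeille", 0.1)],
    }

    def translate(sentence):
        """Greedily map each pre-segmented phrase to its likeliest target."""
        output = []
        for phrase in sentence.lower().split(" | "):
            candidates = PHRASE_TABLE.get(phrase, [(phrase, 1.0)])  # pass unknowns through
            best, _prob = max(candidates, key=lambda c: c[1])
            output.append(best)
        return " ".join(output)

    print(translate("the cat | sleeps"))  # -> "le chat dort"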
In a couple of years it will be clear that computers easily pass the Turing test. It will be another milestone, like beating humans at Chess and Jeopardy. The computers won't be any more intelligent for passing the test, but they will be more useful. And I'll take useful over intelligent any day.
Turing addresses essentially this objection in his article. I like the way he puts it:
ReplyDelete"This argument appears to be a denial of the validity of our test. According to the most extreme form of this view the only way by which one could be sure that machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe "A thinks but B does not" whilst B believes "B thinks but A does not." instead of arguing continually over this point it is usual to have the polite convention that everyone thinks."
Yeah, but maybe all of us humans are just fooling each other into thinking we're understanding. Is there really any pragmatic difference?
Machines are nowhere near passing the Turing test. A "bot" that replies "OMG" to everything will fool some people. The top recent Turing competitor had no idea how to respond to "I hear congratulations are in order." I watched the Jeopardy contest and it seemed clear that the questions were skewed towards Watson, with more questions of fact and fewer verbal puns. If you have a program that isn't really intelligent, it will not convince a panel of interrogators that it is intelligent. Calling anything less than such an interrogation a "Turing" test is missing the point.
The phrase "pass the Turing Test" is too ambiguous to really say whether in time computers will or won't do so.
In small domains (like Watson, Siri, and chess-playing programs), computer behavior will be like humans'.
In larger domains the question is harder to pose.
I think we can say with certainty that Watson and Deep Blue do not play Jeap and Chess the same way that Jennings and Kasparov do. The fact that humans do as well as they do against (say) a chess program that looks x moves ahead (I'm not sure what x is nowadays) says something about human intelligence, but I'm not sure what.
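As a side note, "looks x moves ahead" has a precise meaning: depth-limited minimax search. Here is a bare sketch with a made-up toy game standing in for chess; real engines add alpha-beta pruning, move ordering, and carefully tuned evaluation functions.

    # Depth-limited minimax: what "looks x moves ahead" means concretely.
    # Only the search skeleton is shown; everything game-specific is
    # passed in as functions.
    def minimax(state, depth, moves, apply_move, evaluate, maximizing=True):
        options = moves(state)
        if depth == 0 or not options:
            return evaluate(state)  # static evaluation at the search horizon
        values = [minimax(apply_move(state, m), depth - 1, moves,
                          apply_move, evaluate, not maximizing)
                  for m in options]
        return max(values) if maximizing else min(values)

    # Toy game (purely illustrative): players alternately add 1 or 2 to a
    # counter capped at 10; the evaluation is just the counter value.
    print(minimax(0, 4,
                  moves=lambda s: [1, 2] if s < 10 else [],
                  apply_move=lambda s, m: s + m,
                  evaluate=lambda s: s))  # looks 4 moves ahead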
Don't both Watson and the humans memorize vast numbers of set blocks of moves? Don't they both look many moves ahead? I wouldn't call the way human masters play any more like how a normal person plays than the way the computer approaches it.
What is Jeap?
Deep Blue's approach to chess is as unnatural as Kasparov's, really.
Watson's approach to Jeopardy! seems more like a person with a photographic memory, and it is impressive in terms of "AI" since it does have to figure out which stored information gives the question to the provided answer.
Do you know the book The Most Human Human? It's a great read on what the Turing Test is and what matters in communication between humans and computers.
http://www.amazon.com/dp/0307476707/
Scott Aaronson taught a course at MIT in the fall called "Philosophy and Theoretical Computer Science". There was a student project about criticisms of the Turing test, which I found instructive.
We are placing an unfair burden on computers when we demand that they behave like humans to prove their intelligence.
Note that totally different standards are used in the study of animal intelligence.
For instance, a rat will be deemed intelligent if it can find its way in a maze. No one expects the rat to be able to impersonate the lab's PI.
It never ceases to amaze me that anybody believes that I am actually conscious. I keep waiting for people to realize that I'm just a meat computer, with no "mental states" or "qualia" whatsoever, just firing neurons. I do not think; therefore, I am not.
But assuming that the rest of you are conscious has proven to be useful, so I'll run with it.
"Think about how Google Translate works."
Not very well?
I think that the philosophy of machine intelligence is primarily a question of computational complexity, and I'm surprised that you're not phrasing it in this language. Let me give an example. Suppose you had a computer (an extended version of Watson, in essence) that stored a giant lookup table with reasonable continuations to every possible beginning of a conversation. It's quite obvious from your description of Google Translate that you (rightfully so, I think) don't believe this represents machine intelligence, even if it passes the Turing test.
The reason is not simply because such a machine cannot exist (the lookup table is too large). In a philosophical or mathematical light, many more ridiculous situations have been used to argue a point. The reason is because we somehow associate intelligence with computing in a bounded environment. We imagine some sort of 'reasoning' must happen inside of the computer, or that it must have some sort of small internal representation of the world to do its business.
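(Just how large? A rough back-of-the-envelope calculation, with arbitrary but generous numbers, shows the table would dwarf the observable universe.)

    # Back-of-the-envelope: the conversation lookup table cannot exist.
    # Assume (arbitrarily) a 64-symbol alphabet and conversation prefixes
    # of 300 characters -- short by Turing-test standards.
    alphabet_size = 64
    prefix_length = 300
    table_entries = alphabet_size ** prefix_length  # one entry per prefix

    atoms_in_universe = 10 ** 80  # common rough estimate
    print(table_entries > atoms_in_universe)  # True, by a vast margin
    print(len(str(table_entries)))            # 542 digits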
Once we recognize that this is the test, the obvious modification to make to the Turing test is to impose both space and time bounds on it. This raises more philosophical questions, such as what is a reasonable time bound on human processing? On one hand, it requires years to learn language to the point where we can engage in reasonable conversation; on the other hand, once we're there our conversations essentially require constant time regardless of the query. You might even go so far as to say our stored opinions of the world are put in a lookup table and the only processing is how we phrase it for a given conversation. And of course that brings up the space bound, too...
So I think the Turing test is still interesting, but it requires a different focus that might not have been around when Turing was doing his work.
There was a "Turing Test" done a while ago where the humans were ALSO identified as computers almost as often as the machines were. This was said to show that the computer passed, since it was indistinguishable from the humans.
Well, Siri is *very far* from even remotely human :-), quite an annoying program, that. However, human-like intelligence from man-built systems is not very far in the future. A good and realistic attempt is what Jeff Hawkins describes in his book "On Intelligence". According to him, human intelligence is essentially the ability to predict based on information available to the system. He bases his approach, however, on real-brain neuronal studies. As strange as it might sound, there is nothing wrong with learning from the brain itself if we want to build something that behaves similarly. That, however, involves moving away from the world of classical computer (and complexity) science towards one involving processing and representation by massive analog and/or discrete and noisy circuits. I would not be surprised if there were results/theorems in this analog-discrete hybrid world with no *known* counterparts in classical CS, but which would be essential for human-like devices. Fortunately there is now increasing (experimental) activity in brain research and more research within the analog massive-circuitry realm, so I am increasingly optimistic that there will be some big advances towards human-like reasoning systems.
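For what "intelligence as prediction" might look like in miniature, here is a toy next-word predictor; the training text and the whole setup are only illustrative, and Hawkins of course has far richer machinery in mind.

    from collections import Counter, defaultdict

    # Toy "intelligence as prediction": remember which word tends to
    # follow which, then predict the most common successor.
    def train(text):
        successors = defaultdict(Counter)
        words = text.split()
        for current, nxt in zip(words, words[1:]):
            successors[current][nxt] += 1
        return successors

    def predict(model, word):
        counts = model.get(word)
        return counts.most_common(1)[0][0] if counts else "?"

    model = train("the cat sat on the mat and the cat slept")
    print(predict(model, "the"))  # -> "cat" (follows "the" twice)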
ReplyDelete"In a couple of years it will be clear that computers easily pass the Turing test. It will be another milestone, like beating humans at Chess and Jeopardy."
Personally, I think passing a serious Turing Test, with good and motivated judges and human participants and plenty of time, is qualitatively radically different from winning chess or Jeopardy, and using current techniques even with access to all the conversations that have ever taken place won't get anywhere close... This is different from keeping things ambiguous for a few short questions that the judge didn't design very carefully, which is not very interesting. For example, with enough time, you could engage in a detailed novel fantasy with the other party.
Turing's original test was the imitation game. On IRC and instant messenger, bots participate in conversations without people knowing that a machine is one of the participants. Computers pass Turing's "Imitation Game" test every day, and we no longer think it remarkable. Thus, in keeping with the traditions of AI, we moved the bar further along and now demand that computers pass some sort of interrogation by people trained in asking questions that are hard for computers and less hard for people. I think humans will fail this form of the "Turing test" before any computer passes it. I don't think I could convince someone, over IRC, that I was a human and not a bot.
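For a sense of how little machinery such bots need, here is a minimal ELIZA-style responder; the patterns and canned replies are invented for illustration.

    import random
    import re

    # A minimal ELIZA-style responder, the kind of bot that passes
    # unnoticed in casual IRC chatter. Rules and replies are illustrative.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\byou\b", re.I), "We were discussing you, not me."),
    ]
    FALLBACKS = ["Tell me more.", "Interesting. Go on.", "OMG"]

    def reply(message):
        for pattern, template in RULES:
            match = pattern.search(message)
            if match:
                return template.format(*match.groups())
        return random.choice(FALLBACKS)

    print(reply("I feel like nobody understands me"))
    # -> "Why do you feel like nobody understands me?"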
The old AI example of being able to "read" the written word has been pushed away as not intelligence, yet being able to read the scrawls in the box on this page is how they decide that I am human enough to post this.
I frequently misread CAPTCHAs. Is that a lower-case c or an obscured o? ol or d? l or i or 1? g or q? Even worse, computers are getting better at solving CAPTCHAs, which is driving the test creators to make them harder to read, which is making it more and more likely that I am going to be ruled a "robot" at some point in the future...
Argh! It even happened while I was posting this response!
I just got a Nexus 7 tablet in the mail a few days ago. I installed the assistant app and it didn't pass the Turing test I gave it. However, the tablet itself did pass the Voight-Kampff test that I applied.
It had probably been given memories to think it was a real tablet (or it doesn't have an iris to measure).
********************
"We all have friends we love dearly that couldn't pass for human in a strict Turing test."
— Penn Jillette
********************
On occasion, I doubt even my own self! :)
0101001101101111001000000110010001101111001000000100100100101110
The above is the ASCII code for "So do I."
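One can check this with a few lines of Python: split the bit string into 8-bit chunks and map each chunk to its ASCII character.

    # Decode the bit string above: 8 bits per ASCII character.
    bits = ("01010011011011110010000001100100"
            "01101111001000000100100100101110")
    text = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
    print(text)  # -> "So do I."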
Incidentally, these CAPTCHAs ("Completely Automated Public Turing tests to tell Computers and Humans Apart") are getting harder---I've failed it twice on these comments.
We are *decades* away from passing an unrestricted Turing test -- so far away that almost no reputable researchers are even trying and the Loebner tests are occupied only by nonacademic hobbyists and people trying to sell chatbots.
There is a good recent (2004) book by Stuart Shieber, who is among other things an NLP researcher and understands the technical issues.
Computers may soon get to the point of convincing virtually anyone that they are 'thinking' even though, like Google Translate, they're really not. One may need more interaction with a computer than typed messages to make a determination, such as having it control a humanoid robot and "act" human. This will fail miserably at first, but it will improve as well; convincing-looking hardware is already available, even though the control procedures aren't yet:
http://www.youtube.com/watch?v=eZlLNVmaPbM
I see the question as less about when a computer will pass and more about how far naysayers will move the goalposts before people stop caring and consider the argument irrelevant: "Am I bigoted against AIs? Why, no, in fact, some of my best friends..."