The Turing test is one of those snippets of technological folklore that, every so often, triggers a flurry of media excitement. Wild claims are made, dubious headlines are run up the flagpole, much fun is had by all.
Partly, no doubt, because it’s beautifully simple to explain. The idea was articulated by the British mathematician and codebreaker Alan Turing, back in 1950. Turing posed the simple question: could a machine think? Being a no-nonsense chap with a distaste for definitional wrangling, he illustrated his question with an objective test. Take one human judge. Have the judge communicate with two subjects. One is a regular-issue human being; the other is a machine. The trick is, the judge doesn’t know which is which. The subjects are in a physically separate location from the judge, and can only communicate via text messages. The judge’s job is to detect the human, by posing any questions they see fit.
Turing proposed that, if the machine could fool a competent judge some significant fraction of the time, say 30% (where a real human would ‘out-human’ another human on average 50% of the time), then that would be a thinking machine. To manage it, the machine would have to be able to ‘do’ anything, intellectually speaking, that an ordinary Homo sapiens could do.
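The pass criterion can be made concrete with a toy simulation. This is a sketch, not anything from Turing’s paper: the function name, trial count and seed are invented for illustration, and each ‘session’ is reduced to a single coin flip with a given probability of fooling the judge.

```python
import random

def run_trials(fool_probability, n_trials=10_000, seed=42):
    """Simulate Turing-test sessions: in each trial, the subject
    fools the judge with probability `fool_probability`.
    Returns the observed fraction of fooled judges."""
    rng = random.Random(seed)
    fooled = sum(rng.random() < fool_probability for _ in range(n_trials))
    return fooled / n_trials

# A machine fooling the judge ~30% of the time meets Turing's
# suggested bar; a human 'subject' hovers around the 50% baseline.
machine_rate = run_trials(0.30)
human_rate = run_trials(0.50)
print(f"machine: {machine_rate:.2f}, human baseline: {human_rate:.2f}")
```

The gap between the two rates is the point: the machine doesn’t need to be indistinguishable from a human on average, only confusable often enough.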
Is it possible for a machine – a computer – to think as a human thinks, in 2014? No. Sorry. It probably won’t be in 2024, either.
So whence all these excited headlines claiming that a computer program has ‘passed the Turing test’?
Well, I dunno. If you could really answer that question, you could say something interesting about how the media works. People want excitement; the news is there to oblige. If you’re interested in excitement and accuracy, sometimes you’re out of luck.
More concretely: there’s a recent history of competitions organised around a fairly loose interpretation of Turing’s protocol. The entrants are chatbots, of the sort everyone’s spent a few minutes playing with at some point. They take input sentences, and use a variety of strategies to try to produce a plausible response, occasionally succeeding. ‘Sentience’ is not one of the strategies. The competition is usually extremely restricted in order to give the programs a fighting chance, eg by keeping conversations short and confined to a particular topic, and even by coaching the judges to ‘play along’ and not ask the sorts of questions that would immediately expose the computer as a computer.
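The flavour of those strategies goes back to ELIZA-style pattern matching, which can be sketched in a few lines. The rules and canned responses below are made up for illustration; real competition entrants are more elaborate, but the principle is the same: surface pattern matching, with no understanding anywhere in the loop.

```python
import re

# Hand-written pattern -> response rules, ELIZA-style.
# Each rule reflects fragments of the user's input back at them.
RULES = [
    (re.compile(r"\bi am (\w+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi (?:like|love) (.+)", re.I),
     "What is it about {0} that appeals to you?"),
    (re.compile(r"\?$"), "Interesting question. What do you think?"),
]
FALLBACK = "Tell me more."

def respond(message: str) -> str:
    """Return the first matching rule's response, else a stock deflection."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am bored"))   # -> "Why do you say you are bored?"
print(respond("hello there"))  # -> "Tell me more."
```

Note the fallback: when no pattern matches, the program deflects rather than answers, which is exactly the kind of evasion the competition formats reward.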
Winning these competitions has little to do with ‘thinking.’ They may not even be a good test of the ability to semi-convincingly fake human speech. The victorious programs often seem to be those that are cleverest at gaming the rules. The winner of the inaugural Loebner prize used ‘whimsical conversation’ as its conversation topic. ‘Eugene Goostman,’ the program that caused the recent stir, claimed to be a 13-year-old non-native English speaker. The trick, apparently, is to have an excuse for being incoherent and unable to answer a straight question.
So what’s the moral here, apart from ‘don’t trust the media’? One message might be that the Turing test is better regarded as a thought experiment. Alan Turing was trying to get at what it might mean to say a computer could ‘think.’ If a computer could make a human believe that it could think, then… well, we would have as much reason to believe that it can think as we have to believe that any human other than ourselves can think. This is a contentious claim, and philosophers have since had a great time contending it, but it remains a credible position.
Turing acknowledged that his proposed test was sufficient, but not necessary. That is, it’s quite possible that a machine might be able to think, but still be unable to pass the test. (One can demonstrate this trivially by imagining a human being unable to pass the test, eg someone who didn’t speak the language the test was conducted in.) It seems quite conceivable that the first real thinking machines, whenever they appear, will also fail the test, will be ‘recognisably non-human.’ After all, human thinking patterns are already an abundant natural resource. Why would anyone go to great labour and expense to create a digital sentience, just to make it think like a human?