by stevejalim on 6/8/14, 12:34 PM with 134 comments
by skywhopper on 6/8/14, 2:47 PM
by mratzloff on 6/8/14, 2:32 PM
That the Turing Test is still used is proof that we still don't understand how even to define sapience. Without a definition and concrete, testable qualities, how can we possibly hope to ever build artificial sapience? As a result, we continue to see these toys that are little more than parlor tricks.
Any true test should include looking behind the curtain. "I know you're artificial--I can see the processes working--yet I have doubts that what I'm seeing is real."
In other words, real success is the tester believing he is being fooled when he is not, rather than fooling the tester into believing it is real.
by xpose2000 on 6/8/14, 3:27 PM
I asked him a few questions like where he lived, his name, if he has brothers or sisters, if he wears glasses, etc. Eventually he started asking me questions like what I did for a living and where I lived. He also managed to form questions based on my answers.
It almost felt like a conversation. I can honestly say, I've never thought that before while talking to an AI. So far I am pretty impressed.
by radio4fan on 6/8/14, 1:50 PM
I'm going to take this with a huge pinch of salt until I've read the transcripts, due to the involvement of famous publicity-hound Kevin Warwick.
by eli_gottlieb on 6/8/14, 2:04 PM
by thinkersilver on 6/8/14, 3:04 PM
Transcripts would be handy. I doubt a conversation with a 13-year-old boy is a good way to measure AI. It's not the best metric to have, but it is the most universal and most widely agreed-upon one we have. It seems like we are happier with small gains in mimicry for now, since real intelligence is hard. Really hard.
by fchollet on 6/8/14, 3:39 PM
A researcher claiming to have passed the Turing test instantly discredits himself as a prestidigitator looking for PR buzz. The present article is a textbook example of this.
As a side note, if you are focusing on disembodied, language-based human-like intelligence, then the paradigm you operate in is many decades behind. The Turing test was conceived at a time when the notion of thinking machines had just started to emerge -- a very different time from today, when we have 60 years of AI research behind us. The Turing test has been irrelevant for longer than most AI researchers have been alive. I have never seen it used for anything other than smoke-and-mirrors PR operations.
by danbruc on 6/8/14, 1:43 PM
by Mithaldu on 6/8/14, 1:39 PM
by codeulike on 6/8/14, 4:32 PM
Judge: Did you hear about the Irishman who found a magic lamp? When he rubbed it a genie appeared and granted him three wishes. “I’ll have a pint of Guinness!” the Irishman replied, and immediately it appeared. The Irishman eagerly set to sipping and then gulping, but the level of Guinness in the glass was always magically restored. After a while the genie became impatient. “Well, what about your second wish?” he asked. Replied the Irishman between gulps, “Oh well, I guess I’ll have another one of these.”
CHINESE ROOM: Very funny. No, I hadn’t heard it– but you know I find ethnic jokes in bad taste. I laughed in spite of myself, but really, I think you should find other topics for us to discuss.
J: Fair enough but I told you the joke because I want you to explain it to me.
CR: Boring! You should never explain jokes.
J: Nevertheless, this is my test question. Can you explain to me how and why the joke “works”?
CR: If you insist. You see, it depends on the assumption that the magically refilling glass will go on refilling forever, so the Irishman has all the stout he can ever drink. So he hardly has a reason for wanting a duplicate but he is so stupid (that’s the part I object to) or so besotted by the alcohol that he doesn’t recognize this, and so, unthinkingly endorsing his delight with his first wish come true, he asks for seconds. These background assumptions aren’t true, of course, but just part of the ambient lore of joke-telling, in which we suspend our disbelief in magic and so forth. By the way we could imagine a somewhat labored continuation in which the Irishman turned out to be “right” in his second wish after all, perhaps he’s planning to throw a big party and one glass won’t refill fast enough to satisfy all his thirsty guests (and it’s no use saving it up in advance– we all know how stale stout loses its taste). We tend not to think of such complications which is part of the explanation of why jokes work. Is that enough?
Dennett: "The fact is that any program that could actually hold up its end in the conversation depicted would have to be an extraordinarily supple, sophisticated, and multilayered system, brimming with “world knowledge” and meta-knowledge and meta-meta-knowledge about its own responses, the likely responses of its interlocutor, and much, much more…. Maybe the billions of actions of all those highly structured parts produce genuine understanding in the system after all."
I'm sure they didn't get anywhere near this with their 13-yr-old simulation. But this gives an idea of the heights AI has to scale before it can regularly pass the Turing Test.
by VLM on 6/8/14, 4:56 PM
If a decade or so of social media (whatever that means) has proven anything, it's that very little intelligence occurs in virtually all conversations.
The meta Turing test is being failed by many people who think it (a concrete implementation of it) means something. It's much like how actually building a well-sealed box with a cat, a radioisotope source, and a Geiger counter wouldn't be a "great step forward for Quantum Physics" in 2014, any more than making a little anthropomorphic horned robot and having him divert fast "hot" molecules one direction and slow "cold" molecules another would be a great step forward for thermodynamics in 2014.
The value of a thought experiment is realized when it's proposed, not when someone makes a science fair demonstration of the abstract idea.
by vixin on 6/8/14, 2:00 PM
by DanBC on 6/8/14, 1:49 PM
Surely it depends on who the human judges are. It seems a bit unfair that the judges normally have IQs > 100 and the other (hidden) humans also have IQs > 100.
I strongly suspect that some simplistic AI (alicebots, for example) would beat the Turing test if the human judges had IQ between 90 and 105. (Especially if we're using the limited 30% rule above).
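Here is a toy Monte Carlo sketch of that intuition. The judge-accuracy numbers are invented and purely hypothetical; 30% is the pass threshold cited for this test.

    import random

    def pass_probability(judge_accuracy, n_judges=30, trials=10_000):
        """Toy simulation: how often a bot fools more than 30% of the judges,
        assuming each judge independently spots the bot with probability
        judge_accuracy. All numbers here are hypothetical."""
        passes = 0
        for _ in range(trials):
            fooled = sum(1 for _ in range(n_judges) if random.random() > judge_accuracy)
            if fooled / n_judges > 0.30:
                passes += 1
        return passes / trials

    print(pass_probability(0.85))  # sharp judges: the bot rarely clears the 30% bar
    print(pass_probability(0.60))  # weaker judges: it clears the bar far more often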
Getting bots running on some Facebook groups might be interesting.
by monochr on 6/8/14, 3:05 PM
In short they did nothing.
by ma2rten on 6/9/14, 2:26 AM
There is actually more than one bot that has been claimed to have passed the Turing Test before. Cleverbot is one of them [1]. There are also several competitions, but I believe the most reputable and long-standing one is the Loebner Prize [2]. The bot that currently holds the Loebner Prize is Mitsuku [3].
Anyway, you can chat with Eugene at [4]; I gave it a try. I believe there is one thing the creators of Eugene got right. When chatting with other chat bots, I usually end up in a situation where the bot says something, I ask a follow-up question (like "Why?"), and it gives a generic answer like "Because I say so" or "I don't know". Eugene does the same, but it will ask an unrelated follow-up question right along with the response, so at least there isn't a weird pause in the conversation (roughly the pattern sketched below).
[0] http://en.wikipedia.org/wiki/ELIZA
[1] http://www.geekosystem.com/cleverbot-passes-turing-test/
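For illustration only, a minimal Python sketch of that deflect-and-redirect pattern as I understand it from the outside; the canned answers and follow-up questions are made up, not taken from Eugene's actual implementation.

    import random

    # Invented illustrative data -- not from Eugene Goostman's real implementation.
    GENERIC_ANSWERS = ["I don't know, to be honest.", "Because I say so!", "Hard to tell."]
    FOLLOWUP_QUESTIONS = [
        "By the way, what do you do for a living?",
        "Anyway, where are you from?",
        "Do you like video games, by the way?",
    ]

    def deflect(user_message: str) -> str:
        """Dodge a question the bot can't answer, but keep the conversation moving
        by tacking an unrelated follow-up question onto a canned answer."""
        answer = random.choice(GENERIC_ANSWERS)
        question = random.choice(FOLLOWUP_QUESTIONS)
        return answer + " " + question

    print(deflect("Why?"))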
by scotty79 on 6/8/14, 1:38 PM
by inetsee on 6/8/14, 3:26 PM
There are lots of other domains where I would be entirely happy to know that I was talking to an AI, if the answers I was getting were significantly better than most human experts in that domain.
by jestinjoy1 on 6/9/14, 3:20 AM
So the result can vary depending on different conditions. :) Highly non-deterministic.
by sfbsfbsfb on 6/9/14, 1:34 AM
This is not passing the Turing test by any stretch of the imagination.
by Bayesianblues on 6/9/14, 5:00 PM
Surely the ability to trick a human into believing an AI is a human is a milestone, but it was achieved with an AI specifically optimized for this task. The deeper question is whether passing the Turing Test in this case means we should ascribe consciousness to the bot, and I think none of us are willing to affirm that yet. I would suggest that this discrepancy is caused by the “measure becoming a target” and losing its ability to be a “good measure.” I guess this is why there is such a critical distinction between Artificial Intelligence and Artificial General Intelligence, which is where the Turing Test would carry more weight.
by e12e on 6/8/14, 10:40 PM
(And this is extended from another form of the imitation game, in which the participants are male and female and the goal is to imitate being male.)
Has anyone been able to find any more concrete information (and perhaps some transcripts)? If not, I hope someone will set up a new test and invite "Eugene" to participate.
[1] http://arstechnica.com/information-technology/2014/06/eugene...
[edit: We may be given some hints by the Wikipedia article on the Turing test: https://en.wikipedia.org/wiki/Turing_test#Imitation_Game_vs....
"Huma Shah and Kevin Warwick, who organised the 2008 Loebner Prize at Reading University which staged simultaneous comparison tests (one judge-two hidden interlocutors), showed that knowing/not knowing did not make a significant difference in some judges' determination. Judges were not explicitly told about the nature of the pairs of hidden interlocutors they would interrogate. Judges were able to distinguish human from machine, including when they were faced with control pairs of two humans and two machines embedded among the machine-human set ups. Spelling errors gave away the hidden-humans; machines were identified by 'speed of response' and lengthier utterances." ]
by PythonicAlpha on 6/9/14, 1:33 PM
I would also add that the real proof must include a topic that the artificial person was not programmed for (not like a Bayesian filter that "develops" by "learning" new facts about a fixed topic).
Learning, developing, evolving: those are the real marks of living and of intelligence (since I would not separate intelligence from living).
by draq on 6/8/14, 2:07 PM
by testingit on 6/9/14, 8:48 AM
Speaking is only a way of getting onto the stage. Once in the spotlight you must prove you are a leader or, if you so decide, that you are able to hold your audience's attention in order to emphasize something important that previously was not perceived as such. That is, speaking is an art: it is not about explaining a plot but about creating a story.
by Morendil on 6/8/14, 4:57 PM
by muglug on 6/8/14, 2:38 PM
by alt_f4 on 6/9/14, 9:19 AM
Move on people, it's just a cheap PR stunt.
by ilaksh on 6/8/14, 5:22 PM
But people will continue to dismiss the state of the art and deny that computers have "real" intelligence, the same way they did when a computer defeated Kasparov, the same way they did when we saw Google's self-driving cars, the same way they did when a computer won on Jeopardy, and now with the Turing Test. Even when we have robots that look and act exactly like humans, many people will say that they are not "really" intelligent and dismiss the accomplishment. They will still be saying that when AIs twice as smart as people arrive and they have to figure out what to do with billions of what will then be, relatively speaking, mentally challenged people.
by SilasX on 6/8/14, 5:47 PM
So how long until the creator can pass the Turing Test?
(Normally I'd ignore that, but given the subject matter...)
by dreeves on 6/8/14, 8:53 PM
by grondilu on 6/8/14, 3:13 PM
by netcan on 6/9/14, 10:57 AM
by cwhy on 6/9/14, 1:38 AM
by AndrewKemendo on 6/8/14, 3:44 PM
by raldi on 6/8/14, 1:38 PM
by lotsofmangos on 6/8/14, 11:44 PM
by vonsydov on 6/9/14, 1:20 AM