Hacker News

Don't forget that being too good can also cause you to fail the Turing test. I would not expect a human to be able to answer some questions that are trivial for a computer. Things like, "What's the square root of 137?" Or, "Identify this obscure song within 5 seconds of hearing it from a random starting point."
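To make the asymmetry concrete: a machine produces the square root of 137 to full double precision in microseconds, where most people could only manage a rough estimate. A minimal sketch:

```python
import math

# Instant and exact (to double precision) for a machine;
# a human would estimate "a bit under 12" or reach for a calculator.
print(round(math.sqrt(137), 4))  # 11.7047
```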


I suppose that depends on the precise setup of the test. Is the subject (if they're a human) allowed access to a calculator or a computer with an internet connection? Even if they were, timing would be an obvious tell, but an AI could easily be programmed (or could learn) to introduce an appropriate delay.
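The delay idea is easy to sketch. Here is a minimal illustration, assuming a made-up timing model (a random "thinking" pause plus a typing speed); the function name and parameters are hypothetical, not from any real system:

```python
import math
import random
import time

def humanlike_answer(question_fn, *args, chars_per_sec=5.0, think_range=(2.0, 6.0)):
    """Answer instantly, then sleep for a plausible human delay.

    The timing model (thinking pause plus typing speed) is an
    invented illustration, not a calibrated model of human behavior.
    """
    answer = str(question_fn(*args))      # the machine knows at once
    think = random.uniform(*think_range)  # pretend to work it out
    typing = len(answer) / chars_per_sec  # pretend to type it out
    time.sleep(think + typing)
    return answer

# e.g. humanlike_answer(math.sqrt, 137) returns "11.704..." after a pause
```

A real attempt would tune the pause to the question's difficulty, which is exactly the kind of tell an interrogator could probe.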


The idea behind the Turing test isn't whether both parties can arrive at the same answer given the same tools. The idea is whether a human can tell the difference. I would expect any human to answer with something like, "I don't know." Or, "Let me find my calculator..." Either answer would be a lie for a computational AI -- it would know the answer and not need a calculator.

This is, I think, one of the failings of the Turing test. It's easy enough for us to make new humans; making a machine that acts exactly like a human seems like a silly endeavor. I want a machine that can assist us and make up for our failings. Which necessarily means we can differentiate it from another human. I vastly prefer that over a machine that has learned to lie to us.


> The idea is whether a human can tell the difference.

I know, that's why I'm saying the precise setup of the test is important. That's why, for instance, it's usually presented as a text messaging setup, because the goal is to test intelligence via language and conversation skills. A "face to face" setup wouldn't make much sense, unless we wanted to test that aspect of robotics.



