A philosophical zombie is a thought experiment in the philosophy of consciousness. Suppose you encountered a person indistinguishable in every respect from a normal human being, except that it lacked consciousness. That’s a philosophical zombie: it looks and acts like a person, but “nobody is home.” If Mr. Z were poked with a sharp stick he would say “Ow!” but would not really experience any pain, because he has no experience at all.

Fun. But how do you know that’s not the world we live in? You know your own experience, but for everyone else you merely assume similarity; you don’t really know, because consciousness is private. You have to admit it is logically possible that nobody but you has experience. All the rest of us are zombies!

The thought experiment is important because of what it would imply: if a being physically identical to a person could exist without consciousness, then consciousness must not be a property of the physical world, since the physical facts alone would not account for it.

Materialists object that if a zombie had the ability to speak, it would not be a zombie: to speak as a person does is to speak about experience, and a being with no experience would have nothing to report. Therefore, there is no such thing as consciousness separate from the physical body.

These are some of the deeply puzzling questions behind the Turing Test, proposed by Alan Turing in 1950 as a practical substitute for the question of whether a machine can genuinely think. If a judge held a conversation with a talkative being hidden behind a curtain and, at the end, could not tell whether it was a computer or a person, then the computer would have passed the Turing Test. It would be perverse, Turing argued, for the judge to insist the computer was not intelligent. That would be pure bias.

(Turing’s actual test was more complicated; I have simplified.)

In June of 2014, a chatbot named Eugene Goostman was declared to have passed the Turing Test, in an event organized by the University of Reading in England. Eugene convinced 33% of the judges that they had been talking with a human. That’s not a perfect score, but the organizers had set the pass mark at 30%, so the computer passed.
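To make the pass/fail arithmetic concrete, here is a toy sketch in Python. It is my own illustration, not the organizers’ actual procedure, and the simulated verdicts are placeholders: the point is only that “passing” means fooling a threshold fraction of the judges, not all of them.

```python
import random

def turing_verdict(judge_calls, threshold=0.30):
    """judge_calls holds each judge's guess ('human' or 'machine')
    about the hidden chatbot. The bot passes if it fools at least
    `threshold` of the judges (the 30% bar used at the 2014 event)."""
    fooled = sum(call == "human" for call in judge_calls) / len(judge_calls)
    return fooled, fooled >= threshold

# Simulated run: 30 judges, each fooled with probability 0.33,
# roughly Eugene Goostman's reported rate.
calls = ["human" if random.random() < 0.33 else "machine" for _ in range(30)]
print(turing_verdict(calls))
```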

So did that settle the argument about whether computers could be conscious? Of course not. Critics said the Turing Test was stupid. For one thing, it requires the computer to “fake” being a person, which is not a normal aspect of consciousness; we don’t go around pretending to be anything other than human. The computer’s main job in the test is to deceive the judges, but a person’s main intent in a conversation is not deception, so the whole test, critics say, is biased against the computer.

So a new test was designed, the Winograd Schema Challenge. Instead of a free-flowing conversation with the computer (or person), the judge would present carefully designed questions. Here is a typical pair, identical except for a single word:

  1. City council members refused to give the demonstrators a permit because they feared violence. Who feared violence?
  2. City council members refused to give the demonstrators a permit because they advocated violence. Who advocated violence?

The answers can be scored objectively right or wrong and do not depend on a judge’s opinion. Humans have no problem answering correctly, but computers struggle, because the answer requires common-sense knowledge: the two sentences differ by a single word, yet that word flips the referent of “they.”
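To show what “scored objectively” means in practice, here is a minimal sketch in Python. It is my own illustration; the schema encoding and the resolve interface are assumptions, not the official challenge format. Each schema carries exactly two candidate answers, and a system’s output is simply compared against the correct one.

```python
import random

# Each schema pairs a sentence with a question whose answer hinges on
# resolving an ambiguous pronoun. This encoding is a placeholder, not
# the official challenge format.
SCHEMAS = [
    {
        "sentence": ("City council members refused to give the demonstrators "
                     "a permit because they feared violence."),
        "question": "Who feared violence?",
        "choices": ("the city council members", "the demonstrators"),
        "answer": "the city council members",
    },
    {
        "sentence": ("City council members refused to give the demonstrators "
                     "a permit because they advocated violence."),
        "question": "Who advocated violence?",
        "choices": ("the city council members", "the demonstrators"),
        "answer": "the demonstrators",
    },
]

def score(resolve, schemas):
    """resolve(sentence, question, choices) should return one of the
    choices; the score is simply the fraction it gets right."""
    correct = sum(
        resolve(s["sentence"], s["question"], s["choices"]) == s["answer"]
        for s in schemas
    )
    return correct / len(schemas)

# Baseline: a coin flip. Over many schemas this hovers near 50%.
print(score(lambda sentence, question, choices: random.choice(choices), SCHEMAS))
```

A system that guesses at random lands near 50%, which is why the 90% bar in the actual challenge is so demanding.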

At the first official Winograd Schema Challenge, held in New York on July 12, 2016, no computer won the $25,000 prize. The best program scored only 58%; a score of 90% was required to pass. (Reference: http://commonsensereasoning.org/winograd.html)

In my first psi-fi novel, The Newcomer (unpublished, 2016), the main character, Allen Bland, is an ordinary guy, a happily married software engineer in Seattle. But gradually he discovers he’s an advanced AI android. He’s so convincing that when fellow engineers finally believe his story, they want to capture him and take him apart to see how he works. He flees and tries to find out who made him and why.

Eventually Allen is cornered but makes a deal with his main antagonist. If he can pass a public Turing test, she will leave him alone. He is sure he is a sentient being, every bit as intelligent as any human. She is completely confident that no computer can pass a stringent Turing test. A deal is struck.  There’s a big dramatic scene as the Turing test takes place.

No spoilers here. I hope someday to publish that novel. But I wondered if I should change the test to a Winograd Schema Challenge. It would make the story more up-to-date, and maybe I’ll do it for that reason, but it really wouldn’t make any difference. I could just give Allen a hookup to a vast database of common-sense knowledge like Cyc, and the Winograd Schema Challenge would be neutralized.

The fundamental issue would remain: Is Allen a philosophical zombie? Is there any way to really determine it, or, perhaps, is the question ill-formed?