
The autistic computer

I was the shadow of the waxwing slain
By the false azure in the windowpane

What did Vladimir Nabokov see in the first verses of Pale Fire? Was it "weathered wood" or "polished ebony"? As a synesthete, his perception of words, letters and numbers was always tinged with a certain color. Synesthesia, the leaking of one sensation into another, is a relatively rare condition. It has long been known to be more frequent among artists, such as the composer Alexander Scriabin or the painter David Hockney, but it turns out that it might also be frequent among autists. This might even be the reason that some of them have savant syndrome (a phenomenon first popularized by the movie Rain Man).

One of those autistic savants, Daniel Tammet, explains in the video below how he sees the world and how this allows him to carry out extraordinary intellectual tasks.

[Embedded video: Daniel Tammet's talk]

In his talk, Daniel Tammet explains how he performs a multiplication by analogical thinking. Because he sees a pattern in the numbers, he gives the problem another interpretation, another meaning, in which the solution is effortless. This would happen at the level of the semantic representation (i.e. when the brain deciphers the meaning of the stimulus) and not at the level of the stimulus itself. Daniel Tammet's eyes are probably like yours or mine; the difference happens further downstream in the sensory process. However powerful, Tammet's method is far from what a computer would do. For the computer, numbers have no meaning, they are only the input of an algorithm. Because the conscious brain is able to follow algorithms, we often assume that this is also how it works, deep inside. Because the brain has a computer, we sometimes believe it is a computer. But this is forgetting that meaning is a computation.
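
To make the contrast concrete, here is a minimal sketch (in Python, with numbers chosen arbitrarily for the example) of what computation without meaning looks like: grade-school multiplication carried out on digit characters, by fixed rules that never refer to what the digits stand for.

```python
def multiply(a: str, b: str) -> str:
    """Grade-school multiplication on digit strings.
    The machine shuffles symbols by fixed rules; the digits
    carry no meaning for it."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            result[i + j] += int(da) * int(db)
            result[i + j + 1] += result[i + j] // 10  # propagate the carry
            result[i + j] %= 10
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

print(multiply("377", "795"))  # 299715, obtained with no insight whatsoever
```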

In the posts The elements of style and Are you human? I briefly talked about the Turing test, which says that a machine can think if it can express itself in a natural language. Strongly opposed to this idea, the philosopher John Searle proposed the Chinese room thought experiment to show that the Turing test may have nothing to do with thinking, or at least with understanding. The experiment goes as follows: imagine a computer that would pass the Turing test for Chinese by following a program (i.e. an algorithm). The relevance of its answers would be such that native speakers could not tell them from a human's. If the program is written in English, Searle says, he could lock himself up in a room with it and, by carefully following the instructions, also pass the Turing test. However, he does not speak Chinese. It is fair to say that neither does the computer, because it does exactly the same thing as he does.
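
Stripped to its computational core, the rulebook in the room is just symbol matching. The sketch below is a deliberately crude toy (the rules and phrases are invented for the example); it shows how fluent-looking answers can be produced with zero understanding:

```python
# A caricature of Searle's rulebook: pattern -> reply, matched purely
# on the shape of the symbols. The operator (Searle in the room, or
# the CPU) never needs to know what any symbol means.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # invented rule: "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "当然会。",    # the reply even claims fluency...
}

def chinese_room(message: str) -> str:
    # Follow the instructions mechanically; fall back to a stock
    # answer ("Sorry, please repeat") when no rule matches.
    return RULEBOOK.get(message, "对不起，请再说一遍。")

print(chinese_room("你会说中文吗？"))  # fluent output, zero understanding
```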

So, according to John Searle, a machine passing the Turing test can be said to understand Chinese only in one of the following three cases:

  1. A non-Chinese speaker cannot do what the machine does.
  2. A non-Chinese speaker can do what the machine does, but gets different results (and fails the Turing test).
  3. A non-Chinese speaker who does what the machine does becomes a Chinese speaker.

I have to admit that at first I did not find the Chinese room argument convincing. To me, there was no reason to believe that a brain could do something that a computer could not. After all, the brain consists of neurons, which are relatively similar to the transistors used in computers (for instance, the NOMFET is a transistor that mimics the plasticity of neurons). However, the case of autistic savants does suggest that brains can do things that computers cannot. Perhaps this idea is wrong, but if there is any truth in it, we should try to understand these strange ways of solving problems, and use them. Have computers use them.

The key to these strange computations is the ability to switch context: to transform the problem into another one and then back. We could even call this the "Tammet test". There is no doubt that computers will one day pass the Tammet test (and as a consequence they will most likely pass the Turing test without raising the Chinese room objection), but what do we have to change in their design? What is the thing that makes the difference between a computer that understands meaning and one that does not?
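
As a structural analogy (and only that; it is certainly not how Tammet does it), the oldest trick of this kind is the logarithm, which slide rules used to turn multiplication into addition: transform the problem, solve the easy version, transform back.

```python
import math

def multiply_by_analogy(a: float, b: float) -> float:
    # 1. Switch context: map the numbers into log space,
    #    where multiplication becomes mere addition.
    log_sum = math.log(a) + math.log(b)
    # 2. Solve the easy problem (a single addition), then
    #    transform back into the original representation.
    return math.exp(log_sum)

print(multiply_by_analogy(37, 43))  # ~1591, up to rounding error
```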

