
Turing Machines

In the 1930s, Turing investigated the limits of computation using abstract machines. A Turing machine has three components: a table of instructions, a tape divided into squares, and a movable head for printing or erasing symbols in the squares. The table specifies how the head moves, prints, or erases symbols. Turing machines can also switch between states, as indicated below in the instruction table for a machine that adds two positive integers, each symbolized by a corresponding number of dots.
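The instruction table, reconstructed here from the description that follows, reads:

    State   Scanned symbol   Action                  Next state
    0       dot              move right              0
    0       blank            print dot, move right   1
    1       dot              move right              1
    1       blank            move left               2
    2       dot              erase dot               STOP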

If the machine is in state 0 and scans a dot, it moves right. If it is in state 0 and scans a blank, it prints a dot, moves right, and changes to state 1. If it is in state 1 and scans a dot, it moves right. If it is in state 1 and scans a blank, it moves left and changes to state 2. If the machine is in state 2 and scans a dot, it erases it and goes into the STOP state. The following illustrates how a Turing machine with this instruction table adds 2 + 2.
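To make the computation concrete, here is a minimal Python sketch of the machine running on a tape that holds two dots, a blank square, and two more dots. The transition table encodes the rules above; the variable and function names are illustrative, not from the source.

    # Minimal sketch of the dot-addition machine described above.
    # (state, scanned symbol) -> (symbol to write, head move, next state)
    TABLE = {
        (0, '.'): ('.', +1, 0),      # state 0, dot: move right
        (0, ' '): ('.', +1, 1),      # state 0, blank: print dot, move right, enter state 1
        (1, '.'): ('.', +1, 1),      # state 1, dot: move right
        (1, ' '): (' ', -1, 2),      # state 1, blank: move left, enter state 2
        (2, '.'): (' ', 0, 'STOP'),  # state 2, dot: erase it and stop
    }

    def run(tape):
        state, head = 0, 0
        while state != 'STOP':
            write, move, state = TABLE[(state, tape[head])]
            tape[head] = write
            head += move
        return ''.join(tape)

    # 2 + 2: the tape '.. ..' followed by blank squares.
    print(run(list('.. ..  ')).strip())  # '....': four dots, i.e. 4

The machine fills the gap between the two blocks of dots and then erases the final dot, leaving an unbroken run of four dots.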

Turing suggested that for any computable function (any function that can be calculated using an algorithm) there is a Turing machine that can compute it. He also suggested that there is a universal Turing machine capable of simulating any other Turing machine: a master machine that can compute anything that is computable. A human computer and a universal Turing machine share the capacity to perform an open-ended set of computations by following algorithms. Turing contemplated whether there might be nothing more to intelligence than this. Could it be that the neural networks of our brains instantiate universal Turing machines?

Ahead of McCulloch and Pitts, Turing set out to analyze the computational capabilities of the brain and constructed the first neural network model with a “learning” capacity. In his “unorganized B-type model,” each connection passes through a modifier with two input fibers. When a pulse is received on the first, the so-called interchange fiber, the modifier switches to interchange mode, in which it inverts signals: “1” becomes “0” and “0” becomes “1.” When a pulse is received on the other fiber, the training fiber, the modifier switches to interrupt mode, in which it puts out a “1” regardless of the input.
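A small sketch may make the two modes concrete. The following toy Python class mirrors the description above; the class and method names are illustrative rather than Turing’s terminology, and the starting mode is an assumption.

    class ConnectionModifier:
        # Toy model of a B-type connection modifier with two input fibers.

        def __init__(self):
            self.mode = 'interchange'   # assumed starting mode

        def pulse_interchange_fiber(self):
            # A pulse on the interchange fiber switches to interchange mode.
            self.mode = 'interchange'

        def pulse_training_fiber(self):
            # A pulse on the training fiber switches to interrupt mode.
            self.mode = 'interrupt'

        def transmit(self, signal):
            if self.mode == 'interchange':
                return 1 - signal       # invert: '1' becomes '0', '0' becomes '1'
            return 1                    # interrupt mode: output '1' regardless of input

    modifier = ConnectionModifier()
    print(modifier.transmit(1))         # 0: the signal is inverted
    modifier.pulse_training_fiber()
    print(modifier.transmit(0))         # 1: interrupt mode ignores the input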

According to Turing, a sufficiently complex B-type network can be configured to perform any task a universal Turing machine can perform and vice versa.[1] Turing suggests in “Intelligent Machinery” (Copeland 2004, p. 424) that the cortex of an infant is an unorganized machine and that learning is an organizing activity driven by reward and punishment (pleasure and pain). It is through learning, training our neural networks, that our brains mature into universal Turing machines.

Turing pioneered bioinspired approaches to AI with his work on neural networks. However, this work was initially ridiculed and ignored, and only recently has its significance received broader attention (Copeland and Proudfoot 1996).

The field of neural networks instead burgeoned with the work of Frank Rosenblatt (1928-1971) in the late 1950s. However, Rosenblatt faced a tough time when Marvin Minsky (1927-2016) and Seymour Papert (1928-2016) attacked his work (Minsky and Papert 1969) by focusing on a single, relatively unimportant detail. The disheartened Rosenblatt never responded, and his connectionist dreams perished when he drowned in a boating accident in 1971. Without Rosenblatt, AI came to focus on the symbolic approach to artificial intelligence, Minsky’s and Papert’s preferred choice.

The symbolic approach was further strengthened in 1975, when the AI researchers Allen Newell (1927-1992) and Herbert Simon (1916-2001) jointly received the Association for Computing Machinery (ACM) Turing Award and presented a paper outlining the state of the art in AI and its future challenges. Their central thesis was that intelligence is based on symbolic computation (Newell and Simon 1976). Newell and Simon suggested that AI should focus on symbol manipulation at the information-processing level, leaving neurobiology aside.

The goal was to program intelligence: to create minds out of software. This is the thesis of what John Searle (1932-) calls strong artificial intelligence, or strong AI. According to strong AI, the relationship between mind and brain is like the relationship between software and hardware: the human mind is software that happens to be running on neurons. Searle challenged strong AI with an influential thought experiment known as the Chinese room (Searle 1984, pp. 28-41).

  • [1] Turing claims to have a proof for this in his 1948 article “Intelligent Machinery” (Copeland 2004, p. 422).
 