
Information Processing

We typically think of information as being linked to meaning. In contrast, Bell Labs engineer Claude Shannon (1916-2001) suggested, in 1948, a way of understanding information without meaning (Shannon 1948). Shannon worked on telecommunications systems and noted that the problem of transmitting a message can be separated from the problem of understanding its meaning. For many communication tasks, several different channels can be used: voice communication may travel over wireless connections, copper wire, coaxial cables, fiber optics, or satellite links. What matters to engineers is the signal processing implemented over these media, not the semantics of the message.

Shannon’s semantics-free approach provides an overall framework for such information processing. Let us take a closer look. Some messages can be more sparsely encoded than others. Why? Shannon thought about questions such as this after taking a leap of imagination—one that takes us into physics. He adapted the notion of entropy—the physicist’s measure of a system’s disorder—for his own purpose of redefining the notion of information. Shannon took information to be a measure of entropy, and entropy to be a measure of randomness. This matters for transmitting messages: if a message is truly random, every symbol is a surprise, and we need more bits to encode it than if it has order we can exploit.
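To make the bit-counting concrete, here is a minimal Python sketch (an illustration of the idea, not anything from Shannon’s paper; the function name is mine) that computes the entropy of a message from its symbol frequencies. A message consisting of one repeated symbol needs zero bits per symbol, while a message spread evenly over four symbols needs two.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Entropy in bits per symbol: H = sum(p * log2(1/p)) over symbol frequencies."""
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# A perfectly ordered message: one symbol, no surprise, zero bits per symbol.
print(shannon_entropy("aaaaaaaaaaaaaaaa"))  # 0.0

# Four symbols used equally often: two bits per symbol are needed.
print(shannon_entropy("abcdabdcacbdbadc"))  # 2.0
```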

To understand this, think of a passport photo. The photo has lots of information: under a magnifying glass you will see a scatter of colored dots that admits no simple, systematic description. Now think of a color drawing of the same size, featuring the Italian flag. It has less information, because it is made up of a simple pattern. You could communicate the whole thing by saying: first there is a green stripe, then a white stripe, and lastly a red stripe; all of the stripes are vertical and of equal size, and together they form an image measuring 5 × 5 cm. You cannot summarize the passport photo in such a simple manner. The photo, in Shannon’s parlance, has higher entropy and therefore more information.
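The flag example can be checked with any general-purpose compressor. The sketch below (my own illustration; the dimensions, color values, and expected sizes are assumptions for demonstration, not figures from the text) builds a striped flag-like image and an equally large block of random bytes, then compares how small zlib can make each.

```python
import os
import zlib

WIDTH, HEIGHT = 300, 300  # image size in pixels, 3 bytes (RGB) per pixel

# Flag-like image: three uniform vertical stripes (green, white, red).
STRIPES = [(0, 146, 70), (255, 255, 255), (206, 43, 55)]
flag = bytearray()
for _ in range(HEIGHT):
    for x in range(WIDTH):
        flag.extend(STRIPES[x * 3 // WIDTH])

# Stand-in for the passport photo: random bytes of the same size.
noise = os.urandom(len(flag))

print(len(flag), "raw bytes in each image")            # 270000
print(len(zlib.compress(bytes(flag))), "flag bytes")   # a few hundred bytes
print(len(zlib.compress(noise)), "noise bytes")        # about 270,000: no real shrinkage
```

The striped image collapses to a description about as short as the three-sentence summary above, while the random bytes do not shrink at all; the smallest program that regenerates the flag is tiny, which anticipates the point about minimal descriptions below.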

Shannon information is proportional to the minimal size of a complete description of a system (any structure open to systematic analysis), whatever that system is—an Italian flag, a photograph, human DNA, a pile of rocks, or a conversation over the phone. In other words, the amount of information in a pattern is proportional to the size of the smallest algorithm capable of generating that pattern.

Shannon thus gave us the idea of a syntax, rather than semantics, of information. His work also suggested there could be thought without a thinker. To understand how, we need to go back further in time. Shannon built his early career on the work of British mathematician and logician George Boole (1815-1864). In 1854, Boole published An Investigation of the Laws of Thought, where he sought to capture human reasoning in a formal theory (Boole 1951). In the early nineteenth century, Aristotelian logic was still “the logic,” and it was commonly taken by intellectuals to be the sole vehicle of serious thinking. However, Boole saw the Aristotelian laws of thought as incomplete.

Boole aimed to create a fundamental science of the mind, based on mathematical logic. Human thought, he wrote, “traced to its ultimate elements, reveals itself in mathematical forms,” and “the ultimate laws of logic are mathematical.” Aristotle had not seen such mathematical foundations. Moreover, Boole demonstrated that Aristotelian logic could be reduced to his new logic, and that the new logic was more powerful and versatile.

Interestingly, Boole attempted neither to situate the mind within a larger metaphysical view, such as idealism or realism, nor to prove that his account of the mind is right. Being a mathematician, he came to understand human reason as an ideal mathematical capacity. Whether or not we accept his account of the reasoning mind is, according to him, a matter of mathematical intuition. When we have read and understood The Laws of Thought, it should be clear to us through mathematical intuition that its laws are valid, like other laws of mathematics.

Boole, like Shannon, avoids semantics and discusses the mind in terms of formal symbols, rules, and operations. The reasoning mathematical mind is mechanical, without insight, initiative, or creativity; it follows formal laws and has no freedom. Boole also acknowledges the nonmathematical capacities of aesthetics, action, morality, sentiment, and emotion, but he provides no account of them.

We are to understand the reasoning mind as crucially depending on three operations—AND, OR, NOT—and the values TRUE and FALSE, represented by 1 and 0, respectively. Shannon saw the intriguing fit between circuitry and Boolean algebra. He showed, in his 1937 master’s thesis—A Symbolic Analysis of Relay and Switching Circuits (Shannon 1937)—how electric circuits can implement the same laws of thought that Boole had proposed for the mind. Shannon also demonstrated how to systematically design, and then optimize, complex Boolean logic circuits.
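As a taste of what Shannon’s reduction looks like, here is a small sketch (my own illustration, not Shannon’s relay notation) that composes XOR and a one-bit half-adder from nothing but AND, OR, and NOT, the way a switching circuit composes relays.

```python
# Boole's three operations over the truth values 1 (TRUE) and 0 (FALSE).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# XOR built purely from the three primitives: (a AND NOT b) OR (NOT a AND b).
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# A half-adder: one-bit binary addition expressed as a Boolean circuit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)  # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Chained together, such half-adders add whole binary numbers: circuits that calculate, built from nothing but Boole’s laws of thought.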

If Boole had found the laws of reasoning, then circuits could now potentially think. Many thought Shannon’s work showed there could be intelligent machines, and Shannon came to pioneer AI. Soon researchers also started to think about the human brain as having complex Boolean circuitry. There seemed to be no borders between brains and electronics.

 