Writing information to computers

Early attempts at writing to store or send information, whether via cuneiform, pictograms, or later scripts, were all simplistic. The difficult part, interpretation and information processing, was performed by a magnificently powerful and complex item: the brain. With numbers, various systems were used. For example, we could count up to ten using our fingers, and larger numbers were merely tallied as blocks of ten. For simplicity, adding a few key symbols reduced the number of marks that were required. So Roman numerals used symbols such as I, V, X, L, C, D, and M to define a unit set of 1, 5, 10, 50, 100, 500, and 1,000. The system is quite precise, but it initially lacked a zero, and it is certainly not ideal for multiplication and division. Just try multiplying, say, 497 by 319 using only Roman numerals. Division is worse!
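To make the difficulty concrete, here is a minimal Python sketch (purely illustrative; the to_roman helper and its symbol table are not from the original text). Even when a program prints Roman numerals, the arithmetic itself is carried out on positional values, and the product of the example above needs 158 leading M symbols just to write out:

    # Illustrative sketch: Roman numerals record a value well enough,
    # but arithmetic has to happen in a positional system underneath.
    SYMBOLS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
               (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
               (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

    def to_roman(n: int) -> str:
        """Convert a positive integer to subtractive-form Roman numerals."""
        out = []
        for value, symbol in SYMBOLS:
            while n >= value:
                out.append(symbol)
                n -= value
        return "".join(out)

    a, b = 497, 319
    product = a * b                      # 158543, computed positionally
    print(to_roman(a), to_roman(b))      # CDXCVII CCCXIX
    print(to_roman(product))             # 158 M's followed by DXLIII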

Many other possibilities exist, so instead of using blocks of 10 (from our fingers) we could use just two (e.g. the two hands). This is called binary counting, so 0, 1, 2, 3, etc. become 0, 1, 10, 11, etc. Although this is not as easy for humans to read, it is actually far more practical for electronic computers, as the simpler electronic systems effectively have only two conditions: off or on (i.e. the 0 and 1 states).
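A quick Python sketch (again illustrative, not from the original text) shows the same counting sequence using the language's built-in base-2 conversion:

    # Decimal numbers and their binary equivalents: each binary digit
    # corresponds to one of the two electronic states, off (0) or on (1).
    for n in range(8):
        print(n, "->", format(n, "b"))
    # 0 -> 0, 1 -> 1, 2 -> 10, 3 -> 11, 4 -> 100, ... 7 -> 111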

Since computers lack intrinsic intelligence, we have to give them sets of instructions on how to perform different operations, whether word processing or computation. This is highly tedious but, once instructed, they can perform calculations far faster than we can, so the time spent writing the software is worthwhile. Computer input therefore has to transform our letters and numbers into a code that is a mixture of on and off conditions. The principle is simple, and it applies to many situations beyond computing.
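For instance, a short Python sketch (assuming the modern ASCII convention rather than any particular historical code) turns each character of a message into a fixed pattern of on/off states:

    # Encode text as on/off states: with ASCII, each character
    # becomes a fixed 8-bit pattern of 0s and 1s.
    def to_bits(text: str) -> str:
        return " ".join(format(ord(ch), "08b") for ch in text)

    print(to_bits("Hi!"))   # 01001000 01101001 00100001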

For example, in mechanical weaving looms, the machine has to be told to make, or not make, mechanical movements so as to create a pattern or to change thread colour. Such coded instructions were famously applied by Jacquard of Lyon in France, whose punched-card loom appeared at the start of the nineteenth century. This technological advance speeded up weaving but, as with such advances, it changed the employment and habits of earlier generations, and manual skills were lost in the shift to mass production.

The idea was improved by Herman Hollerith, who patented punched cards for data processing in 1889. Moving forward into the computer era, the approach was seen to be compatible with the new technology, so by the 1950s computer inputs were encoded on punched cards in the style of the Hollerith cards. One wrote all the information that was to be processed as a pattern of holes on one stack of cards, while the software instructions were on another pack. It was slow, but so were the computers, which occupied vast areas with air conditioning to cool the vacuum valves (also called vacuum tubes). Rather more rapid access appeared with the advent of punched paper tape readers. Reading speeds improved, but corrections to typing errors on the tape were tedious. These writing methods were state-of-the-art but doomed to extinction; each became outdated within about 20 years.

Replacements came in the form of magnetic tapes. Advantages included much higher feed speeds into the computer and easy correction of errors; with care, an individual tape might last ten years. Even this is probably a generous estimate, as tapes stretch and become corrupted with frequent use, so key data would be transferred onto new copies (copying errors occur, but they are hopefully minor). Tape technology has advanced in materials, capacity, speed, and processing machines, and it is still used throughout music recording to make master copies. It is also the norm for data storage in particle physics. This seems likely to continue, but the big caveat is that old tapes may no longer be readable, as tape writing formats and sizes, as well as the tape readers themselves, have evolved and changed. This is no different from noting that cars have been around for 100 years, yet early twentieth-century and twenty-first-century models have little in common.

By 1984 (the year, not the George Orwell book), CD writing and data storage had emerged, designed particularly to replace vinyl discs for music. To explain the reasons for this change, it is useful to look at the pattern of music recording. Most people will assume that music was not relevant to the advancement of modern technology, but this is not true. It was actually the desire to record music that was a prime factor in the development of basic electronics. From this objective came microphones, amplifiers, and speakers, which enabled radio broadcasting and all the subsequent expansion of other types of electronics.

 