
Hardware and Software: Their Roles and Characteristics

Computer operation is a joint venture of software and hardware. Any operation performed by software can also be built directly into hardware and, conversely, any instruction executed by hardware can be simulated in software; hardware and software in a computer system are, in fact, logically equivalent. How much of a given operation is handled by hardware and how much by software is therefore a major criterion when designing and organizing a computer system. The decision to put certain functions in hardware and others in software is driven by factors such as speed, reliability, cost, the target user group, how frequently the functions are executed, how often they are expected to be modified or enhanced, and many similar considerations. There is no prescribed boundary between the domain of hardware and that of software, and no hard and fast rule stating that one function must go into hardware while another must be explicitly programmed in software. Designers and architects with different objectives and targets often make different decisions to meet their requirements and reach their desired goals.

Early computers had a clear distinction between hardware and software: frequently performed operations were all executed at the lowest level, in hardware. The emergence of the microprogramming concept (described in Chapter 6), and later its successful implementation, marked a reverse trend. What was previously executed directly in hardware, say an ADD operation, came to be implemented at a relatively higher level, the microprogramming level, in which a microprogram interprets the ADD instruction and carries it out as a series of small steps. This again emphasizes that there is no hard and fast rule governing what must be realised in hardware and what should be kept in software; no such partition line physically exists between the two.
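
As a rough illustration of the idea only (not the microcode of any particular machine), the following Python sketch shows how a single ADD instruction might be interpreted as a sequence of smaller register-transfer micro-steps. The register names (PC, MAR, MDR, IR, ACC) and the memory layout are assumptions made for this sketch.

```python
# Hypothetical register-transfer sketch: one ADD instruction interpreted
# as a series of micro-steps, in the spirit of microprogramming.
# Register and memory contents are illustrative, not any real machine.

memory = {0: ("ADD", 10), 10: 7}   # address 0 holds "ADD 10", address 10 holds operand 7
regs = {"PC": 0, "ACC": 5, "IR": None, "MAR": None, "MDR": None}

def micro_fetch():
    regs["MAR"] = regs["PC"]           # step 1: address of the next instruction
    regs["MDR"] = memory[regs["MAR"]]  # step 2: read the instruction from memory
    regs["IR"] = regs["MDR"]           # step 3: latch it into the instruction register
    regs["PC"] += 1                    # step 4: advance the program counter

def micro_execute_add():
    opcode, addr = regs["IR"]
    regs["MAR"] = addr                 # step 5: address of the operand
    regs["MDR"] = memory[regs["MAR"]]  # step 6: read the operand
    regs["ACC"] += regs["MDR"]         # step 7: add it into the accumulator

micro_fetch()
micro_execute_add()
print(regs["ACC"])   # 5 + 7 = 12
```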

Further technological advances influenced computer design and architecture once again, swinging it back towards the earlier concept: operations are to be executed at the lowest level, in hardware. In fact, the perceived boundary between hardware and software changes constantly and appears largely arbitrary; today's software may be tomorrow's hardware. This is seen in today's embedded systems, where software is effectively embedded in hardware. The level-based view of computer architecture supports the same point, with low-level software sitting very close to the hardware level. The outcome of this continual back-and-forth swing in philosophy was the emergence of two schools with two distinct approaches: the CISC (complex instruction set computer) architecture, built on the microprogramming approach, and the RISC (reduced instruction set computer) architecture, built on the hardware approach.

Evolution of Computers: Salient Milestones

The birth of today's computer was not an isolated or sporadic event. It developed from many sources around the globe and was gradually shaped and refined by inventions, discoveries, and developments in the related areas of electronics and electrical and mechanical engineering. The history of computation dates back as early as 2000 BC. The Greeks and Romans used a calculating device known as the abacus (a Latin word meaning flat surface), on which stones were manipulated to perform relatively sophisticated decimal (base 10) arithmetic with digit-by-digit carry propagation, since each position could store only a single digit. The abacus is believed to have been invented in China in pre-Christian times, sometime around 500 BC, and is regarded as the earliest mechanical computer.

The era of mechanical computers, the zeroth generation, is taken to have begun at the start of the seventeenth century, with numerous remarkable and innovative machines contributed by scientists and technologists around the world over a long period. These machines were built mainly from mechanical components, such as rotating cogwheels and gears; electrical parts were incorporated into them later.

The Generation of Computers: Electronic Era

The sensational invention of the vacuum tube by Lee De Forest in 1906 radically changed the computing world once vacuum tubes came into use in the realization of computer systems, replacing designs built from electromechanical components. Electronic computers have since gone through many inventions and innovations over the past seven decades, driven largely by advances in electronics and related fields that were successfully applied to the design and realization of computers. These progressions are conventionally divided into generations. A new generation is declared only when a sharp breakthrough in the existing hardware and software technology is observed; each new generation introduces new hardware and software technologies while inheriting all the important features of its predecessors.

First-generation computers (1945-1954) are characterised by the use of vacuum tubes and relay memories interconnected with insulated wires; they required heavy air conditioning owing to the enormous heat generated during operation and were very slow. A representative system of this generation was ENIAC (Electronic Numerical Integrator And Calculator), developed in 1946 by John Mauchly and J. Presper Eckert at the Moore School, University of Pennsylvania, with Dr. Von Neumann acting as a consultant to the project. Later, EDVAC (Electronic Discrete Variable Automatic Computer), the first stored-program computer, was built by the Moore School between 1947 and 1950 based on Von Neumann's idea; it became operational in 1951.

Von Neumann Architecture: Stored-Program Concept

Following the success of ENIAC and later of EDVAC with the stored-program concept, Dr. Von Neumann and his colleagues A. W. Burks and H. H. Goldstine published a series of papers between 1946 and 1948 that, for the first time, clearly described the basic architecture and organisation of a general-purpose computer system, including its logic design and programming aspects. The design principle they proposed was so fundamental that it is still considered quite modern in form. The concept they articulated had a lasting and far-reaching influence, mainly because of its clean organisation of the basic resources (CPU, main memory, and I/O devices) and its implementation of the stored-program concept, which together made the computer fully automatic (Burks, A. W., H. H. Goldstine, and J. Von Neumann). Virtually all later computers were designed and developed along these lines, and even modern computers today largely follow the basics of this principle. Figure 1.2 shows the typical architectural design of such a computer, derived from Von Neumann's idea and consisting of the following components:

  • 1. A CPU, comprising an arithmetic-logic unit (ALU) and a centralized program control unit for sequencing and executing the instructions given by the user;
  • 2. A main memory unit, which stores information in the form of instructions and data;
  • 3. An I/O unit, consisting of secondary memory units, an information-feeding unit (input), and a result-receiving unit (output).

One notable feature of this design was the placement of the I/O systems, which were kept outside the core domain of the proposed organisation.

Von Neumann's concept, as just described, envisages a computer design with a single, centrally located memory that stores both program and data. The proposed Von Neumann architecture is sometimes also known as the Princeton architecture.
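
The following is a minimal sketch, in Python, of how these resources fit together under the stored-program concept: both the program and its data live in one shared memory, and the CPU repeatedly fetches, decodes, and executes from it. The tiny instruction set (LOAD, ADD, PRINT, HALT) and the memory layout are invented purely for illustration.

```python
# Minimal stored-program (Von Neumann style) machine: one memory holds
# both instructions and data; the CPU loops fetch -> decode -> execute.
# The instruction set is hypothetical, chosen only to keep the sketch short.

memory = [
    ("LOAD", 4),     # 0: ACC <- memory[4]
    ("ADD", 5),      # 1: ACC <- ACC + memory[5]
    ("PRINT", None), # 2: output ACC
    ("HALT", None),  # 3: stop
    3,               # 4: data
    4,               # 5: data
]

pc, acc = 0, 0
while True:
    op, arg = memory[pc]      # fetch from the single shared memory
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "PRINT":
        print(acc)            # prints 7
    elif op == "HALT":
        break
```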

In contrast, another well-known concept in computer design is the Harvard architecture, which uses separate memories for storing program and data. The program memory and the data memory can even differ in width, type, and so on. A program word and a data word can be fetched simultaneously from these two memories in one cycle, under separate control signals, namely "program memory read" and "data memory read". The Harvard Mark I computer is an example of such an architecture.
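
A hedged sketch of the structural difference only, reworking the example above: with two separate memories, the instruction fetch and the data access no longer compete for a single memory and can conceptually be issued in the same cycle. The memory contents here are again assumed for illustration.

```python
# Harvard-style variant of the previous sketch: program and data live in
# separate memories read under separate (conceptual) control signals,
# so an instruction and its operand can be fetched in the same cycle.

program_memory = [("LOAD", 0), ("ADD", 1), ("HALT", None)]  # instructions only
data_memory = [3, 4]                                        # operands only

pc, acc = 0, 0
while True:
    op, arg = program_memory[pc]                              # "program memory read"
    operand = data_memory[arg] if arg is not None else None   # "data memory read", same cycle
    pc += 1
    if op == "LOAD":
        acc = operand
    elif op == "ADD":
        acc += operand
    elif op == "HALT":
        break
print(acc)   # 7
```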

Dr. Von Neumann and his colleagues then developed a new computer system based on their proposed design (1946-1948), built around the fundamental resources (CPU, memory, and I/O devices) and popularly known as the IAS computer, at the Institute for Advanced Study (IAS), Princeton University.

FIGURE 1.2

Basic design of Von Neumann machine.

Limitations: Von Neumann Bottleneck

In Von Neumann's proposed design, the single, relatively slow memory that stores both program and data and is accessed sequentially by a faster CPU means that a huge amount of time is consumed moving instructions and data between the CPU and main memory and, to a lesser extent, between main memory and the I/O devices. The speed disparity between the faster CPU and the slower main memory leaves the CPU idle for long periods while it waits for data to arrive from memory. Moreover, as technology advances, CPU speed has increased enormously while memory speed has increased only moderately, so the disparity between CPU and memory keeps widening and now causes a notable degradation in overall system performance. In this respect, technological advancement turns out to be a curse rather than a blessing.

In addition, program and data are mixed in a single memory (device) in Von Neumann machines, requiring strictly sequential accesses; they cannot be kept in separate memory areas (or devices), which would otherwise make it possible to access program and data simultaneously, in parallel. This sequential-access approach consumes more time to complete an execution, since instructions and data are normally accessed one after another. Together, these issues create such a critical constraint in the design of standard (Von Neumann concept-based) computers that it is commonly referred to as the Von Neumann bottleneck.
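
A back-of-envelope illustration of the disparity follows; the figures are assumed for the sake of the arithmetic, not measured from any real system. If every instruction needs an instruction fetch plus, on average, one data access from a single shared memory, and those accesses are serialized, memory latency quickly dominates execution time.

```python
# Assumed, illustrative numbers only: a CPU that could finish one
# instruction per 1 ns if memory were instant, against a shared memory
# with 100 ns access time, where each instruction needs one instruction
# fetch plus one data reference from that same memory, one after another.

cpu_cycle_ns = 1               # time the CPU itself needs per instruction (assumed)
memory_access_ns = 100         # time per memory access (assumed)
accesses_per_instruction = 2   # one instruction fetch + one data reference, serialized

time_per_instruction = cpu_cycle_ns + accesses_per_instruction * memory_access_ns
utilization = cpu_cycle_ns / time_per_instruction

print(f"{time_per_instruction} ns per instruction")  # 201 ns
print(f"CPU busy {utilization:.1%} of the time")     # about 0.5%
```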

For more details, see the website: http://routledge.com/9780367255732.

 