
Evolution of Operating System and System Software: Their Roles

The basic concept and definition of the OS, together with the definition of software and its broad classification into two distinct categories, namely application software and system software (including utilities), have been discussed earlier. The role of software in, and its contribution to, creating a versatile computing environment on the available hardware has also been described in brief. The relationship between this software and the underlying hardware through the OS, as well as their relative positions with respect to the hardware, is shown in Figure 1.1. In fact, the form and design of an OS depend largely on, and are determined by, the underlying architecture of the computer system it will ultimately drive. Consequently, there exists an inseparable, one-to-one relationship between the OS and the underlying computer architecture. That is why, as innovative hardware architecture has continuously evolved and advanced, the design and issues of the OS have likewise been repeatedly reviewed, redefined, and redesigned, progressing through gradual evolution, generation after generation, so as to remain matched with the continuously evolving hardware technology. Language processors, assemblers, loaders, database systems, library routines, debugging aids, application-oriented modules, and the like, although not included within the domain of the OS, have progressed equally and kept pace with the continuous advancement of the OS, because they always use the facilities provided by the underlying system (see Figure 1.1). In fact, the performance of the OS sets the stage for the concert of the hardware and associated software that constitute the entire computing environment.

Initially, the first-generation computers had no OS (the zero-generation OS era), but by the early 1950s punched cards had been introduced and the first-generation OS was implemented. There was hardly any system software support compared with today's modern machines. By the mid-1950s, with the more powerful second-generation computer systems, system programs in the form of utilities were developed for the most frequently used functions, to manage files and to control I/O devices. The second-generation OS, known as the batch system or the monitor, was then introduced along with assemblers and compilers (FORTRAN). A job control language (JCL) was developed, which was required to initiate the OS at the time of job submission. The third-generation computers could be considered a major breakthrough because of their relatively advanced architecture and design, built on more powerful emerging hardware technologies and equipped with a more intelligent, befitting third-generation OS. This OS initially offered multiprogramming, which was further upgraded to interactive multiprogramming to enable users to interact with the system through terminals during runtime. It was upgraded once again to a multiuser system, so that multiple users could work simultaneously on their own different jobs on this centrally located single-processor hardware using computer terminals attached to the system. Another variant of this type of OS is the multiaccess system, which allowed simultaneous access to a single program by multiple users (as distinct from a multiuser system). This approach opened an important line of development in the area of OLTP, such as railway reservation and banking systems, where users could efficiently execute queries or updates against a database through hundreds of active terminals under the control of a single program. The inclusion of virtual memory, and subsequently the addition of cache memory, in the hardware once again modified the architecture and design of existing machines, and consequently the forms and designs of the existing OSs were upgraded accordingly to handle the changed situation. Such modified OSs were DOS/VS and OS/VS ("VS" stands for virtual storage), which came from IBM.
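
To make the idea of a batch monitor concrete, the following minimal sketch in Python mimics how such a system takes a queue of submitted jobs and runs them strictly one after another, returning control to the monitor between jobs. It is an illustration only: the job names and job steps are entirely hypothetical, and no historical monitor or JCL-driven system actually looked like this.

    # A toy "batch monitor": jobs are queued and executed strictly one at a time,
    # with no user interaction once the batch has been submitted.
    # (Illustrative only -- job names and steps are hypothetical.)

    from collections import deque

    def compile_payroll():
        print("  compiling payroll program (FORTRAN)")

    def run_payroll():
        print("  running payroll against the master file")

    def print_report():
        print("  printing output on the line printer")

    # Each queued entry names a job and the steps the monitor must run for it.
    job_queue = deque([
        ("PAYROLL",   [compile_payroll, run_payroll, print_report]),
        ("INVENTORY", [print_report]),
    ])

    def monitor(queue):
        """Take jobs off the queue and run each to completion, one by one."""
        while queue:
            name, steps = queue.popleft()
            print(f"JOB {name} started")
            for step in steps:
                step()                    # the monitor sequences each job step
            print(f"JOB {name} ended")    # control returns to the monitor

    if __name__ == "__main__":
        monitor(job_queue)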

Modern Operating Systems

Over the years, the rapid pace of technology has produced a gradual evolution of more advanced hardware organisation and architecture, with key outcomes such as multicomputer systems, multiprocessor machines, sophisticated embedded systems, real-time systems, high-speed communication channels used in network attachments, and larger, faster, and more varied memory and storage devices, all of which have greatly increased machine speed as a whole. In the application domain, the introduction of more intelligent software, multiuser client-server computing, multimedia applications, Internet and Web-based applications, applications using distributed systems, cloud computing, real-time applications, and many others has had an immense impact on the structure and design of evolving OSs. To manage these advanced machines and this increasingly sophisticated environment, OSs have progressed constantly, conceiving new ideas and approaches and formulating and incorporating a number of new design elements to organise the OS afresh, which ultimately culminated in a major change in the existing concept, form, design issues, structure, and nature of the OS. These modern OSs (either entirely new OSs or new releases of existing ones) properly fit and can adequately manage new developments in hardware so as to extract their highest potential, are conducive to new applications, and are fit to negotiate increasingly numerous potential security threats.

That is why, beyond the third generation of OS and the continuous releases of its enhanced versions, any sharp demarcation between generations of OSs is difficult to draw, and in fact there is less general agreement on defining OS generations as such. Consequently, classifying by generation the OSs that drive this constantly changing environment has become less clear and less meaningful. It can be said in summary that scientific, commercial, and special-purpose applications of new developments ultimately resulted in a major change in the OS in the early 1980s, and that the outcomes of these changes are still being worked out. However, some of the notable breakthroughs in approach that facilitated redefining the concept, structure, design, and development of the OS, giving rise to numerous modern OSs that drive the constantly emerging advanced architectures in both scientific and commercial use, are mentioned here.

The interactive single-user PC was initially driven by PC-DOS and then by MS-DOS, which later included Windows as an added platform to incorporate GUIs. Finally, the Windows NT-based OS, a full-fledged stand-alone and more advanced OS, was launched. The introduction of computer networks required a different type of OS to manage the network of individual computers, known as the network operating system (NOS), or sometimes called the network file system (NFS), apart from the OSs (similar or dissimilar) separately installed in the individual member computers to manage them. The client-server (or workstation-server) model uses another widely used variant of the NOS, a more sophisticated and complex OS that manages all the resources as a single entity and extracts their highest potential, known as the multiuser interactive OS, which offered coarse-grained distribution. This approach exhibited tremendous strength with an excellent cost/performance ratio, leading computing practice along a completely different path.
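
As a rough illustration of the client-server (workstation-server) style of interaction just described, the sketch below pairs a tiny TCP server with a client in Python. The address, port number, and echo-style "service" are arbitrary choices for illustration, not a model of any particular NOS or product; the point is simply that a client sends a request across the network and a server process answers it.

    # Minimal client-server sketch: a server process offers a service over the
    # network, and a client sends a request and waits for the reply.
    # (Address, port, and protocol are invented for illustration only.)

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007       # hypothetical address of the "server" machine
    ready = threading.Event()

    def server():
        # Server side: accept one request, serve it, and stop.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            ready.set()                    # signal that the server is listening
            conn, _ = srv.accept()         # wait for a client request
            with conn:
                request = conn.recv(1024)
                conn.sendall(b"reply to: " + request)

    def client():
        # Client side: send a request over the network and wait for the reply.
        ready.wait()                       # do not connect before the server is up
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(b"lookup record 42")
            print(cli.recv(1024).decode())

    if __name__ == "__main__":
        t = threading.Thread(target=server)
        t.start()
        client()       # both sides run on one machine here, purely for the demo
        t.join()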

The practice of using networks of computers following the concept of an open system (a collection of independent computer systems, ranging from expensive supercomputers to cheaper PCs, interconnected by a LAN or WAN and standard interfaces to build a robust distributed computing system) can now be driven by an OS that presents this arrangement to its users as a single coherent system, yet controls the operations of the multiple machines in the framework in a well-integrated manner. This type of OS is known as the distributed operating system; it is a common OS shared by a network of computers (not computer networks) or used to drive a computer system having multiple processors (a NUMA-model multiprocessor). A generic distributed OS is, however, committed to addressing a spectrum of common functionalities: distribution of computations, real distribution and sharing of resources and components, computation speed-up, smooth communication, scalability (incremental growth), fault tolerance, and reliability. A distributed OS, however, differs in form as well as in issues when used in networks of computers (multicomputers) from when it is used in a multiprocessor system. Representative OSs in this line that came from IBM around 1995 onwards are MVS, MVS/XA, MVS/ESA, OS/390, and z/OS. The real-time system is another type of computing system used to handle real-time applications, which follow a largely different type of computing known as real-time computing, processing a huge number of events, in bursts of thousands of interrupts per second, possibly without missing a single event. Such systems require a different type of OS for their management, known as the real-time operating system (RTOS).
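
The defining property of real-time computing mentioned above is that every event must be handled within its deadline. The toy loop below (Python, with made-up deadline and workload figures) merely checks each event against its deadline and flags a miss; a genuine RTOS would have to prevent such misses by design, through scheduling and bounded interrupt latency, rather than merely detect them after the fact.

    # Toy illustration of the real-time notion of a deadline: each event must be
    # serviced within a fixed time budget, and a miss is an error, not a slowdown.
    # (The deadline and the per-event "work" are invented numbers for illustration.)

    import time

    DEADLINE = 0.010          # 10 ms allowed per event (hypothetical budget)

    def handle_event(event_id):
        # Stand-in for the actual event-handling work (e.g. reading a sensor);
        # every fifth event is made artificially slow to force a deadline miss.
        time.sleep(0.002 if event_id % 5 else 0.015)

    def event_loop(num_events):
        misses = 0
        for event_id in range(num_events):
            start = time.monotonic()
            handle_event(event_id)
            elapsed = time.monotonic() - start
            if elapsed > DEADLINE:
                misses += 1           # a real-time system must not let this happen
                print(f"event {event_id}: DEADLINE MISSED ({elapsed * 1000:.1f} ms)")
        print(f"{num_events} events handled, {misses} deadline miss(es)")

    if __name__ == "__main__":
        event_loop(20)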

For more details about modern operating systems, see the website: http://routledge.com/9780367255732.

Genesis of Computer Organisation and Architecture

The design of computer organisation and architecture has evolved over a considerably long period of time. From the regime of mechanical calculators sometime in the seventeenth century, roughly 150 years of development and enhancement passed before the first concept of a general-purpose program-controlled computer was conceived by Charles Babbage in the nineteenth century, though it was not finally implemented until the 1940s. The first major step was the inclusion of electronic technology, discarding the mechanical components of the time; this technology has since passed through generation after generation, and at present the sixth generation prevails. Each generation is identified as a breakthrough in technology from its immediately preceding generation and is distinguished by its major characteristics. It is interesting to note that, despite rapid technological advances both in hardware and in software, the logical structure of computers as proposed by Von Neumann and others in the 1940s has improved rather slowly. Sometimes an innovation in design awaits the arrival of a suitable technology for its implementation. A change in the design and architecture of a computer often demands a high cost in modifying existing application programs, or sometimes even requires developing them afresh. Once the system software, along with its particular underlying computer hardware, becomes popular and widely accepted, users are observed to be very reluctant to switch to other computers requiring radically different software. Market forces also play an important role in determining which design features evolve; large manufacturers, through their dominance in the market, likewise promote certain features.

A brief account of the major distinguishing characteristics of the different generations is summarized in tabular form in Table 1.2, given on the website: http://routledge.com/9780367255732.

Summary

From the very early days, computer organisation and architecture have been developed continuously, always aiming to build a computer system with ever more processing power and sufficient capacity using the technology available at the time, striving to fulfil the constantly growing requirements of the user community as a whole with the highest possible performance at reasonable cost.

Starting from the basic organisation of the modern computer as proposed by John Von Neumann and others, which is still followed, by and large, even in the design of today's large supercomputers, computer organisation and architecture have gone through many innovations and improvements, giving rise to architecturally distinct generations, from the zero generation to the current sixth generation. Each generation is demarcated by a radical change in the use of more advanced emerging hardware technology, driven mainly by the rapid progress of electronic engineering and technology. The emergence of semiconductor technology, followed by IC technology and subsequently the introduction of VLSI technology, has had a profound impact and is considered the main driving force in the formation of computer architecture and design, from the single-chip microprocessor with pipelined and superscalar architecture and the high-capacity RAM chip to the proliferation of large-scale networks of computers, such as computer networks, distributed computing systems, cluster architectures, and special-purpose real-time systems. The relentless improvement in chip fabrication technology has, moreover, supplemented this scenario by eventually giving rise to a spectacular development in CPU architecture: the multicore architecture, which places multiple CPUs (cores) within a single processor chip (the chip multiprocessor). Continuous development and rapid advancement in all areas of hardware and software technology are still in progress, which suggests that major advances in computer design will continue in the days to come.

This chapter has attempted to consider, in summary, many aspects of computer organisation, its architecture, and its operation, including the different types of OSs required to drive the various kinds of underlying architecture. Much of the terminology needed to deal with the subject has been introduced, and an overview of some of the important concepts has been presented. The subsequent chapters will provide a more complete explanation of these terms and concepts, and will put the various parts of this chapter into proper perspective.

Exercises

  • 1.1 Define computer hardware and computer software. Show with examples, in the light of these terms, the differences between a general-purpose and a special-purpose computer. Explain the concept of the dual nature of hardware and software.
  • 1.2 What are the main features observed in Babbage's analytical engine that have a bearing on the design of modern computers?
  • 1.3 Discuss the salient features of the Von Neumann concept in computer design. Show with a diagram the implementation of this concept in a real computer system produced in those days. What are the limitations of the Von Neumann concept (also known as the Von Neumann bottleneck)?
  • 1.4 Discuss briefly Princeton architecture and Harvard architecture.
  • 1.5 State and explain the principles that led to the development of IAS computer. Show with diagrams the different components of IAS computer and explain the steps of its operation.
  • 1.6 What is meant by the term stored-program concept? Harvard-class machines use separate memories for program and data, while Princeton-class machines exploit single memory for both program and data. Discuss the advantages and disadvantages of these two classes. Which class do you consider the most widely used and why?
  • 1.7 Let X = X(1), X(2), ...., X(100) and Y = Y(1), Y(2), ...., Y(100) be two vectors (one-dimensional arrays), each consisting of 100 numbers, that are to be added to form an array Z such that Z(I) = X(I) + Y(I) for I = 1, 2, ...., 100. Using the IAS instruction set, write a program for this problem.
  • 1.8 What is meant by "generation" of a computer? What are the inventions and developments that laid the foundation of respective generations? What are the distinctive main features that categorized different generations of computer?
  • 1.9 What are the primary resources of a computer system? Describe the functions performed by each of the following components of a computer system: CPU, main memory, I/O processor, OS, compiler, and utilities.
  • 1.10 The terms software compatibility and hardware compatibility are commonly used in computer architecture. What do they mean? Discuss their role in the evolution of computers.
  • 1.11 Explain Moore's law. Discuss its significance and implications in making predictions on the technological developments of ICs of the forthcoming days.
  • 1.12 What are the key characteristics that must be present while injecting the family concept in a series of computers or processors?
  • 1.13 "The OS of a third-generation mainframe computer like IBM 360/370 series was considered versatile in comparison with its predecessor, the second generation": state the generic notable features they have included in their design.
  • 1.14 What is meant by LSI technology and VLSI technology? Describe their influence on the design and application of both general-purpose and special-purpose computers.
  • 1.15 Define the concepts of VM and virtual memory, and describe briefly the differences between them.
  • 1.16 Define microprocessor. Why is it so called? "Today's microprocessors are immensely powerful". Discuss the main features that made them so.
  • 1.17 Briefly describe the main architectural features with examples that distinguish between microcomputer, minicomputer, and mainframe computer.
  • 1.18 "The demarcation line being drawn in the past between mainframe (large), mini-, and microcomputer systems is much less valid today": justify the statement.
  • 1.19 Define the embedded systems. Discuss the importance of this system both from the angle of computer designers and from the users' point of view.
  • 1.20 What are the key distinguishing features of a supercomputer and those of a large mainframe system?
  • 1.21 What is meant by the multicore concept used in the present day's microprocessor technology? Enunciate the distinguishing features present in the products developed on the basis of this concept.
  • 1.22 Define the real-time application. How does it differ from a non-real-time one?
  • 1.23 Define the real-time computing. State the features that make it different from the conventional computing.
  • 1.24 "A real-time system is said to be entirely different from the conventional multitasking/multiuser system": justify the statement in the light of defining a real-time system.
  • 1.25 State the major design issues that are encompassed in the real-time system development.
  • 1.26 "OS is meant to be often closely coupled with the underlying architecture of the computer system": justify the statement briefly describing the role played by the OS in this regard.
  • 1.27 Discuss the salient and distinctive features that are possessed by a generic third- generation OS.
  • 1.28 Multiprogramming is found in various forms. What are those mainly? How do they differ from one another?
  • 1.29 Discuss, in the light of computer architectures and related OSs, the differences that exist between network of computers and computer networks.
  • 1.30 Define the distributed OS. How does it differ from the NOS?
  • 1.31 What are the main objectives that must be fulfilled by a distributed OS? Name some application areas that are conducive to the distributed OS.
  • 1.32 An RTOS is something different from its counterpart, the conventional multitasking OS: in which ways does it differ?
  • 1.33 What is the basic design philosophy followed when the policy and mechanism of a generic RTOS are framed?
  • 1.34 Explain why you consider that the RTOS is gaining more and more importance with each passing day.

Suggested References and Websites

Alpert, D. and Avnon, D. "Architecture of the Pentium microprocessor." IEEE Micro, vol. 13, no. 3, pp. 11-21, June 1993.

Augarten, S. Bit by Bit: An Illustrated History of Computers. New York: Ticknor and Fields, 1984.

Blaauw, G. and Brooks, F. Computer Architecture: Concepts and Evolution. Reading, MA: Addison-Wesley, 1997.

Brey, Barry B. The Intel Microprocessors, 8th ed. Pearson Education Inc., 2009.

Farrell, J. J. "The advancing technology of Motorola's microprocessors and microcomputers." IEEE Micro, vol. 4, pp. 55-63, October 1984.

Henning, J. "SPEC CPU2006 Benchmark Descriptions." Computer Architecture News, September 2006.

Prasad, N. S. IBM Mainframes: Architecture and Designs. New York: McGraw-Hill, 1989.

Schaller, R. "Moore's law: Past, present, and future." IEEE Spectrum, vol. 34, no. 6, pp. 52-59, June 1997.

Charles Babbage Institute: Provides the history of computers and links to different sources of information.

Top 500 Supercomputer site: Provides architecture and organisation of current supercomputer products.

Intel website: intel.com

 