
Sixth-Generation Systems (1991–Present): The ULSI Era

By 1990, it became possible using VLSI technology to fabricate a CPU and main memory, or even all the electronic circuits of a computer, on a single low-cost IC that could be mass-produced. By the mid-1990s, manufacturers could fabricate the entire CPU of a System/360-class large computer, along with part of its main memory, on a single IC. Today's CPUs are mostly microprocessors, meaning that their physical implementation is a single VLSI/ULSI chip, which can be used in machines of a new class ranging from portable PCs to supercomputers containing thousands of such CPUs. The sixth generation began in 1991 and continues to date.

Computer Networks: Client-Server Model

The immense success of IBM's Virtual Machine 370 (VM/370) planted a seminal design concept: interconnecting a collection of autonomous computer systems capable of communicating and cooperating with one another by way of their hardware and software tools (communication links and protocols). This arrangement is popularly known as a computer network, a typical variant of a class called multicomputers. Each such autonomous machine, usually called a host, runs under its own OS, is capable of handling its own users executing local applications, and also contributes computational resources to the network.

When these small intelligent machines/workstations (clients) are interconnected among themselves, and also with a comparatively larger and more powerful time-sharing machine (server), using relatively high-speed connections (LAN or WAN) in a network environment, they form a low-cost cluster called the client-server model. This arrangement enables each member system (workstation/client) to perform its own jobs individually on its own computer, to share and enjoy the hardware and software resources available in the entire arrangement, and leaves the main computer (server) to be shared simultaneously by a large number of other users. The outcome proved remarkable in the design of computer hardware configurations and eventually led computing practice down a completely different path. A computer network is sometimes loosely called a distributed computer system, since it carries a flavour of a true distributed computing system whose hardware is composed of loosely bound multiple processors (not a multiprocessor).
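The client-server arrangement described above can be sketched in a few lines of code. The following toy example (the host, port, and echo-style protocol are illustrative assumptions, not anything from the text) runs a minimal "server" in a background thread and lets a "client" submit a request to it over a local socket:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # loopback address and arbitrary port (assumed)
ready = threading.Event()

def serve_one_client() -> None:
    """Minimal server: accept a single client connection and echo its request."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                       # signal that the server is listening
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"done: " + request)   # the server does the shared work

# The server (the larger shared machine) runs in the background ...
server = threading.Thread(target=serve_one_client, daemon=True)
server.start()
ready.wait()

# ... while each workstation (client) submits its own job over the network.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"compile project")
    reply = cli.recv(1024)

server.join()
print(reply.decode())   # -> done: compile project
```

In a real network the server address would name a remote host rather than the loopback interface, and many clients would connect concurrently; the division of labour, however, is exactly the one described above.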

For more details about computer networks, see the website: http://routledge.com/9780367255732.

Distributed Systems: Multicomputers

Advancements in computer technology continued, notably in two main areas in parallel: the technology that produced low-cost small computers with the computing power of a decent-sized mainframe, and the technology that upgraded networking, with the introduction of ATM (asynchronous transfer mode), in both LAN and WAN environments. Combined, these two emerging technologies made it possible to easily connect many powerful computing systems, comprising a large number of CPUs, by high-speed network interconnections. This framework is commonly called a distributed computing system, another typical variant of the class sometimes called multicomputers. Nowadays, even large systems with their own networks are used as sites replacing the relatively smaller nodes; using the same communication methodology, these form massively distributed computing systems. This is totally different from traditional centralized systems, which consisted of a single CPU, its memory, peripherals, and some terminals. Distributed computing systems that use some form of distributed OS, a radically dissimilar kind of software, are commonly referred to by the term true distributed system, or simply distributed system. Distributed systems are explained in detail in Chapter 10.

For more details about multicomputers, see the website: http://routledge.com/9780367255732.

Large Systems: Multiprocessors

With the advent of more powerful VLSI technology, powerful one-chip microprocessors and larger-capacity RAM at reasonable cost emerged in the mid-1980s, radically changing the traditional definition and architecture of the multiprocessor (a system of tightly connected multiple CPUs usually equipped with a single shared virtual address space). Large-scale multiprocessor architectures then started to emerge with multiple memories distributed among the processors. Here, each CPU can access its own local memory quickly, while access to memories attached to other CPUs is possible but relatively slower. These physically separated memories can therefore be addressed as one logically shared address space, meaning that any memory location can be addressed by any processor, provided it has the correct access rights. This does not discard the fundamental shared-memory concept of the multiprocessor but rather broadens it. Such machines are historically called distributed shared memory (DSM) systems, or the scalable shared memory architecture model, using NUMA (non-uniform memory access). DSM architecture can be considered a loosely coupled multiprocessor, sometimes also referred to as a distributed computing system, in contrast to its counterpart, the shared-memory UMA (uniform memory access) multiprocessor, which is considered tightly coupled and is often called a parallel processing system.
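The NUMA idea, one logically shared address space whose access latency depends on where the addressed word physically resides, can be illustrated with a toy model. The node sizes and latency figures below are illustrative assumptions, not measurements from the text:

```python
# Toy NUMA model: every CPU may address every memory word, but a word that
# resides in another node's physical memory costs more to reach.
# All numbers are assumed for illustration only.

LOCAL_COST, REMOTE_COST = 1, 4     # assumed latency units; remote is slower
NUM_CPUS, WORDS_PER_NODE = 4, 1024

def home_node(addr: int) -> int:
    """Each address is 'homed' on exactly one node's physical memory."""
    return addr // WORDS_PER_NODE

def access_cost(cpu: int, addr: int) -> int:
    """Any CPU can address any location; only the latency differs (NUMA)."""
    return LOCAL_COST if home_node(addr) == cpu else REMOTE_COST

print(access_cost(0, 10))     # -> 1  (address homed on CPU 0's local memory)
print(access_cost(0, 2048))   # -> 4  (address homed on node 2: remote access)
```

A UMA machine, by contrast, would return the same cost for every (cpu, addr) pair, which is precisely why its shared-memory bandwidth limits scalability.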

Usually, a parallel processing system consists of a small number of processors that can be employed effectively and efficiently, but it is constrained by the bandwidth of the shared memory, resulting in limited scalability. It tends to be used to work on a single program (or problem) to achieve maximum speed-up. A distributed computing system (loosely coupled multicomputer) can contain, in theory, any number of interconnected processors and hence is more scalable. It is primarily designed to allow many users to work together on many unrelated problems, but occasionally in a cooperative manner that involves sharing of resources. Unlike a multiprocessor, the processors of a multicomputer can be located far from each other in order to cover a wider geographical area. A multiprocessor, being more tightly coupled than a multicomputer, can exchange data nearly at memory speeds, but some fibre-optic-based multicomputers of today are also found to work very close to memory speeds. Therefore, although the terms tightly coupled and loosely coupled convey useful concepts, any distinct demarcation between them is difficult to maintain, because the design spectrum is really a continuum.

In summary, the large systems of this generation, with ULSI/VHSIC processors, were able to carry out massively parallel processing (MPP) at the innermost level using the available silicon VLSI and GaAs technologies along with optical technologies. The minimum target of this generation was to attain a speed of one teraflop (10^12 floating-point operations per second). Scalable architectures were introduced to perform heterogeneous processing, solving large-scale problems involving voluminous databases of diverse nature by exploiting the services of a network of heterogeneous computers with shared virtual memories. The diverse spectrum of elegant architectural evolutions built with these ULSI/VHSIC chips has noteworthy characteristics, a few of which are as follows:


  • Scalable and latency-tolerant architectures;
  • Multiprocessors with shared-access memory, such as UMA;
  • Cache-only memory architectures (COMA);
  • Multicomputers using distributed memories with multiple address spaces;
  • Computers that emphasize massive data parallelism.

These are only a few, apart from many others that had been achieved by this time. All these machines, however, have already reached the targeted teraflop (10^12 FLOPS) performance.

Machines of this generation are also used to solve real-life applications, including computer-aided design of VLSI circuits, large-scale database management systems, artificial intelligence, weather-forecast modelling, ballistic missile control, oceanography, pattern recognition, and crime control, to name just a few. Both scalable computer architectures and parallel processing computer architectures are, however, expanding steadily to negotiate the forthcoming grand challenges of computing in a better way.

Representative MPP systems of this generation include the Fujitsu VPP 500, the Cray Research MPP, the NEC SX series, the Hitachi S-810/20 and SR 8000, the Thinking Machines Corporation CMs, the CONVEX C3800, the IBM 390 VF, and, last but not least, the gigantic IBM z-series.

For more details about the large systems, see the website: http://routledge.com/9780367255732.

Supercomputers

A supercomputer, at the time of its introduction in the 1960s, was defined simply in terms of calculation speed and processing capability. Today's supercomputers offer extensive vector processing and data parallelism that differ from the usual parallel processors, and can be broadly classified as pipelined vector machines, having a few powerful processors equipped with adequate vector hardware, and SIMD machines (see Chapter 10), having a large number of simple processing elements (array processors) that put more thrust on massive data parallelism. Supercomputers are mainly used for highly computation-intensive tasks such as weather forecasting, climate research (including research into global warming), oceanography (to predict any unforeseen unnatural event), molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, crystals, etc.), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and many other similar problems. Military and scientific R&D agencies are among the main heavy users. Although many companies introduced their own supercomputers or near-supercomputers, a few of the notable ones of today are as follows:

  • The Intel ASCI Red/9632, launched sometime in 1999;
  • The NEC Earth Simulator, introduced in 2002, the fastest supercomputer until the beginning of 2004;
  • The Intel supercomputer systems (the Paragon);
  • The IBM Blue Gene/L, the largest supercomputer of today.

For more details about the supercomputers, see the website: http://routledge.com/9780367255732.

Real-Time Systems

The availability of numerous low-cost, sophisticated modern components built with more advanced hardware technology eventually facilitated many emerging application areas, including a specific type of application now extensively used in a popular specialized domain: real-time application systems. Examples of such applications include embedded applications (household appliance controllers, mobile telephones, programmable thermostats), real-time databases, multimedia systems, signal processing, robotics, air traffic control, process control systems, telecommunications, industrial control (e.g. SCADA), radar systems, missile guidance, and many similar ones. These applications require different types of actions, mostly following different approaches, commonly known as real-time computing, which is becoming an increasingly important discipline.

A real-time application can thus be defined as a program that should respond to activities in an external system within a maximum duration of time specified by that external system. If the application takes too much time to respond to, or to complete, the needed activity, a failure can occur in the external system. Such applications thus require a timely response, with a response time smaller than the response requirement of the system. Hence, this kind of application is usually executed with a different approach, known as real-time computing, which may be defined as the type of computing in which the correctness of the system depends not only on the logical result of the computation but also on the time at which the results are generated (the response requirement). A real-time system is thus entirely different from a traditional multitasking/multiuser system. It is often tightly coupled to its environment, with real-time constraints imposed by that environment. A real-time system can be roughly classified into three broad categories:

  • Hard real-time systems (e.g. aircraft control);
  • Firm real-time systems (e.g. banking);
  • Soft real-time systems (e.g. video on demand).
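The defining property, that a result is correct only if it also arrives on time, can be sketched as follows. The 50 ms deadline and the sample task are illustrative assumptions; a hard real-time system would treat a missed deadline as a failure, while a soft one merely loses value:

```python
import time

DEADLINE_S = 0.05   # assumed response requirement: 50 milliseconds

def run_with_deadline(task, deadline_s: float = DEADLINE_S):
    """Run a task and report whether its result arrived within the deadline.

    In real-time computing the pair (result, on_time) is what matters: a
    logically correct result delivered late may be worthless (soft real-time)
    or catastrophic (hard real-time).
    """
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    return result, elapsed <= deadline_s

# A sample task that finishes comfortably within the deadline.
result, on_time = run_with_deadline(lambda: sum(range(1000)))
print(result, on_time)   # -> 499500 True
```

A real RTOS enforces such constraints with priority-driven scheduling and bounded interrupt latencies rather than with after-the-fact checks, but the notion of a response requirement is the same.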

Since a real-time system, both hardware-wise and software-wise, is totally different from a traditional general-purpose computing system, its design encompasses several distinct issues, mainly architectural issues, resource-management issues, and software-related issues. In addition, it must ensure predictability in instruction execution time, speedy memory access, fast context switching, and prompt interrupt handling. Such an architecture usually avoids caches and superscalar features, but provides enough support for fast and reliable communication. Thus the basic principles of real-time systems inevitably span numerous dimensions, depending on the specific objectives to be achieved. With few exceptions, several techniques are available to implement each of these basic principles. One should always use the specific techniques and related mechanisms in the design of a particular RTOS to ensure that only those policies which meet the underlying real-time requirements are ultimately implemented.

For more details about the real-time systems, see the website: http://routledge.com/9780367255732.

Yesterday's and Today's Microprocessors

Following the introduction of the 80486DX4 in the early 1990s, Intel launched a downward-compatible 80586 in 1993, labelled P5. Intel then decided not to use a number with its products any more, to avoid the legal problems of protecting a number as a product name. This product is historically known as the Pentium, the generic name of a family of microprocessors. Side by side, Motorola introduced its downward-compatible 68040 processor of the 68000 series.

Starting with two introductory versions, the Intel Pentium family of microprocessors came out in continuous succession with many upgraded and newer versions, using constantly improved and more innovative designs supported by many salient features. The newer versions of the Pentium, such as the Pentium Pro, Pentium II, Xeon, and finally the Pentium III, were released in succession from 1995 up to 2000, with processor speeds increasing up to 1 GHz and providing multimedia extensions (MMX instructions). All these processors were especially optimized for 32-bit code, with the specific target of equipping them for the server market, and thus were often bundled with Windows NT rather than with a normal version like Windows 98. The Motorola 68040 maintained downward compatibility with all its existing predecessors and was enriched with many additional significant features that ultimately lifted this processor to a much higher level than all its compatible contenders. The Motorola 68060, the last member of the MC 68000 family, was launched in the mid-1990s with many organisational upgrades, mainly a basic four-stage (with an additional two-stage) pipelined superscalar processor of degree 3, and new fabrication features, mostly to address the constantly growing embedded-system market. The AMD processors (like the Athlon) also used the IA-32 architecture (introduced by Intel) over a considerable period in products for advanced PCs as well as workstations. These processors eventually attained a performance level nearly comparable to the earlier versions of the Intel Pentium 4.

Motorola, a strong competitor of Intel in this line, holding a share of nearly 50% of the microprocessor market over many years with its marvellous products (the MC 68000 series), finally decided to shift away from its existing business line and to put more thrust on a newer and most promising area known as mobile communication. As a result, Motorola sold its microprocessor division to a company now called Freescale Semiconductors, Inc. Intel, the remaining giant of the earlier days, continues to exert ever more effort and today captures and almost monopolizes the entire desktop and notebook market.

For more details about the Pentium series and Motorola processors, see the website: http://routledge.com/9780367255732.

Multicore Concept: Performance Improvement

Performance gains can be achieved mainly by increasing processor speed and clock rate through higher logic density on the processor chip. But the basic physical limits that current technology allows for clock speed and IC density (keeping the generated heat manageable) have by now been approached very closely, leaving very little scope to go further. Side by side, the existing computer design is constantly reviewed and redefined to organise the basic resources (processor, memory, peripherals), which run at widely different speeds, into a reasonable balance that keeps overall performance improving. Still, the basic design with the fundamental resources of even today's versatile computers remains virtually the same as that of the early IAS computer of some 60 years ago. Thus, hardly any revolutionary change can happen at present in computer design and organisation relative to its existing form.

It has been observed that processor power raced ahead relentlessly at breakneck speed for about 15 years from the late 1980s, yet this tremendous raw speed could never be fully exploited to yield the expected performance. Two noteworthy design approaches in the area of instruction execution logic, namely pipelining and superscalar architecture (see Chapter 8), evolved to exploit the potential of the processor and were built into processor designs. These two approaches have since been continuously enhanced for more performance gain, but they have already reached nearly their limits and are now approaching a point of diminishing returns. Any further appreciable enhancement in this direction seems hardly possible; at best, relatively modest progress can be made. Another important strategy that increases performance beyond what raising the clock speed alone can achieve is the increase in cache capacity and usage, which is essentially an organisational enhancement. As chip density has constantly increased, more room has become available within the chip, now dedicated to larger and faster on-chip caches arranged in multiple on-chip levels (now three). Further significant improvement in this area also seems unlikely.

It is now evident that the benefits obtainable from all these approaches, each contributing its share of performance improvement, have already reached almost their respective limits, and the clock speed cannot be raised much higher. This has ultimately turned designers toward a fundamentally new approach by which overall performance can be further enhanced within the present constraints. The principal idea behind this new approach is to build multiple processors on the same chip, referred to as multiple cores, or multicore, which provide higher potential for further performance without increasing the clock rate as such (Gibbs, W.). Thus, the concept of building multiple cores within a processor chip, with larger on-chip caches (to resolve the critical power issue in the chip), has eventually been recognized as one of the best acceptable solutions within the current technical scenario for obtaining even faster microprocessors.
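The multicore idea, raising throughput by adding processors on one chip rather than raising the clock rate, is what parallel programming interfaces expose to software. As a sketch (the chunking scheme and the work function are illustrative assumptions), a computation can be split across all available cores with Python's multiprocessing module:

```python
import os
from multiprocessing import Pool

def partial_sum(bounds):
    """Work assigned to one core: sum the integers in [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    N = 1_000_000
    cores = os.cpu_count() or 1            # one worker per available core
    step = N // cores
    # Split [0, N) into one contiguous chunk per core.
    chunks = [(i * step, N if i == cores - 1 else (i + 1) * step)
              for i in range(cores)]
    with Pool(processes=cores) as pool:    # each worker may run on its own core
        total = sum(pool.map(partial_sum, chunks))
    print(total == N * (N - 1) // 2)       # -> True
```

The program runs at the same clock rate on one core or many; the gain comes solely from the cores working on their chunks in parallel, which is exactly the trade the multicore approach makes.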

The IA-32 architecture continued with the first version of the Pentium 4 processor (based on the NetBurst microarchitecture), released in late 2000, with speeds eventually reaching 3.2 GHz and more. Its later versions, called the Pentium D (dual core) and subsequently the Core 2, were available at speeds of up to 3.2 GHz using 0.045-micron (45-nm) fabrication technology. Subsequently, the Pentium 4E and later the Pentium 4 Extreme Edition were introduced. Sometime in 2001, Intel and Hewlett-Packard (HP) jointly implemented a 64-bit microprocessor architecture, IA-64, called the Itanium, with many salient features appropriate to negotiate different prevailing situations (Krishnaiyer, R., et al.). Both the Pentium 4 and the Core 2 were then modified to include a 64-bit core and multiple cores. A notable advancement with this technology is the introduction of a new concept, called multithreading. In 2002, Intel and HP jointly released a new microprocessor architecture in the Itanium line, called EPIC (explicitly parallel instruction computing), which is essentially designed for the server market and may or may not trickle down to the personal home/business market in the near future (Evans, J., et al.). Subsequently, Intel launched dual-core, quad-core, and even higher-core versions; in the near future, the number of cores will likely be increased to eight or even sixteen using even finer fabrication technology, with stepwise increments in cores, which is supposed to be an acceptable alternative solution in the current scenario to provide even faster microprocessors.

It is interesting to note that microprocessors have evolved much faster and have gradually become more complex. This is observed in the rapid growth of the Intel x86 family, which provides an excellent illustration of the continuous advancement of microprocessor technology over the past 30-odd years. The 8086, introduced in 1978, ran at a clock speed of 5 MHz and had 29,000 transistors. A quad-core Intel Core 2 introduced in 2008 operates at 3.2 GHz, has a speed-up factor of roughly 600, and has 820 million transistors, about 28,000 times as many as the 8086. Yet the Core 2 has only a slightly larger package than the 8086 and a comparable cost.
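The growth figures above can be checked with simple arithmetic; both ratios below use only numbers already quoted in the text:

```python
# Transistor growth: 8086 (1978) vs quad-core Core 2 (2008), counts as quoted.
t_8086, t_core2 = 29_000, 820_000_000
print(round(t_core2 / t_8086))   # -> 28276, i.e. roughly 28,000 times as many

# Clock growth: 5 MHz vs 3.2 GHz, the same order of magnitude as the
# quoted speed-up factor of roughly 600.
print(3.2e9 / 5e6)               # -> 640.0
```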

All these discussions regarding the constant development of microprocessor technology, and the continuous evolution of the various flagship microprocessor products introduced by different manufacturers from time to time, including the two giants Motorola and Intel, can be summarily organised in tabular form as a comparative study. Such a table can serve as a handy indicator of how computer technology has progressed over this period in general.

For more details about multicore, see the website: http://routledge.com/9780367255732.

Grand Challenges: Tomorrow’s Microprocessors

Although it is difficult to make any accurate prediction about the forthcoming evolution of microprocessor technology, the current trends in this area suggest a realistic path on which the success of the Intel family should continue for quite a few years to come. What may occur is perhaps a change in RISC technology, but a more likely step forward lies along the line of the new multithreading technology, accommodating even more processors in parallel within the framework of the ongoing architecture (at present, seven or more such processors). As clock speed seems to have already peaked at its limit, and the surge toward multiple cores has begun, about the only major change in the Pentium line in the near future will probably be the inclusion of a wider memory path (128 bits) and increased memory speed. Side by side, a new technology is also required in the area of mass storage (secondary devices) to cope with the constantly increasing speed of the components constituting today's faster computer systems. Flash memory could be a solution, because its write speed is comparable to that of a hard disk.

We have arrived at, and are now passing through, the sixth generation. A journey through all these generations reveals that each of the first two lasted more or less ten years, using the yields of contemporary development and advancement in electronics. From the beginning of the third generation, a renaissance started in earnest, giving the needed impetus and resulting in radical improvement both in hardware architecture and organisation and in sophisticated software design and development. The resurgence continued at a constantly increasing pace, culminating in the emergence of many innovative concepts and sophisticated technological developments throughout the fourth and fifth generations, which can be titled the golden period of computer science and technology. The sixth generation, however, maintains this pace of improvement equally well, following the last generation's contributions with more refined technological development and innovative architectural improvement.
