With the advent of revolutionary inventions in electronics, such as integrated circuits (ICs) at small-scale (SSI), large-scale (LSI), and very-large-scale integration (VLSI), digital hardware became extremely powerful and, at the same time, much cheaper, which accelerated the gradual emergence of modern computers with far more versatile architectures than those of the early days. The successful implementation of the pioneering concept of microprogramming changed the entire scenario: the microarchitecture of the system became straightforward and simple enough to make the machines easier to program. New instructions, not all of them regularly used, were then continuously added to the instruction set to keep the system up to date. As a result, the instruction set gradually became highly complex, and the microprograms that ran on it grew long and equally complicated, which began to hinder the normal execution of the common, frequently used operations. The net effect was that performance degraded noticeably from its expected level. Such an instruction set was called an orthogonal instruction set, and this basic set of features is now known as a complex instruction set computer, or CISC (pronounced "sisk").
At this juncture, a number of designers began to argue that computers had become much too complicated over the years, that those designs should be thrown out, and that new designs should be started afresh. They recommended that computers use a set of fewer, simpler, and less orthogonal instructions, which would look almost like a small subset of the machine instructions of the existing CISC. Because these simpler instructions require no complex decoding, they can be executed much faster within the CPU (central processing unit), avoiding slow memory accesses as far as possible. These ideas, together with other relevant considerations, eventually gave birth to a different form of architecture, historically known as RISC (reduced instruction set computer). RISC machines embody one of the most interesting and innovative design concepts in the architectural evolution of computers, amounting to a renaissance in that process, and their introduction challenged the traditional approaches to CPU design followed by the computer architects of the day. In this chapter, we introduce the major characteristics of CISC and RISC architectures, the salient features of CISC and RISC machines, and finally a comparison between the two. We then present the instruction sets and instruction formats of a few popular representative RISC processors from reputed manufacturers.
Background: Evolution of Computer Architecture
The earliest digital computers, from the ENIAC to the CDC 6600, were extremely simple: they had few machine instructions with only one or two addressing modes, executed directly in hardware, and supported only very simple high-level languages. The introduction of the IBM 360 mainframe series in 1964, with its marvellous new idea of injecting microprogramming into the computer architecture, radically changed the microarchitecture of such systems, making them straightforward and simple enough to program with ease. It also gave rise to the family concept, with all its merits. New instructions were then continuously added to the instruction set to exploit the constantly improving hardware technology and make the system more advanced, providing many salient features. This instruction set was called an orthogonal instruction set, and this basic set of features is now called a complex instruction set computer, or CISC. These advanced machines became capable of handling more modern high-level languages, enriched with powerful features to support a diverse spectrum of application areas for users of different disciplines. In addition, many functions could now be migrated from software implementation to equivalent hardware realization. While all these developments appeared amazing, in reality they invited many problems, since they provoked designers to continuously append additional, seldom-used features to the existing architecture. As a result, the internal complexity of the computer and of its instruction set kept increasing. Consequently, the microprograms that ran on these machines became long and equally complicated, which constantly hindered the normal execution of the regular, frequently used operations.
The net effect was that performance degraded noticeably from its expected level. Nevertheless, most contemporary computers still follow this concept in one form or another.
This approach quickly became very popular, and contemporary CISC computers, both mainframes and minicomputers, were soon introduced worldwide by major manufacturers, including Burroughs, Univac, NCR, CDC, Honeywell, and DEC.
Even microprocessors from Intel (the x86 family) and Motorola (the 68000 family) exploited this approach in abundance, starting with minimal architectures that progressed rapidly and even exceeded the complexity of the minicomputers and mainframes of the day. The domain of microprogramming, meanwhile, kept expanding. Many frequently used library procedures were injected into the microprograms to service their calls, which drastically reduced the otherwise frequent visits to slower main memory and thereby substantially increased the speed of operations, though the complexity of the underlying microprograms grew in turn. In addition, arrays and records were handled using special addressing modes, and large parts of procedure calls, including parameter passing, stack handling, and register saving, were transferred into microcode (microroutines). Sophisticated high-level languages with powerful constructs, such as if, while, and case, required additional instructions in the instruction set for their effective translation by a compiler. Since the ultimate goal of CISC architecture was to provide a single machine instruction for each statement written in a high-level language, the machine instruction set became more and more complex, and the burden on the microprograms grew heavy. Almost everyone, however, regarded this trend as positive and definitive, and fully expected it to continue for many years to come.
In the late 1970s, technology began to improve radically, offering much faster processors and relatively speedy semiconductor RAM. At the same time, manufacturers gradually found that writing, upgrading, debugging, and maintaining the intricate microcode on which everything rested was immensely difficult to sustain. The stage was thus set for someone to realize that computers could be made simple enough to run much faster. This demanded the total elimination of the existing interpreter (the microprogram), which stood as a stumbling block. A different approach therefore began to evolve: each program would be compiled straight down to something close to machine code and executed directly by the hardware out of fast semiconductor memory.
Characteristics of CISC and Its Drawbacks
In summary, the major characteristics of CISC architecture, worthy of mention, are as follows:
• A small set of 8-24 general-purpose registers (GPRs);
• A large number of instructions, typically ranging from 100 to 350;
• Some instructions that perform specialized tasks and are seldom used;
• Instructions that handle and manipulate operands in memory;
• Different types of instruction/data formats of variable length.
RISC: Definition and Features
The term RISC was first used by David Patterson of the University of California, Berkeley, in the early 1980s, while he and a group of colleagues were investigating the design of their RISC I architecture. A RISC machine is essentially a computer with a small number of vertical microinstruction-like instructions (not microprograms). The philosophy behind it is that user programs, after compilation, are translated into sequences of these simple instructions, which are then executed directly by the underlying hardware, with no intervening interpreter (microprogram). Eliminating this level of interpretation is ultimately the secret of the RISC machine's enhanced speed. The result is that the simple things programs actually do, like adding two registers, can now be completed in a single instruction, in contrast to 8 horizontal or 15 vertical microinstructions on the fastest CISC machines, straightaway a winning factor of about 10. In fact, before the invention of microprogramming, all computers were essentially RISC-like machines, with simple instructions executed directly by the underlying hardware.
A RISC instruction set typically contains fewer than 100 instructions with only 3-5 simple addressing modes, and uses a fixed instruction format (32 bits). Most instructions are register based and execute in a single cycle under hardwired control; memory is accessed only by load/store instructions. Since the complexity of the instruction set is greatly reduced, a higher clock rate and a lower CPI (cycles per instruction) can be realized, which together yield higher MIPS (millions of instructions per second) ratings. A large register file of at least 32 registers (nowadays around 100 or even more) is used both for general purposes and for faster context switching among multiple users.
RISC machines are fast owing to advances in software rather than in hardware: improvements in optimizing compiler technology make it possible to compile directly to the simple hardware-executed instructions, skipping the interpreter. RISC machines are somewhat simpler than even the vertical microarchitectures of CISC designs. More recently, engineers have found ways to compress reduced instruction sets further so that they fit into smaller memory systems (e.g., ARM's (Advanced RISC Machines') Thumb instruction set, encoded in a 16-bit halfword format). In applications that do not need to run older binary software, such compressed RISCs are expected to dominate the future market. The RISC design gains further power by pushing some of the less frequently used operations into software. For any given level of performance, a RISC design always has a much smaller "gate count" (number of transistors), the main driver of overall cost/performance; in other words, a fast RISC chip is much cheaper than a comparably fast CISC chip. RISC chips initially prevailed in the market for 32-bit embedded systems, and smaller RISC chips are becoming even more common in the cost-sensitive 8-bit embedded market. The main market for RISC CPUs has been systems that require low power, small size, or low cost.
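The code-size benefit of a compressed encoding such as Thumb is easy to sketch. The routine size below is a hypothetical figure, and the calculation deliberately ignores the modest increase in instruction count that a compressed instruction set may incur in practice:

```python
# Illustrative sketch (hypothetical instruction count): code-size effect of a
# 16-bit compressed encoding (e.g. Thumb halfwords) vs. a fixed 32-bit one.

def code_size_bytes(n_instructions, bytes_per_instruction):
    """Total memory footprint of a routine with fixed-width instructions."""
    return n_instructions * bytes_per_instruction

n = 10_000  # hypothetical routine of 10,000 instructions
full_size = code_size_bytes(n, 4)        # 32-bit (4-byte) instructions
compressed_size = code_size_bytes(n, 2)  # 16-bit (2-byte) halfwords

print(full_size, compressed_size)  # 40000 20000
```

Under these assumptions the compressed encoding halves the memory footprint, which is exactly what makes it attractive in the small, cost-sensitive embedded systems described above.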