Data transfer instructions are the most fundamental machine instruction type available in the instruction set of any computer. In fact, the kinds of data transfer instructions included in an instruction set indicate the trade-offs that the designer intends to achieve. Data transfer instructions move data from one place in the computer to another with no change in the data content. Such instructions must specify several things:
i. The location of the source and destination operands must be specified. Each location can be a memory location, a register, or the TOS, and it can be indicated either in the specification of the opcode or in the operand itself. For example, the opcode LR means load register, which implies that both the source and the destination locations are registers (called a register transfer). The opcode LD, in contrast, indicates that the source is a memory location and the destination is always a register. Many different opcodes of this type are available, which again vary from machine to machine;
ii. The length of data to be transferred must be specified. Again, this may be specified either in the specification of the opcode or in the operand itself. The length specification depends greatly on the size of the register, the word length of the memory, and some other factors related to the machine organisation;
iii. The interpretation of the operand given in the instruction depends on the addressing mode of the instruction. The addressing mode stipulates a rule for interpreting or modifying the operand field of the instruction to locate the operand to be accessed. On some machines, the mode is embedded in the opcode itself, or different opcodes may be used to indicate different modes. On other machines, a separate field in the instruction is used to indicate the mode, which is then used to locate the operand. We will discuss the addressing modes separately in later sections.
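As a minimal sketch, the behaviour of the two opcodes mentioned in point (i) can be modelled on a toy machine. The register names, memory addresses, and values below are illustrative assumptions, not tied to any real instruction set; the point is that only the location of the data changes, never its content.

```python
# Toy machine state: hypothetical memory and register file.
memory = {2000: 99}
regs = {"R1": 0, "R2": 55}

def LR(dst, src):
    """LR: register-to-register transfer (both operands are registers)."""
    regs[dst] = regs[src]

def LD(dst, addr):
    """LD: memory-to-register transfer (source is a memory location)."""
    regs[dst] = memory[addr]

LR("R1", "R2")
print(regs["R1"])   # 55
LD("R1", 2000)
print(regs["R1"])   # 99
```

Note that in both cases the source location is left unchanged; a data transfer copies rather than moves in the destructive sense.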
From the point of view of CPU involvement, the data transfer operations seem to be the simplest type. If both source and destination operands are registers, then this operation is totally internal to the CPU, and the CPU simply transfers the data from one register to another. If one or both operands are in memory or at the TOS, then the CPU must perform some or all of the following actions:
A list of the most commonly used data transfer instructions that are found in many machines is given below to give an idea of these types of instructions.
A variety of I/O instructions are available in different machines, depending mostly on the approaches taken in I/O organisation, such as programmed I/O, DMA, and above all, the availability of a stand-alone separate I/O processor. The I/O instructions ultimately transfer data between the processor registers and I/O modules or terminals. A list of the most commonly used I/O instructions that are found in many machines is given here for a clear understanding and ready reference.
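As a minimal sketch of the register–I/O module transfer described above, the following models hypothetical IN and OUT instructions that address a device by port number. The port numbers, the accumulator, and the mnemonics are illustrative assumptions in the spirit of programmed I/O, not the exact instructions of any particular machine.

```python
# Toy I/O ports: port number -> current data at the device interface.
ports = {1: 7, 2: 0}
acc = 0                # accumulator register in the CPU

def IN(port):
    """IN: transfer data from an I/O module into the accumulator."""
    global acc
    acc = ports[port]

def OUT(port):
    """OUT: transfer data from the accumulator to an I/O module."""
    ports[port] = acc

IN(1)             # read device on port 1 into the accumulator
print(acc)        # 7
OUT(2)            # write the accumulator to the device on port 2
print(ports[2])   # 7
```

Under programmed I/O the CPU itself executes each such transfer; under DMA or a separate I/O processor, equivalent transfers proceed without instruction-by-instruction CPU involvement.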
Transfer of Control
In the normal course of execution, instructions are fetched one after another from consecutive memory locations in sequence using the PC and are then executed by the CPU. However, in any program, there are a good number of instructions which, when executed, change the sequence of instruction execution by modifying the PC to contain the address of some other desired instruction located in memory. There exist, of course, certain substantial reasons why transfer-of-control operations are really required. The most common types of transfer-of-control operations found in the instruction sets of many machines are as follows:
The other approach found in use employs a three-address instruction format. Here, both the comparison and the target instruction address are given in the same instruction. For example, a hypothetical instruction BRE R1, R2, X might branch to location X if the contents of registers R1 and R2 are equal.
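The effect of a three-address compare-and-branch can be sketched as a single execution step on a toy machine. The mnemonic BRE (branch if equal), the register names, and the addresses below are hypothetical illustrations, not drawn from a real instruction set.

```python
def step_bre(pc, regs, r1, r2, target, instr_len=1):
    """Execute BRE r1, r2, target: return the next value of the PC."""
    if regs[r1] == regs[r2]:
        return target               # comparison true: branch to target
    return pc + instr_len           # comparison false: fall through

regs = {"R1": 5, "R2": 5, "R3": 9}
print(step_bre(pc=200, regs=regs, r1="R1", r2="R2", target=300))  # 300
print(step_bre(pc=200, regs=regs, r1="R1", r2="R3", target=300))  # 201
```

The single instruction thus subsumes what a two-address scheme would express as a separate compare followed by a conditional branch on the resulting condition code.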
Different machines use different mnemonic names for the same type of actions, and many machines even have varieties of conditional branch instructions.
The skip instruction is another common form of transfer-of-control instruction. A skip instruction does not require a destination address field; instead, it uses an implied address. Typically, the skip implies that one instruction be skipped; thus, the implied address is the current content of the PC plus the length of one instruction. Many machines provide different types of skip instructions, usually under the SKP (skip) mnemonic. A typical example of this type of instruction is ISZ (Increment and Skip if Zero).
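The skip mechanism can be sketched with ISZ on a toy machine: the memory operand is incremented, and if the result is zero the PC is advanced past the next instruction. The addresses and instruction length below are illustrative assumptions.

```python
def step_isz(pc, memory, addr, instr_len=1):
    """Execute ISZ addr: increment M[addr], skip next instruction if zero."""
    memory[addr] += 1
    if memory[addr] == 0:
        return pc + 2 * instr_len   # skip over the next instruction
    return pc + instr_len           # normal sequential advance

mem = {50: -1, 51: 3}
print(step_isz(pc=100, memory=mem, addr=50))  # -1 -> 0: skip, PC = 102
print(step_isz(pc=100, memory=mem, addr=51))  #  3 -> 4: no skip, PC = 101
```

A typical use is loop control: the counter is initialised to the negative of the iteration count, and the ISZ skips a backward branch once the counter reaches zero.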
Subroutine Call Instruction
One of the finest innovative approaches available in programming languages is the provision of subroutine, which is a self-contained sequence of instructions in the form of a computer program that performs a given computational task. It is incorporated into a larger main program, and may be invoked or called many times at various points in the main program to perform its specified function. Each time a subroutine is called (or invoked), a branch action is performed to arrive at the beginning of the subroutine to start executing its set of instructions. After the execution of the subroutine, a branch is again taken to come back (return) to the next instruction in the main program from where the call took place. Both of these are forms of branching instructions. This mechanism is illustrated in Figure 3.8.
The instruction used in a main program to transfer control to a subroutine is known by different names such as call subroutine (CALL), go to subroutine (GOSUB), jump to subroutine, branch to subroutine, branch and save address, or branch and link (BL). The return from the subroutine is accomplished usually by a RETURN (or RET) statement used as the last instruction of the subroutine. Different machines, however, use different mnemonic names for subroutine call instructions. A few of them are as follows:
Subroutine call mechanism.
In the case of a subroutine call, several points are worth mentioning:
i. A subroutine can be called many times;
ii. A subroutine can be called from more than one location in the main program;
iii. A subroutine call can appear in a subroutine itself. This is known as a nested subroutine. The nesting of subroutines can go to an arbitrary depth, though only up to a specified limit, as already delineated;
iv. Each subroutine call is matched with a corresponding return in the called program.
The nested subroutine call mechanism is shown in Figure 3.9. The main program includes a call to subroutine SUB1 located at address 8000. When this call instruction is encountered and executed, the CPU suspends execution of the main program and begins execution of SUB1 by fetching the next instruction from location 8000. Within SUB1, there are two calls to SUB2 located at address 9000. In each case, when the call occurs, the execution of SUB1 is suspended and execution of SUB2 then begins. The RETURN statement in each subroutine causes the CPU to go back to the calling program and continue execution at the next instruction after the corresponding CALL instruction.
A subroutine call instruction in the main program consists of an operation code together with an address that specifies the beginning of the subroutine being called. Executing this call instruction performs, in effect, two main operations:
i) The address of the instruction following the call in the calling program, available in the PC, is stored along with other relevant information in a temporary location (stack) so that the subroutine knows where to return in the calling program after the completion of its own execution. This is called the return address;
ii) Program control is then transferred to the beginning of the subroutine.
Nested subroutine call mechanism.
The last instruction of every subroutine is commonly called the return from subroutine. When executed, this instruction transfers the return address (already stored) from the temporary location into the PC. Other relevant information that was stored is also now reloaded from the temporary location into its respective place so that the main program can continue its own execution. The CPU then reads the PC, which causes a transfer of program control to the instruction following the call instruction in the main program. Consequently, the execution of the main program is once again resumed.
Use of stack: This problematic situation of storing the return addresses can be avoided if different storage locations are employed for each use of the subroutine while another, more recent use (call) is still active. A more general and powerful approach for storing the return addresses is to use a memory stack. When the CPU executes a call, it places the return address on the stack. When it executes a return, it uses the information located at the TOS. The current return address will always be at the TOS. The advantage of using a stack to store the return addresses is that when a number of subroutines are called in succession, all the respective return addresses can be pushed in sequence, one after another, onto the stack. The return instruction used at the end of a subroutine causes the stack to pop the contents of the TOS, and this value is always the one then transferred to the PC. In this way, the return is always to the program unit that most recently (i.e. last) called a subroutine. Figure 3.10 illustrates the use of a stack for the example already depicted in Figure 3.9. A subroutine call is implemented with the following machine instructions:
SP ← SP − 1: Decrement SP to point to the next available space.
M[SP] ← PC: Push the content of the PC onto the stack.
PC ← Subroutine address: Transfer control to the subroutine.
If another subroutine is called by the current subroutine, the new return address is pushed onto the stack in the same way and will then be at the TOS, and so on. The return instruction that returns from the last called subroutine is implemented with machine instructions such as:
PC ← M[SP]: Pop the stack (the top element of the stack) and transfer it to the PC.
SP ← SP + 1: Increment SP to access the next item.
Use of stack to handle nested subroutine of Figure 3.9.
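The call and return micro-operations can be simulated for the nested calls of Figure 3.9 on a toy machine. The memory size, addresses (main around 4000, SUB1 at 8000, SUB2 at 9000), and instruction lengths below are illustrative assumptions; the stack grows downward, as in the micro-operations given above.

```python
MEM_SIZE = 16
memory = [0] * MEM_SIZE
sp = MEM_SIZE            # empty descending stack: SP points just past the top
pc = 4000                # main program executing around address 4000

def call(target, ret_addr):
    """CALL: push the return address, then jump to the subroutine."""
    global sp, pc
    sp -= 1                # SP <- SP - 1
    memory[sp] = ret_addr  # M[SP] <- PC (address of the next instruction)
    pc = target            # PC <- subroutine address

def ret():
    """RETURN: pop the return address from the TOS into the PC."""
    global sp, pc
    pc = memory[sp]        # PC <- M[SP]
    sp += 1                # SP <- SP + 1

call(8000, 4001)           # main calls SUB1
call(9000, 8002)           # SUB1 calls SUB2 (nested call)
ret()                      # return from SUB2
print(pc)                  # 8002: back inside SUB1
ret()                      # return from SUB1
print(pc)                  # 4001: back in the main program
```

Because each return pops the most recently pushed address, the nesting unwinds in exactly the reverse order of the calls, which is why the stack handles arbitrary nesting (up to its capacity) without any bookkeeping in the program itself.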
Apart from storing the return address, the stack implementation is also found to be a flexible approach, even in the event of parameter passing at the time of a call. When the CPU executes a call, it not only stores the return address on the stack, but also stores in the stack the parameters issued by the calling procedure, to be passed to the called procedure. The called procedure is aware of this fact and can access the parameters from the stack. Upon return, the called procedure can also place the return parameters on the stack, under the return address. When a procedure is invoked (called), the entire set of parameters, the contents of the registers in use, and the return address stored on the stack are often collectively referred to as a stack frame; this is handled automatically as one unit, and the user need not be concerned with it.
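The layout just described can be sketched as follows: the caller pushes the parameters, the call pushes the return address on top of them, and the callee locates its parameters relative to the TOS. The values and the helper names are hypothetical illustrations of the convention, not any particular machine's calling sequence.

```python
stack = []   # toy stack; the end of the list is the TOS

def call_with_params(params, ret_addr):
    """Caller pushes parameters, then the call pushes the return address."""
    for p in params:
        stack.append(p)       # parameters go on first
    stack.append(ret_addr)    # return address ends up at the TOS

def callee_read_params(n):
    """Callee reads its n parameters, which sit just below the return address."""
    return stack[-1 - n : -1]

call_with_params([10, 20], ret_addr=4001)
print(callee_read_params(2))   # [10, 20]
print(stack[-1])               # 4001 (return address at the TOS)
```

Everything pushed for one call (parameters, saved registers, return address) forms the stack frame for that activation, and popping the whole frame at return restores the caller's view of the stack in one unit.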
In the Intel IA-32 architecture, the processor stack data structure is used for the sake of convenience, to handle entry to and return from subroutines using some specified registers (ESP, EAX, EDI, etc.), and respective stack instructions to handle the stack.
Brief details of the different types of transfer-of-control instructions are given on the website: http://routledge.com/9780367255732.