
Segmentation

A virtual memory system implemented with fixed-size pages causes certain inconveniences with regard to program size (both user and system) and to the logical structure of the program being executed. An application is essentially a collection of related program units, and these units may vary in size. Each unit contains closely related functions and/or data, and may obtain services from other units. The units are effectively a higher level of structuring an address space: they are independent as far as compilation of the program is concerned, but are integrated at link, load, or run time. Program developers are usually not aware of how these units are mapped into a process's private address space during execution. In fact, when a program runs, it may turn out that certain units of the program have almost filled their allotted portion of the address space, while other units still have plenty of unused room. One solution to this problem is to take some space from a unit that has room to spare and give it to the unit that needs it. Such shuffling, done only for the sake of smooth execution of the program, is an additional headache for the programmer; it is a nuisance at best, and tedious, unproductive work as well.

To relieve the programmer of this burden of organising the program into modules for proper memory utilisation, a straightforward and extremely general solution is to provide the machine with many completely independent address spaces, called segments. In segmentation, a logical address space is organised as a collection of disjoint segments. Each segment constitutes a separate address space with a linear sequence of addresses, from 0 up to some maximum. Different segments may, and generally do, have different lengths, which can even grow or shrink independently during execution without affecting the others (i.e. they are dynamic). From the program developer's point of view, memory consists of multiple disjoint address spaces or segments, each one a single logical entity. Developers accordingly divide their programs and data into logical parts (called segments). Procedures, subroutines, arrays of data, tables, etc. are examples of segments. Segments may be generated by the programmer or by the operating system. Users are not at all concerned with which code/data block (logical part) is mapped into which segment of the logical address space, nor with the relative order of segment placement within it. A strategy that allocates main memory by segments is called segmentation, and this memory management technique is called segmented memory management. During execution, when a segment is needed but is not currently present in main memory, the entire segment is fetched from secondary memory (virtual memory). It is thus reasonable to maintain complete segments in main memory, as opposed to a paging system, which keeps only parts of a program resident. A segment usually does not contain a mixture of different types of information, such as a procedure and an array, or a stack and scalar variables.

The address generated by a segmented program, called a logical address, is converted into the corresponding physical address by the memory management unit in a way similar to the virtual memory mapping concept. A complete program and its data sets can be viewed as a collection of linked segments, where the links indicate that a program segment may use or "call" another program or data segment. In a multitasking environment, common routines shared by several programs can be kept in one segment for the use of all of them, without having to produce multiple copies. An ideal example in this regard is a shared library.

Pure Segmentation

Segmentation and paging differ both in approach (strategy) and in implementation. Segments are user-oriented concepts, providing logical structures for programs and data in the virtual address space. Paging, on the other hand, concentrates on the management of physical memory. In a paged system, all pages are of fixed size, in contrast to segments, which are variable, indeed dynamic, in size; and all page addresses form a linear address space within the virtual address space. The physical addresses assigned to the segments in memory are maintained in a memory map called a segment table. This table may itself be a relocatable segment.

The segmented memory is organised as a two-dimensional address space. An address in this memory has two parts: a prefix field called the segment number and a postfix field called the offset (the address within the segment). The offsets, which identify locations within each segment, form one dimension of contiguous addresses; the segment numbers, which need not be contiguous to one another, form the second dimension of the address space. When memory is segmented, one segment may be evicted to meet the needs of the executing program and a smaller segment brought into that area; as this process repeats, which is inevitable at run time, the memory ultimately becomes divided into a number of chunks, some containing segments and some containing holes. This phenomenon, popularly known as external fragmentation (also called checkerboarding), wastes a good amount of memory in the holes and is not at all desirable. The drawback can be overcome by the technique of memory compaction, but that adds an expensive administrative overhead. This, along with the other inconveniences already observed, ultimately made pure segmentation drop from favour.
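
To make the two-part addressing concrete, the following is a minimal Python sketch of pure-segmentation address translation. The segment table contents, the (base, limit) layout, and all names are illustrative assumptions, not taken from any particular machine.

```python
# Minimal sketch of pure-segmentation address translation.
# The segment table maps a segment number to a (base, limit) pair;
# the table contents and names are illustrative only.

segment_table = {
    0: {"base": 0x00000, "limit": 0x4000},   # e.g. a code segment
    1: {"base": 0x08000, "limit": 0x1000},   # e.g. a data segment
    2: {"base": 0x0A000, "limit": 0x0800},   # e.g. a stack segment
}

def translate(segment_number, offset):
    """Convert a two-part (segment number, offset) logical address to a physical address."""
    entry = segment_table.get(segment_number)
    if entry is None:
        raise MemoryError("segment fault: no such segment")
    if offset >= entry["limit"]:
        raise MemoryError("protection fault: offset exceeds segment limit")
    return entry["base"] + offset

print(hex(translate(1, 0x123)))   # 0x8123
```

Note that every reference is checked against the segment's limit, which is where segmentation gets its natural protection of one logical unit from another.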

A brief detail of pure segmentation, with the relevant figure, is given at the following web site: http://routledge.com/9780367255732.

Segmentation with Paging

When segments are large, it may be inconvenient, and sometimes even impossible, to accommodate them entirely in the limited main memory allotted. This leads to the idea of dividing each segment into pages, so that only those pages that are actually needed at any point of time have to be resident. Paging and segmentation can thus be deliberately combined to gain the advantages of both, and virtual memory can be implemented as a form of paged segments. Within each segment, the addresses are divided into fixed-size pages. Each virtual (logical) address is thus divided into three fields: the upper field is the segment number, the middle one is the page number, and the lower one is the offset within the page. The memory map then consists of a segment table and a set of page tables, one page table for each segment. The segment table contains, for each segment, an address pointing to the base of the corresponding page table. The page table is then used in the usual way to determine the required physical address. Figure 4.16 shows a scheme of address translation from a virtual address to a main memory address when virtual memory is implemented under segmentation with paging.
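
As a rough illustration of this three-field translation (the scheme of Figure 4.16), the following Python sketch assumes purely hypothetical field widths (an 8-bit page number and a 12-bit offset, i.e. 4K byte pages) and toy tables; a real MMU walks hardware-defined table structures instead.

```python
# Sketch of segmentation-with-paging translation (cf. Figure 4.16).
# Field widths and the toy tables below are assumptions for illustration.

PAGE_BITS = 12                      # 4K byte pages
PAGE_SIZE = 1 << PAGE_BITS
PAGES_PER_SEGMENT_BITS = 8          # up to 256 pages per segment

# Segment table: segment number -> page table (here a plain list of frame numbers).
segment_table = {
    0: [5, 9, 2],                   # pages 0..2 of segment 0 live in frames 5, 9, 2
    1: [7],                         # segment 1 has a single resident page
}

def translate(virtual_address):
    offset  = virtual_address & (PAGE_SIZE - 1)
    page    = (virtual_address >> PAGE_BITS) & ((1 << PAGES_PER_SEGMENT_BITS) - 1)
    segment = virtual_address >> (PAGE_BITS + PAGES_PER_SEGMENT_BITS)

    page_table = segment_table.get(segment)
    if page_table is None or page >= len(page_table):
        raise MemoryError("segment or page fault")
    frame = page_table[page]
    return (frame << PAGE_BITS) | offset

va = (0 << 20) | (1 << 12) | 0x034   # segment 0, page 1, offset 0x034
print(hex(translate(va)))            # frame 9 -> 0x9034
```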

The distinct advantage of breaking a segment into pages is that a contiguous region of main memory is not needed to store the entire segment: the segment is broken into pages, the corresponding page frames need not be contiguous, and hence the various space allocation strategies such as best fit and first fit can be dispensed with at low overhead. Different trade-offs exist among the sizes of the segment field, the page field, and the offset field. This choice places a limit on the number of segments that users can declare (within the range permitted by the operating system), on the segment size (the number of pages within each segment), and on the page size, as the short sketch below shows. Although all three parameters (number of segments, segment size, and page size) are determined and managed by the operating system, they both depend on and have a bearing on the architecture of the system being used.
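
The following sketch shows how a fixed address width forces this trade-off; the 8/12/12 split of a 32-bit address is an assumed example, not a recommendation.

```python
# How the three field widths partition a virtual address: a toy split of a
# 32-bit address into 8 segment bits, 12 page bits and 12 offset bits
# (an assumed layout, chosen only to show the trade-off).

ADDRESS_BITS = 32
SEGMENT_BITS, PAGE_BITS, OFFSET_BITS = 8, 12, 12
assert SEGMENT_BITS + PAGE_BITS + OFFSET_BITS == ADDRESS_BITS

max_segments      = 1 << SEGMENT_BITS               # 256 segments per address space
pages_per_segment = 1 << PAGE_BITS                  # 4096 pages per segment
page_size         = 1 << OFFSET_BITS                # 4K byte pages
max_segment_size  = pages_per_segment * page_size   # 16M bytes per segment

print(max_segments, pages_per_segment, page_size, max_segment_size)
# Widening one field (e.g. allowing more segments) necessarily narrows another
# (smaller pages or a smaller maximum segment size) for a fixed address width.
```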

Paged Segmentation in Mainframe (IBM 370/XA)

The representative mainframe IBM 370 series uses a two-level virtual memory structure, referring to the two levels as segments and pages. Different page sizes and segment sizes have been used in this series, but the flagship 370/XA architecture uses a page size of 4K bytes,

FIGURE 4.16

Paged segment memory management. Conversion of a two-dimensional virtual address into a physical main memory address.

and a segment size of 1M byte. The virtual address space is 2 GB, thereby requiring a 31-bit (2 GB = 2^31 bytes) virtual address to access it. There is one segment table for each virtual address space. Multiple virtual address spaces, one per process, are provided by the powerful and popular Multiple Virtual Storage (MVS) operating system used in the 370 and later IBM systems, and also by the operating systems used on DEC VAX systems. Each virtual address space is made up of a number of segments, and there is one entry in the segment table per segment. Each segment, as usual, has associated with it a page table containing one entry per page in the segment. Thus, each 31-bit virtual address consists of a byte index specifying one of the 4K bytes (2^12) within a page, a page index specifying one of the 256 pages (2^8) within a segment, and a segment index identifying one of the 1,024 (2^10) user-visible segments; bit 0 is used for other purposes.
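
The field arithmetic quoted above can be checked with a short sketch; the function and constant names below are ours, and only the 10/8/12 bit split and the reservation of bit 0 follow the text.

```python
# Decomposing a 370/XA-style 31-bit virtual address into the fields named in
# the text: 10-bit segment index, 8-bit page index, 12-bit byte index
# (bit 0 is reserved for other purposes).  Names are illustrative.

BYTE_BITS, PAGE_BITS, SEGMENT_BITS = 12, 8, 10

def split_370xa(virtual_address):
    byte_index    = virtual_address & ((1 << BYTE_BITS) - 1)
    page_index    = (virtual_address >> BYTE_BITS) & ((1 << PAGE_BITS) - 1)
    segment_index = (virtual_address >> (BYTE_BITS + PAGE_BITS)) & ((1 << SEGMENT_BITS) - 1)
    return segment_index, page_index, byte_index

# Consistency checks against the figures quoted in the text:
assert 1 << BYTE_BITS == 4 * 1024                            # 4K byte page
assert (1 << PAGE_BITS) * (1 << BYTE_BITS) == 1024 * 1024    # 1M byte segment
assert 1 << SEGMENT_BITS == 1024                             # 1,024 segments

print(split_370xa(0x1234_5678 & 0x7FFF_FFFF))                # (291, 69, 1656)
```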

Use of a translation lookaside buffer (TLB) and an innovative page replacement technique are two additional salient features of System/370 memory management. The exact implementation differs from one model to another, and again depends on the architecture of the system and the type of operating system being used.

A brief detail of the segmentation with paging architecture used in the IBM 370/XA system, along with its address translation mechanism and relevant figures, is given at the following web site: http://routledge.com/9780367255732.

Paged Segmentation in Microprocessor (Intel Pentium)

The more powerful 32-bit architectures developed by different manufacturers, including Intel, Motorola, and others, were quite general-purpose. The earlier versions of the Intel Pentium (Pentium II and Pentium III), with their built-in hardware support, provide a memory management scheme that exploits the features of both segmentation and paging. It is interesting to note that either of these mechanisms can be disabled at will, thereby allowing the user to choose from four different memory management schemes, namely,

i. Unsegmented memory management

ii. Unsegmented paged memory management

iii. Segmented unpaged memory management

iv. Segmented paged memory management

Each such scheme has its own merits in the environments in which it is used.

The Intel Pentium (32-bit) family exploits the features of both paging and segmentation, in which a segment can start at any location and overlapping of segments is allowed. When segmentation is used, each virtual address consists of a 16-bit segment number (reference) and a 32-bit offset. Two bits of this segment reference are used for the protection mechanism, and the remaining 14 bits are used to specify a particular segment. Thus, while with unsegmented memory the user's virtual memory is 2^32 = 4G bytes, with segmented memory the total virtual space as conceived by a user is 2^(32+14) = 2^46 bytes = 64 terabytes (T bytes). The physical address space, however, employs a 32-bit address, for a maximum of 4G bytes. If a segment is itself 4G bytes, equal to the size of the entire physical address space, only one segment can be in main memory, i.e. segmentation is essentially disabled. The virtual address space is divided into two parts. One half of the virtual address space (2^46 / 2 = 2^45 bytes) is global, shared by all processes; the remainder is local and is distinct (mutually exclusive) for each process.
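
A quick arithmetic check of the capacity figures above (the constant names are ours, for illustration only):

```python
# Pentium capacity figures quoted above: a 14-bit segment index combined with a
# 32-bit offset gives a 2^46-byte (64 T byte) virtual space, while the physical
# space stays at 2^32 bytes (4G bytes).

SELECTOR_INDEX_BITS = 14          # 16-bit selector minus 2 protection bits
OFFSET_BITS = 32

virtual_space  = 1 << (SELECTOR_INDEX_BITS + OFFSET_BITS)   # 2^46 bytes
physical_space = 1 << OFFSET_BITS                           # 2^32 bytes

print(virtual_space == 64 * 2**40)       # True: 64 T bytes
print(physical_space == 4 * 2**30)       # True: 4G bytes
print(virtual_space // 2 == 2**45)       # True: half global, half per-process
```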

A virtual address consisting of a 16-bit segment selector and a 32-bit offset is submitted to the address translation mechanism as depicted in Figure 4.17. The 16-bit segment selector consists of a number of fields, one of which is a segment number/index field that points to a 32-bit entry in the segment table. The contents of this 32-bit entry are then used by the paging mechanism, which is actually a two-level table lookup operation. The first level is a page directory addressed by 10 bits, which contains 1,024 (= 2^10) entries. This splits the linear memory space (2^32 = 4G bytes) into 1,024 (2^10) page groups, each one 4M bytes in length (2^32 / 2^10 = 2^22 = 4 Mbytes). Each such page group (i.e. each entry in the page directory) contains an address that is the base address of its own page table. Each page table contains up to 1,024 (= 2^10) entries, and the content of each such entry corresponds to the base address of a single page, each page being 4K bytes in size (2^22 / 2^10 = 2^12 = 4K bytes).
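
The two-level lookup just described can be sketched as follows; the toy page directory and page table contents are assumptions, and a real processor reads these structures from physical memory rather than from Python dictionaries.

```python
# Sketch of the two-level Pentium lookup described above:
# 10 directory bits, 10 page-table bits, 12 offset bits.

def translate_two_level(linear_address, page_directory):
    dir_index   = (linear_address >> 22) & 0x3FF    # selects one of 1,024 page groups
    table_index = (linear_address >> 12) & 0x3FF    # selects one of 1,024 pages in the group
    offset      = linear_address & 0xFFF            # byte within the 4K byte page

    page_table = page_directory[dir_index]          # base of this group's page table
    frame      = page_table[table_index]            # base frame of the selected page
    return (frame << 12) | offset

# Toy structures: directory entry 0 points at a page table whose entry 3
# maps to physical frame 0x80.
page_directory = {0: {3: 0x80}}
print(hex(translate_two_level(0x3ABC, page_directory)))   # 0x80abc
```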

Memory management has the option of using one page directory for all processes, or one page directory for each process, or some other combination of the two. The page directory for the current task is always in main memory. However, page tables may be in virtual memory.

FIGURE 4.17

Memory address translation mechanisms in Pentium.

The Pentium processor provides a new enhancement: a choice of two different page sizes. The PSE (page size extension) bit found in a page directory entry permits the OS programmer to define a page as either 4K bytes or 4M bytes in size. When the 4M byte page size is used, there is only one level of table lookup for accessing pages. When the MMU accesses the page directory, it finds that the page directory entry has its PSE bit set to 1. The high-order 10 bits (i.e. bits 22-31) of the 32-bit address then define the base address of a 4M byte page in memory, and the remaining 22 bits of the 32-bit address are the offset within this 4M byte (2^22) page, used to access any location within it. The use of the 4M byte page size significantly reduces the memory management overhead (the time consumed to access a particular location) as well as the large storage requirement for maintaining the page directory and page tables when a large main memory is used.
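
A sketch of the one-level (PSE) case, under the same illustrative conventions as before; the toy directory contents are assumptions.

```python
# Sketch of the one-level lookup used when a directory entry's PSE bit is set:
# the top 10 bits select a 4M byte page directly, and the low 22 bits are the
# offset within it.

def translate_4mb(linear_address, page_directory):
    dir_index = (linear_address >> 22) & 0x3FF
    offset    = linear_address & ((1 << 22) - 1)    # 22-bit offset within a 4M byte page
    entry = page_directory[dir_index]
    if entry["pse"]:                                # large page: no page-table walk
        return entry["base"] | offset
    raise NotImplementedError("PSE clear: fall back to the two-level walk")

page_directory = {1: {"pse": True, "base": 0x1040_0000}}          # a 4M byte-aligned base
print(hex(translate_4mb((1 << 22) | 0x1_2345, page_directory)))   # 0x10412345
```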

As usual, a translation lookaside buffer (TLB), which can hold the 32 most recently used page table entries, is used to access the physical location directly during the address translation process. Each time the page directory is changed, this buffer is cleared. In addition, four control registers are used to handle regular paging and page-fault situations when a TLB miss occurs. For the sake of clarity, the translation lookaside buffer and memory cache mechanisms are not included in Figure 4.17. Interested readers can consult the book by the same author [Computer Architecture and Organisation, First Edition, P. Chakraborty].
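
The behaviour described here (32 entries, a flush when the page directory changes) can be modelled with a small least-recently-used cache; this is only a software model of the idea, not the Pentium's actual hardware organisation.

```python
# Software model of a 32-entry TLB with LRU replacement and a flush
# on page-directory change.  Names and policy details are illustrative.

from collections import OrderedDict

class TLB:
    def __init__(self, capacity=32):
        self.capacity = capacity
        self.entries = OrderedDict()                 # virtual page -> physical frame

    def lookup(self, virtual_page):
        if virtual_page in self.entries:
            self.entries.move_to_end(virtual_page)   # mark as most recently used
            return self.entries[virtual_page]
        return None                                  # TLB miss: walk the page tables

    def insert(self, virtual_page, frame):
        self.entries[virtual_page] = frame
        self.entries.move_to_end(virtual_page)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)         # evict the least recently used entry

    def flush(self):
        self.entries.clear()                         # e.g. when the page directory changes

tlb = TLB()
tlb.insert(0x3, 0x80)
print(tlb.lookup(0x3), tlb.lookup(0x4))              # 128 None
```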

A brief detail of the segmentation with paging mechanism used in the Intel Pentium system, showing the different tables used, the format of each table entry and its implications, along with the address translation mechanism and relevant figures, is given at the following web site: http://routledge.com/9780367255732.

 