BlueAdmiral.com

 

 

The Memory Manager
 

Last week's lecture examined the role of the Process Manager within the general model of an operating system. We considered a range of policies used by the Process Manager to undertake its responsibilities, and defined the differences between a multi-user and a single-user system. We examined the roles of the Process Scheduler and the Job Scheduler, and how they support the Process Manager in providing 'controlled' access to the CPU resource. We established that access to the CPU is based on a policy that aims, for example, to provide fair, equal, or prioritised access to the CPU. Finally, we examined the problems that can arise when attempting to share resources; in particular, the four conditions for deadlock.

The purpose of the Memory Manager
The memory manager (MM) has the job of allocating memory to each job to be executed, and then reclaiming that memory when the job is completed. Programs and data are stored in this part of the computer system. The MM allocates memory to jobs whilst ensuring that the programs in memory remain separate from one another. Memory management is critical to a system's performance. Performance will depend on:-

  • how much memory is available
  • how memory allocation is optimised.

Modern memory management must allow for multiple jobs and multiple users.

 

Functions of the Memory Manager

Ultimately, the memory manager must:

  • allocate memory to jobs
  • ensure available memory is used well
  • track the allocation of memory
  • maximise memory utilisation
  • ensure efficient re-use of memory
  • de-allocate memory when a job is finished
  • amalgamate free memory
  • allow for variations in re-use
  • cope with overflow
  • ensure a fair and efficient management process

 

Memory Properties

In previous sections we have already met three types of memory, each with particular properties:-

  • Cache - very fast, small in size
  • RAM - fast, large but limited
  • Disk - slow but unlimited in size

DMA
This technique allows peripherals to gain direct access to the main memory of the computer. When a device initiates Direct Memory Access (DMA) the processor relinquishes control of the bus to the device - so allowing for higher data rates. It is an I/O technique that allows a control unit to access main memory directly and so transfer data without intervention from the CPU.

Virtual memory
This technique is used for relocating the contents of main memory onto the disk or a backing store. It is not 'real' memory. The technique is used when the total memory required by all the programs exceeds the physical memory available.

Single block systems
Here a single job is placed into main memory at one time. The entire job is loaded and all memory is allocated to that one job. If a job is too large, then it cannot be loaded. The memory manager keeps track of the job from its start address to its end address.

Advantages include the following factors:-

  • A simple memory management system.
  • The system is easier to understand and implement.
  • It is well suited to PCs; no partition intrusion problems are likely.

The disadvantages include:

  • Large amounts of CPU time can be wasted through swapping jobs in and out of memory.
  • The CPU is idle during any I/O steps.
  • In the same way, peripheral devices are idle when the program is using the CPU.

Fixed partitions
At system initialisation the memory is divided into fixed partitions of various sizes, according to the typical programming needs of users. Partition sizes are static once the system is powered up; they can only be changed by completely shutting down the system, then reconfiguring it and powering it up again.

Fixed partitioning will:-

  • allow for multiple jobs in memory
  • split memory into fixed sized partitions - remaining in contiguous blocks
  • place one job (only) in each partition
  • protect jobs from jobs running in other partitions - partition intrusion is prevented
  • allocate jobs according to available memory, as assigned by the MM using a special table known as the Memory Allocation Table.

The Memory Allocation Table provides details such as:

  • partition size (for example in kilobytes)
  • memory address (location)
  • occupying job (job number)
  • partition status (free/busy)

Here the MM will track which partition is in use, which job has the partition and the start and end address of a memory allocation.
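This tracking can be sketched as follows, in Python (the partition sizes and job numbers here are invented for illustration). Each table entry records a partition's size, start address, occupant and status, and a job is placed in the first free partition large enough to hold it:

```python
# Sketch of a fixed-partition Memory Allocation Table (sizes are invented).

def make_table(sizes):
    """Build a partition table from a list of fixed partition sizes (in KB)."""
    table, addr = [], 0
    for size in sizes:
        table.append({"size": size, "addr": addr, "job": None, "status": "free"})
        addr += size
    return table

def allocate(table, job_id, job_size):
    """Place the job in the first free partition large enough, or return None."""
    for part in table:
        if part["status"] == "free" and part["size"] >= job_size:
            part["job"], part["status"] = job_id, "busy"
            return part["addr"]
    return None  # no suitable partition free: the job must wait

table = make_table([100, 25, 25, 50])  # four fixed partitions
allocate(table, "J1", 30)  # J1 takes the 100 KB partition: 70 KB lie idle inside it
allocate(table, "J2", 50)  # J2 takes the 50 KB partition exactly
```

Note how J1 illustrates internal fragmentation: the 70 KB left inside its partition cannot be given to any other job.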

The main advantage of fixed partitioning schemes is that they permit:-

  • multiprogramming
  • memory protection.

The disadvantages include:

  • inappropriate partition sizing prevents some jobs from running
  • internal fragmentation.

The effect of fragmentation is discussed below.

 


From the jobs available in the job queue, each job is allocated memory space in the first available partition that can accommodate its size. Even though Job 3 requires only 30k (and there is 70k of unused space in partition 1), it cannot run.

Careful consideration must be given to setting partition sizes; there is a need to reach some 'balance'. If, for example, most partitions are too small, then larger jobs will have a longer turnaround whilst waiting for the only partition 'big enough'. If, however, the partitions are too big, then memory is wasted: any unused memory within a partition cannot be allocated to another job. Reaching this happy balance will depend on the types of jobs processed and, of course, the agreed expected processing times.


Figure 1: Internal fragmentation

 

 

Dynamic partitions
Dynamic partitioning is similar to fixed partitioning, except that the partition sizes are dynamic (they change). A partition table of use is still maintained and memory is still kept in contiguous blocks. Here, jobs are given exactly as much memory as they require for processing. Jobs are placed into memory according to a predetermined policy, for example 'First Come First Served' (FCFS). Dynamic partitioning is more efficient, but external fragmentation and relocation issues have to be addressed.

Relocatable dynamic partitions
Under relocatable dynamic partitioning, the memory allocation scheme is responsible for 'bringing together' the available memory from the empty partitions and external fragmentation. Blocks are compacted together to make one block of memory large enough to accommodate some or all of the jobs waiting for memory.
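Compaction can be sketched as follows (the block layout and sizes here are invented). Busy blocks are slid down towards low memory, in address order, so that all the free space ends up as one block at the top:

```python
# Sketch of compaction under relocatable dynamic partitioning.

def compact(blocks, memory_size):
    """blocks: list of (job, start, size). Returns the relocated blocks and
    the single free region (start, size) produced by compaction."""
    addr, relocated = 0, []
    for job, _start, size in sorted(blocks, key=lambda b: b[1]):
        relocated.append((job, addr, size))  # move the job to the next free address
        addr += size
    return relocated, (addr, memory_size - addr)

blocks = [("J1", 0, 30), ("J2", 50, 20), ("J3", 90, 10)]  # gaps at 30-50 and 70-90
moved, free = compact(blocks, 120)
# J2 now starts at 30 and J3 at 50, leaving one free block of 60 at address 60
```

In a real system each moved job's addresses must also be adjusted (relocation), which is why this scheme carries extra overhead.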

External fragmentation
Fragmentation occurs when processes are continually swapped in and out of memory. Each swap out leaves behind a number of different-sized partitions; in other words, the dynamic allocation of memory creates unusable fragments of memory between allocated blocks. When swapping in, the process needs to search through all the different-sized partitions to find a fit, and the chosen partition is rarely filled exactly. The space between the end of a used partition and the start of the next partition is called external fragmentation. There has to be a plan in place to collect all the external fragmentation, allowing for future reallocation; amalgamation can be considered here.

Where the total free memory space is greater than the program size, but no suitably sized contiguous space is available to accommodate the program, amalgamation of free space is required. Amalgamation aims to provide the biggest free area possible. It will collect all free areas, including areas adjacent to another free area and areas that are isolated from all other areas.
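Amalgamation of adjacent free areas can be sketched as follows (the free-list values are invented). Free blocks that touch are merged into one larger area; isolated blocks are kept as they are:

```python
# Sketch of amalgamating adjacent free areas in a free list.

def amalgamate(free_blocks):
    """free_blocks: list of (start, size). Merge any blocks that touch."""
    merged = []
    for start, size in sorted(free_blocks):
        if merged and merged[-1][0] + merged[-1][1] == start:
            # this block starts where the previous free area ends: extend it
            merged[-1] = (merged[-1][0], merged[-1][1] + size)
        else:
            merged.append((start, size))  # isolated area: keep as its own entry
    return merged

free = [(0, 10), (10, 20), (50, 5)]
# (0,10) and (10,20) touch and merge into (0,30); (50,5) stays isolated
```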

Reallocation policy
The MM will reallocate the free areas using some form of allocation scheme. Schemes are based on algorithms which assume that the memory manager keeps the following two lists:-
1. Free memory blocks.
2. Busy memory blocks.

Some allocation schemes are described as follows:-

Fastest (or First Fit) memory allocation
This scheme searches from the beginning of the free block list and selects the first block of memory large enough to meet the job's requirements.

Most Efficient (or Best Fit) memory allocation
This scheme considers all the free memory blocks. It allocates the block that results in the least amount of wasted space. One problem with 'Best Fit' is the need to completely search the table to find the best fit.

Comparing first-fit and best-fit
First fit will run through the table of free space and allocate the first space that fits. Best fit will run through the whole table and allocate the space that results in the least amount of wastage. Which method is best? There is no straightforward answer - it must depend on the job mix. Recent technological advances have resulted in faster access times, so the scheme that best saves valuable resources, such as memory space, might be considered better.
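The two schemes can be compared over the same free list (the block sizes here are invented). First fit stops at the first block that is big enough; best fit scans every block and picks the one leaving the least waste:

```python
# Sketch comparing first-fit and best-fit allocation over a free-block list.

def first_fit(free_list, size):
    """Return the index of the first free block large enough, else None."""
    for i, block in enumerate(free_list):
        if block >= size:
            return i
    return None

def best_fit(free_list, size):
    """Return the index of the block leaving the least wasted space, else None."""
    candidates = [(block - size, i) for i, block in enumerate(free_list) if block >= size]
    return min(candidates)[1] if candidates else None

free = [40, 10, 30, 25]
first_fit(free, 20)  # stops at block 0 (40), wasting 20
best_fit(free, 20)   # scans the whole list and picks block 3 (25), wasting only 5
```

The trade-off is visible in the code: best fit wastes less space but must always examine the entire list.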

Virtual memory
Virtual memory derives from the days when computers had very small memories and it was necessary to deal with programs larger than memory. Most of the program would be stored on secondary memory devices and program chunks swapped in and out of memory. Chunks were originally called 'overlays', and swapping was originally the responsibility of the programmer. Swapping is now automated and invisible to the programmer, and this automated memory swapping became known as virtual memory. The virtual memory exists on the disk rather than in main memory. The processor addresses a 'piece' of its physical address space (on disk); the located piece (or page) is then transferred to a page frame in main memory. Virtual memory is implemented through the use of a page table.

 

There are three approaches to the use of virtual memory:

1. Paging.
2. Segmentation.
3. Combined segmentation and paging.

Paging allows the use of a continuous, one-dimensional address space, that is, one memory cell after another. Here, jobs are divided into fixed-sized blocks called pages, and the scheme allocates as many pages as a job requires. The page allocation does not have to be contiguous. Page sizes (which are fixed by the computer) might typically be in the range of 128 to 4096 memory locations. The virtual memory exists on the disk: the processor addresses a 'piece' of the disk's address space, and the located pieces are transferred to page frames in main memory, made possible through the use of a page table. Paging allows a job to load even if there is not enough physical memory; the restriction on maximum program size is removed, and program writers can consider the memory space as effectively infinite. This infinite RAM does not exist - it is virtual.
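The address translation performed via the page table can be sketched as follows (the page size and frame numbers are invented). A virtual address splits into a page number and an offset; the page table maps the page number to the frame holding that page:

```python
# Sketch of virtual-to-physical address translation via a page table.

PAGE_SIZE = 128  # memory locations per page, fixed by the machine

def translate(page_table, virtual_addr):
    """Split a virtual address into (page, offset) and map the page to a frame."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]           # frame currently holding this page
    return frame * PAGE_SIZE + offset  # physical address in main memory

page_table = {0: 5, 1: 2, 2: 7}  # page number -> page frame number
translate(page_table, 130)  # page 1, offset 2 -> frame 2 -> address 258
```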

The main advantages can be summarized as follows:

  • Improved efficiency.
  • The need for compacting is eliminated.
  • Internal fragmentation is reduced.

The main disadvantages include:

  • Increased H/W and S/W overheads.
  • Internal fragmentation may occur on the last page.

Page allocation tracking

Many allocation methods are used. Two are of interest:

1. Bit maps
Here a bit array is kept for all pages in each job. If a page is currently in use, its bit is set 'on'; a single bit thus indicates whether the virtual page is currently in main memory.
2. Linked lists
Here a marker is placed at the end of each page, from where the address of the next page is given.
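The bit-map approach can be sketched as follows (the number of pages is invented). One flag per virtual page records whether that page is currently resident in main memory:

```python
# Sketch of bit-map page tracking: one bit ('on'/'off') per virtual page.

bitmap = [0] * 8  # a job with eight virtual pages, none loaded yet

def load_page(page):
    bitmap[page] = 1  # page has been brought into main memory

def is_resident(page):
    return bitmap[page] == 1

load_page(3)
load_page(5)
# only pages 3 and 5 are now marked as resident
```

A real implementation would pack these flags into machine words, but the principle (one bit of state per page) is the same.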

Demand paging
Up to now we have loaded the whole job into memory. In practice this is not required: not all parts of a job run at the same time. Demand paging loads only part of a job into RAM; the rest is held in pages on the disk, in a page file. As the job runs, pages are swapped in and out of RAM.

Page removal policy
A policy needs to exist to decide which page to remove from RAM when a page fault requires a new page to be brought in. Several methods have been developed for removing pages; two methods include:-

1. First In First Out (FIFO)
This policy removes the page that has been in RAM the longest.

2. Least Recently Used (LRU)
This removes the page that has had the least amount of recent activity.
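Both policies can be sketched with the same simulation (the page reference string and frame count are invented). The front of the list holds the page to evict: the oldest page under FIFO, or the least recently used page under LRU, because LRU moves a page to the back every time it is referenced:

```python
# Sketch of FIFO and LRU page removal over a fixed number of page frames.

def simulate(refs, frames, policy):
    """Return the number of page faults for a page reference string."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            if policy == "LRU":
                memory.remove(page)
                memory.append(page)  # most recently used moves to the back
            continue
        faults += 1  # page fault: the page must be brought in
        if len(memory) == frames:
            memory.pop(0)  # evict front: oldest (FIFO) or least recent (LRU)
        memory.append(page)
    return faults

refs = [1, 2, 3, 1, 4, 1, 5]
simulate(refs, 3, "FIFO")  # 6 faults
simulate(refs, 3, "LRU")   # 5 faults: re-use of page 1 is rewarded
```

On this reference string LRU causes fewer faults because it keeps the frequently used page 1 in memory, which is exactly the behaviour the policy is designed to exploit.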

Segmentation
Many processors split job memory into segments, for example program code, data and stack. These segments can have a set of independent address spaces. Segments are swapped in and out of main memory in the same fashion as pages. One essential difference is that pages are all the same size and segments are not; a segment can actually change size during the execution of a program. Also, different segments can be protected in different ways, for example when dealing with data or program code.
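Because segments vary in size, each segment-table entry carries its own limit as well as a base address, which is also how per-segment protection is enforced. A sketch (with invented base addresses and limits):

```python
# Sketch of segment-table address translation with limit checking.

segment_table = {
    "code":  {"base": 1000, "limit": 400},
    "data":  {"base": 2000, "limit": 150},
    "stack": {"base": 3000, "limit": 100},
}

def translate_seg(segment, offset):
    """Map (segment, offset) to a physical address, checking the segment limit."""
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("segment violation")  # offset lies outside the segment
    return entry["base"] + offset

translate_seg("data", 20)  # -> physical address 2020
```

Growing a segment at run time simply means updating its limit (and possibly its base), something a fixed-size page never needs.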

Summary
This section has examined the typical role of the Memory Manager within the computer system. We have reviewed a range of memory components and techniques used by the memory manager, ranging from DMA to virtual memory and fixed partitioning through to dynamic partitioning. Finally we have explored some of the problems associated with partitioning and have examined some techniques used to overcome these problems.
