Introducing the Operating System
Section two examined the typical organization of a computer system and its main elements. We identified the role of the CPU and its main components. Popular processor technology was reviewed, discussing in particular the characteristics of Motorola and Intel chips. A range of memory issues was discussed, ranging from chip characteristics through to memory storage and retrieval capacities. We also examined the main connection between all these elements of the computer system, in other words the communications channel. Finally we examined the facilities in place to enable input, storage and output of data.
We will now consider how the operating system can support the computer system. Rather than focus on a particular operating system we will examine a general model of an operating system.
The Introduction of the Operating System
Introducing job card control
Initially computer systems did not have operating systems. A request to load a program was both time-consuming and prone to error. The computer system had to be manually set prior to executing each program. This included loading a bootstrap program and manually applying between 1 and 200 binary switches. Obviously while the switches were being applied the computer's processing power was not being used. During the early 1960s a number of preparation jobs were automated. The arrival of IBM's disk monitoring program allowed the storing of programs on disk. In addition job control cards were used to tell the monitor which job to work on next. This crude operating system loaded the program from disk and continued running the program until another job control card intervened.
The next major advancement was the introduction of spooling systems; the acronym deriving from Simultaneous Peripheral Operations On-Line. Spooling can be described as an early form of multiprogramming. Devices could be shared between a number of active programs, where output data could be temporarily placed on a storage device until the required device, such as the printer, was available. This allowed the output from the last job to print whilst processing of a new job continued. Three programs facilitated this task:
1. the input spool program, which read input data and copied it to a disk
2. the output spool program, which copied the output from the disk to the printer
3. the application program, which continued with the real processing work.
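The three-program arrangement above can be sketched with in-memory queues standing in for the disk spool areas. This is a minimal illustration only; the function and job names are invented, and real spoolers ran the three programs concurrently rather than in turn:

```python
from collections import deque

# Hypothetical spool areas: queues stand in for the disk files.
input_spool = deque()
output_spool = deque()

def input_spool_program(raw_jobs):
    """Read input data and copy it to the 'disk' (input queue)."""
    for job in raw_jobs:
        input_spool.append(job)

def application_program():
    """Do the real processing work on the next spooled job."""
    job = input_spool.popleft()
    output_spool.append(f"result of {job}")

def output_spool_program():
    """Copy finished output from the 'disk' to the printer."""
    while output_spool:
        print(output_spool.popleft())

input_spool_program(["job-A", "job-B"])
application_program()    # processes job-A
application_program()    # processes job-B
output_spool_program()   # the 'printer' drains the output spool
```

Because the application program only ever touches the queues, it never has to wait for the slow printer directly, which is the whole point of spooling.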
Improving CPU utilization
A move to get better CPU utilization followed. Next came the introduction of timesharing systems and multiprogramming systems. Timesharing (sometimes known as time-slicing) allowed a program a set amount of time using the CPU, after which the CPU would be taken away from the program. The status of the program (and any results) would then be written to a disk, allowing the CPU to begin processing the next program. Multiprogramming allowed for multiple jobs to be present in memory at any one time. A program would run until it required intervention from an I/O device. Whilst one program waited for I/O, the next program would have a turn at processing. Modern mainframe computers quite often use a combination of both multiprogramming and timesharing. Nowadays, the CPU can be described as the brains of the computer system, whereas the operating system is described as the central nervous system.
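The time-slicing idea above can be sketched as a simple round-robin loop. The quantum and the jobs' work amounts are invented for illustration:

```python
from collections import deque

def time_share(jobs, quantum):
    """Round-robin time-slicing: each job gets `quantum` units of
    CPU, then its remaining work is saved and the next job runs."""
    queue = deque(jobs.items())   # (name, remaining work units)
    order = []                    # which job held the CPU each turn
    while queue:
        name, remaining = queue.popleft()
        order.append(name)
        remaining -= quantum      # job uses its time slice
        if remaining > 0:         # state saved; job rejoins the queue
            queue.append((name, remaining))
    return order

# Three jobs with invented work amounts and a quantum of 2 units.
print(time_share({"A": 4, "B": 2, "C": 5}, quantum=2))
```

Each job makes steady progress every cycle, so each user has the impression of continuous use of the machine.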
Sharing and controlling resources
The operating system is now considered to be the overall manager of all the resources within the computer system. It can manage and protect resources in a multi-user environment. It facilitates the sharing of costly resources and information. It keeps track of who is using what resource, resolves conflict between users, allocates and de-allocates resources, and charges users for its services. It is basically a collection of system programs that co-ordinate and control all the computer operations. It turns the raw hardware into a usable computer. An operating system will aim to provide a suitable background in which user programs can be successfully executed.
Other objectives of an operating system are to:
provide a suitable interface between the user and the machine
ensure the most efficient use of its available resources
control the selection and operation of any required peripherals
decide the order that jobs should be run (scheduling)
load and run programs
deal with errors as and when they occur
maintain an audit of all tasks undertaken
enforce security mechanisms
insulate the user from detail they do not need to know
obtain optimum utilization of the computer's resources.
Strategies of Operating Systems
Timesharing and multiprogramming systems
Timesharing enables a computer to handle simultaneous users and peripherals. Each computer operation is performed in sequence, but the high speed of operation, combined with time-slicing, appears to provide a simultaneous multi-user service. In other words, timesharing provides multiple access to a computer system where each user in turn is allowed a slice of CPU time, although each user has the impression of continuous use of the computer system.
Any operating system that facilitates timesharing will have to use multiprogramming. A multiprogramming system appears to allow several programs to be active at the same time. Multiprogramming systems typically have only one single processor. Where a single processor is used, the CPU will work on only one program at any one time, according to the priorities and actual requirements of the programs. In other words, programs (or jobs) take it in turns; one job uses a resource for a while, while other jobs wait in a holding area. An obvious benefit of multiprogramming is, of course, that better use is made of a computer's resources. The environment will allow several programs to reside in main memory and to share the CPU. Multiprogramming is thus better able to keep the CPU active. Program execution time is saved because the next program is already loaded and ready to run as soon as the active program enters the waiting state. Facilities exist to support both multiple and single users.
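The gain from overlapping one job's I/O waits with another job's computation can be illustrated with some simple arithmetic. The jobs, their CPU and I/O burst times, and the assumption of perfect overlap are all invented for this sketch:

```python
def serial_time(jobs):
    """Uniprogramming: each job runs to completion in turn, so the
    CPU sits idle during every I/O wait."""
    return sum(t for bursts in jobs.values() for _, t in bursts)

def overlapped_time(jobs):
    """Idealised multiprogramming: every I/O wait overlaps with some
    other job's computation, so elapsed time is bounded below by the
    larger of total CPU demand and total I/O demand (perfect overlap
    is assumed purely for illustration)."""
    cpu = sum(t for b in jobs.values() for kind, t in b if kind == "cpu")
    io = sum(t for b in jobs.values() for kind, t in b if kind == "io")
    return max(cpu, io)

# Two invented jobs, each a list of ('cpu'|'io', duration) bursts.
jobs = {"A": [("cpu", 3), ("io", 2), ("cpu", 1)],
        "B": [("cpu", 2), ("io", 4)]}
print(serial_time(jobs), overlapped_time(jobs))
```

Here the serial schedule takes 12 time units while the overlapped one needs only 6, since the CPU computes for one job whenever the other is waiting on I/O.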
Multi-parallel and concurrent processing
Here there will be a number of processors all managed by the process manager. The process manager will facilitate the simultaneous running of as many jobs as there are processors.
Batch processing systems
Jobs are collected together (over a set period of time), perhaps onto tape, disk or another secondary storage device. During the prescribed processing time (perhaps the evening, when demand is lower) the jobs will be released and executed in sequential order. The jobs themselves may be input via some specified storage area. In many cases, both multiple and single users can be facilitated with this type of operating system. A number of modern operating systems support batch processing, including DOS, UNIX, MVS, DEC VMS, Windows 95 and Windows NT. Consider the autoexec.bat file on your PC, where a collection of program instructions are batched together in one file and executed automatically. Indeed the autoexec.bat file forms one of a batch of files that are automatically executed on start-up of your PC.
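The collect-then-release pattern above can be sketched in a few lines. The job names and the work they do are invented; a real batch system would read jobs from tape or disk rather than from a Python list:

```python
# Batch processing sketch: jobs are collected over time, then
# released and executed one after another at a prescribed time.
batch_queue = []

def submit(job_name, work):
    """Collect a job into the batch (standing in for spooling the
    job onto tape or disk)."""
    batch_queue.append((job_name, work))

def run_batch():
    """At the prescribed time, execute every queued job in order."""
    results = []
    for name, work in batch_queue:
        results.append((name, work()))
    batch_queue.clear()           # the batch area is emptied
    return results

submit("payroll", lambda: "payslips produced")
submit("backup", lambda: "files archived")
print(run_batch())
```

Nothing runs at submission time; all the work happens in one sequential sweep, which is exactly what makes batch processing cheap to schedule during off-peak hours.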
Single user systems
In general PCs are single user computer systems and do not have facilities to support multiple users. However, improvements in technology and operating systems have resulted in far more sophisticated computer systems that in some cases will allow for multitasking and distributed processing through the additional use of networking technologies.
Real-time operating systems
These are used in time-critical situations, for example in patient monitoring systems where immediate response is life-critical. Other examples include air defence systems, air traffic control, and telecommunications. This type of operating system depends heavily on meeting performance deadlines, that is, a response will be required by a set time. Process control and scheduling technologies play leading roles in this type of operating system.
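The deadline requirement can be stated as a one-line check. The function name and the timing figures are invented for illustration; real real-time systems apply this kind of test as part of admission control and scheduling analysis:

```python
def meets_deadline(start, work_time, deadline):
    """A hard real-time response is only useful if it arrives by
    its deadline: start + work_time must not exceed it."""
    return start + work_time <= deadline

print(meets_deadline(start=0, work_time=5, deadline=10))  # in time
print(meets_deadline(start=8, work_time=5, deadline=10))  # too late
```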
Network operating systems
This type of operating system has developed as a result of advancements in communications technology and existing timesharing technology. It should not be confused with traditional multiprocessing operating systems, although in some installations multiprocessing is possible. Table 6.1 (over the page) details a number of more popular operating systems, and gives an indication of their capabilities and what platform they are compatible with. The list is by no means definitive.
Examples of operating systems
Single user = 1 user running one application/program.
Note: these operating systems are subject to upgrades that may change the facilities available with the operating system.
The General Model of an Operating System
The previous section clearly shows the range of capabilities that can be expected from a number of operating systems. It should also demonstrate how complex an operating system must be to undertake such duties. As such, we will not be examining specific operating systems; rather we will be looking at a general model of an operating system. The general model will include the support of four managers:
Process Manager (PM)
Memory Manager (MM)
Device Manager (DM)
File Manager (FM).
All the managers work together to support the overall function of the operating system; in addition each has its own distinct role. The managers will continuously monitor their resources. They will apply policies to establish which job gets what resource, when, and for how long. They will all have a procedure for allocating and reallocating resources.
The process manager (PM)
The PM is responsible for scheduling jobs (or programs), scheduling processes within jobs, managing job interrupts, and, where applicable, managing multiple processors. The PM also has two support supervisors, namely the Job Scheduler and the Process Scheduler. The Job Scheduler is a higher-level scheduler and will select a job from an incoming queue and sequence the job according to the available resources. It tries to achieve a balanced mix of both computation-reliant and I/O-reliant jobs. The Process Scheduler is a low-level resource scheduler. It determines the order in which processes may run. It allocates the CPU resource to a process and decides when the process should be interrupted.
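One common way to realise a low-level process scheduler is with a priority queue. The sketch below is illustrative only: the class name, the process names, and the lower-number-is-higher-priority convention are assumptions, not features of any particular operating system:

```python
import heapq

class ProcessScheduler:
    """Sketch of a Process Scheduler: admits processes and
    dispatches the highest-priority one to the CPU."""

    def __init__(self):
        self._queue = []
        self._count = 0            # tie-breaker keeps FIFO order

    def admit(self, name, priority):
        """Lower number = higher priority (a common convention)."""
        heapq.heappush(self._queue, (priority, self._count, name))
        self._count += 1

    def dispatch(self):
        """Pick the next process to receive the CPU, or None."""
        if not self._queue:
            return None
        _, _, name = heapq.heappop(self._queue)
        return name

s = ProcessScheduler()
s.admit("editor", priority=2)
s.admit("interrupt-handler", priority=0)
s.admit("batch-report", priority=5)
print(s.dispatch())   # the highest-priority process runs first
```

The counter tie-breaker ensures that processes of equal priority are dispatched in arrival order, a simple way of keeping the scheduler fair.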
The device manager (DM)
The DM aims to maintain a balance of supply and demand between jobs and available resources. In other words it matches resources with job requirements. The DM will manage all peripheral activity; this is achieved by regularly tracking the status of all on-line devices, determining the device policy, allocating device resources and de-allocating device resources on completion of a task. Device management becomes more complicated as different devices may be used in a variety of ways - a device may be used on a dedicated basis or shared on a virtual basis. A dedicated device will serve one job only for its duration of execution. A shared device may be shared between processes (or different jobs) and so may result in conflict issues. A virtual device provides a combination of both dedicated and shared facilities, where the output from one job may be held in a buffer. This allows the device to begin working on another job, then resume with the previous job when ready.
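The status tracking and allocation duties described above can be sketched as a small device-status table. The device and job names are invented, and a real DM would also queue waiting requests rather than simply refusing them:

```python
# Device-status table: the DM tracks each on-line device and
# records which job, if any, currently holds it (None = free).
devices = {"printer": None, "tape-drive": None}

def allocate(device, job):
    """Give the device to the job on a dedicated basis, or refuse
    if it is already held (a conflict the DM must resolve)."""
    if devices.get(device) is None:
        devices[device] = job
        return True
    return False                   # busy: the job must wait

def deallocate(device):
    """Return the device to the free pool on completion of a task."""
    devices[device] = None

print(allocate("printer", "job-1"))   # the printer was free
print(allocate("printer", "job-2"))   # conflict: printer is held
deallocate("printer")
```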
The memory manager (MM)
The MM is responsible for allocating memory and ensuring that active programs in memory are kept apart. The MM will use a number of policies to ensure maximum allocation of memory and efficient re-use of memory. Policies exist to deal with single-block memory, where sequential access to memory is available job by job. Partitioned memory exists where a number of partitions can be created within memory, each accommodating a different job. Virtual memory provides additional memory that is not stored entirely in main memory, although the user has the illusion that large amounts of memory exist. Virtual memory will reside in some form of secondary storage.
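One simple policy for partitioned memory is first-fit: place a job in the first free partition large enough to hold it. The partition sizes (in kilobytes) and job sizes below are invented for illustration:

```python
# First-fit partition allocation, one common memory-manager policy.
# Each partition is (size_in_K, is_free).
def first_fit(partitions, job_size):
    """Return the index of the first free partition big enough for
    the job, marking it allocated, or None if the job must wait."""
    for i, (size, free) in enumerate(partitions):
        if free and size >= job_size:
            partitions[i] = (size, False)   # mark allocated
            return i
    return None

partitions = [(100, True), (500, True), (200, True)]
print(first_fit(partitions, 150))   # the 500K partition is chosen
print(first_fit(partitions, 150))   # next job gets the 200K one
```

Note the internal fragmentation: the first 150K job wastes 350K of its partition, which is why real memory managers also consider policies such as best-fit.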
The file manager (FM)
The FM controls every file in the system. It knows where (logically) and how (physically) every file is located. It will operate a storage policy to ensure efficient use of storage. It will allocate a file to a job, maintain a file usage log, then de-allocate the file and return it to store. Finally it will communicate to the system when the file again becomes available.
This section has introduced the role of the operating system. We have examined the early use of job card control, which went on to be used with IBM's disk monitoring program in the 1960s. The next major advancement identified was the use of spooling systems, which facilitated the sharing of a processor between a number of programs and a number of devices. The need for improved CPU utilization was identified. We have described how the operating system supports the sharing and controlling of the computer system's resources, giving examples of a range of operating systems and the processing techniques they support. Processing techniques mentioned include timesharing, multiprogramming, concurrent processing and batch processing. We introduced the general model of an operating system and its four key elements: the Process Manager, Memory Manager, Device Manager and File Manager. Finally we have identified the need for the managers to work together in support of the overall computer system, whilst each maintains individual responsibility for the resources under its control.