OPERATING SYSTEM


Operating systems are so ubiquitous in computer operations that one hardly realizes their presence. Most likely you have already interacted with one or more operating systems. Names like DOS and UNIX are probably familiar to you; these are the names of very popular operating systems.

  • Try to recall what happens when you switch on a computer, before you start working on it. In a typical personal computer scenario, this is what happens: some information appears on the screen, followed by a memory-counting activity. The keyboard, disk drives, printers and other similar devices are verified for proper operation. These activities always occur whenever the computer is switched on or reset. Some machines may perform additional activities as well. These activities are called power-on routines. Why do these activities always happen? You will learn about it elsewhere in this unit.
  • You know that a computer does not do anything without being properly instructed. Thus, for each of the above power-on activities too, the computer must have instructions. These instructions are stored in a non-volatile memory, usually a ROM. The CPU takes one instruction from this ROM and executes it before taking the next. ROMs are of finite size; they can store only a few kilobytes of instructions. One by one the CPU executes these instructions. Once these instructions are exhausted, the CPU must obtain instructions from somewhere else. Where are these instructions stored, and what are their functions?
  • Usually these instructions are stored on a secondary storage device such as a hard disk, floppy disk or CD-ROM. These instructions are collectively known as the operating system, and their primary function is to provide an environment in which users may execute their own instructions. Once the operating system is loaded into main memory, the CPU starts executing it. The operating system then runs in a loop, each time taking instructions in the form of commands or programs from the users and executing them; the loop continues until the computer is shut down (a minimal sketch of this loop appears after this list).
  • To exploit the most from a computer, therefore, a sound understanding of operating systems is a must.
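As an illustration of this command loop, here is a minimal sketch in C. It is not the code of any real operating system: the shutdown command and the run_program helper are invented for illustration, and a real system would load and execute a separate program rather than call a function.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for loading and executing a user program. */
    static void run_program(const char *name) {
        printf("running %s ...\n", name);
    }

    int main(void) {
        char command[128];

        /* The operating system's outer loop: take a command from the
           user, execute it, and repeat until shutdown is requested.  */
        for (;;) {
            printf("> ");                             /* prompt the user */
            if (fgets(command, sizeof command, stdin) == NULL)
                break;                                /* end of input    */
            command[strcspn(command, "\n")] = '\0';   /* strip newline   */

            if (strcmp(command, "shutdown") == 0)     /* user ends loop  */
                break;
            run_program(command);                     /* execute command */
        }
        return 0;
    }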

 

Operating Systems as an Extended Machine and Resource Manager

An operating system is the most important program in a computer system. It is the one program that runs all the time: it runs as long as the computer is operational and exits only when the computer is shut down.

 

  • In general, however, there is no completely adequate definition of an operating system. Operating systems exist because they are a reasonable way to solve the problem of creating a usable computing system.
  • The fundamental goal of computer systems is to execute user programs and to make solving user problems easier. The hardware of a computer is equipped with extremely capable resources: memory, CPU, I/O devices, etc. All these hardware units interact with each other in a well-defined manner. Bare hardware alone, however, is not particularly easy to use for solving problems, so application programs are developed. These various programs require certain common operations, such as those controlling the I/O devices. The common functions of controlling and allocating resources are then brought together into one piece of software: the operating system.
  • It is easier to define operating systems by their functions, i.e., by what they do, than by what they are. The primary goal of an operating system is to make the computer easier for the users to operate. Operating systems exist because they are supposed to make it easier to compute with them than without them. This view is particularly clear when you look at operating systems for small personal computers.
  • Efficient operation of the computer system is a secondary goal of an operating system. This goal is particularly important for large, shared multi-user systems. These systems are typically expensive, so it is desirable to make them as efficient as possible.
  • Operating systems and computer architecture have had a great deal of influence on each other. To facilitate the use of the hardware, operating systems were developed.  As operating systems were designed and used, it became obvious that changes in the design of the hardware could simplify them.

 

Operating systems are the programs that make computers operational, hence the name.

  • Without an operating system, the hardware of a computer is just an inactive electronic machine, possessing great computational power but doing nothing for the user. All it can do is execute the fixed set of instructions stored in its internal memory (ROM: Read Only Memory) each time you switch the power on, and nothing else.
  • Operating systems are programs (fairly complex ones) that act as an interface between the user and the computer hardware. They sit between the user and the hardware of the computer, providing an operational environment to the users and application programs. For a user, therefore, a computer is nothing but the operating system running on it. In this sense, the operating system is an extended machine.
  • Users do not interact with the hardware of a computer directly, but through the services offered by the operating system. This is because the language that users employ is different from that of the hardware. Whereas users prefer to use natural language or near-natural language for interaction, the hardware uses machine language. It is the operating system that does the necessary translation back and forth and lets the user interact with the hardware. The operating system speaks the users’ language on the one hand and machine language on the other. It takes instructions in the form of commands from the user, translates them into machine-understandable instructions, gets these instructions executed by the CPU, and translates the result back into a user-understandable form.
  • A user can interact with a computer only if he/she understands the language of the resident operating system. You cannot interact with a computer running the UNIX operating system, for instance, if you do not know the ‘UNIX language’, i.e., UNIX commands. A UNIX user can always interact with a computer running UNIX, no matter what type of computer it is. Thus, for a user, the operating system itself is the machine: an extended machine, as shown in Figure 1.

 


 

Figure 1: Extended-machine view of operating system.


Operating systems are also the computer’s resource manager.

  • The computer hardware is made up of physical electronic devices, viz. memory, microprocessor, magnetic disks and the like. These functional components are referred to as the resources available to the computer for carrying out its computations. In digital computers, all the hardware units interact with each other in a very complex way, in terms of electric signals (i.e. voltage and current) usually coded into binary format (i.e. 0 and 1).
  • In order to interact with the computer hardware and get a computational job executed by it, the job needs to be translated into this binary form, called machine language. Thus, the instructions and data of the job must be converted into some binary form, which then must be stored in the computer’s main memory. The CPU must then be directed to execute the instructions loaded in the memory. A computer, being a machine after all, does not do anything by itself; it is the operating system that decides which resource is to be allocated to which program, when and how, in such a way that the resources are utilized optimally and efficiently (a toy illustration of such bookkeeping follows this list).
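To make the resource-manager role concrete, the following toy sketch shows, in miniature, the kind of bookkeeping an operating system performs when it allocates memory to programs. The fixed-block scheme and the numbers are invented for illustration; real allocators are far more sophisticated.

    #include <stdio.h>

    #define NUM_BLOCKS 8   /* pretend main memory is 8 fixed-size blocks */

    /* owner[i] holds the id of the program using block i, or 0 if free. */
    static int owner[NUM_BLOCKS];

    /* Allocate one free block to a program; return its index, or -1. */
    static int allocate_block(int program_id) {
        for (int i = 0; i < NUM_BLOCKS; i++) {
            if (owner[i] == 0) {          /* first free block found */
                owner[i] = program_id;
                return i;
            }
        }
        return -1;                        /* no memory available */
    }

    /* Release every block held by a terminating program. */
    static void release_blocks(int program_id) {
        for (int i = 0; i < NUM_BLOCKS; i++)
            if (owner[i] == program_id)
                owner[i] = 0;
    }

    int main(void) {
        int b = allocate_block(1);        /* program 1 asks for memory */
        printf("program 1 got block %d\n", b);
        release_blocks(1);                /* program 1 terminates      */
        return 0;
    }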

 

 

Operating Systems Classification

The variations and differences in the nature of different operating systems may give the impression that all operating systems are absolutely different from each other. But this is not true. All operating systems contain the same components, whose functionalities are almost the same. For instance, all operating systems perform the functions of storage management, process management, protection of users from one another, etc. The procedures and methods used to perform these functions might be different, but the fundamental concepts behind these techniques are just the same. Operating systems in general perform similar functions but may have distinguishing features. Therefore, they can be classified into different categories on different bases. Let us quickly look at the different types of operating systems.

 

Single User – Single Processor System


The simplest of all computer systems is a single-user, single-processor system. It has a single processor, runs a single program and interacts with a single user at a time. The operating system for this system is very simple to design and implement. However, the CPU is not utilized to its full potential, because it sits idle for most of the time (Figure 2).


 

Figure 2: Single user – single processor system

In this configuration, all the computing resources are available to the user all the time. Therefore, the operating system has a very simple responsibility. A representative example of this category of operating system is MS-DOS.

 

Batch Processing Systems

  • The main function of a batch processing system is to automatically execute the jobs in a batch one after another (Figure 3). The main idea behind a batch processing system is to reduce the interference of the operator during the processing or execution of jobs by the computer. All functions of a batch processing system are carried out by the batch monitor. The batch monitor permanently resides in the low end of the main store. The current job of the batch is executed in the remaining storage area. In other words, the batch monitor is responsible for controlling the entire environment of the system operation. The batch monitor accepts batch initiation commands from the operator, processes the jobs, and performs job termination and batch termination.
  • In a batch processing system, we generally make use of the term ‘turnaround time’. It is defined as the time from when a user job is submitted to the time when its output is given back to the user. This time includes the batch formation time, the time taken to execute the batch, the time taken to print results and the time required to physically sort the printed outputs that belong to different jobs. As the printing and sorting of the results is done for all the jobs of a batch together, the turnaround time for a job becomes a function of the execution time requirements of all jobs in the batch. You can reduce the turnaround time for different jobs by recording the jobs on faster input/output media like magnetic tape or disk surfaces. It takes much less time to read a record from these media: roughly five milliseconds for a magnetic tape and about one millisecond for a fast fixed-head disk, compared to the 50-100 milliseconds taken by a card reader or printer. Thus, using a disk or tape reduces the amount of time the central processor has to wait for an input/output operation to finish before resuming processing. This reduces the time taken to process a job, which in turn brings down the turnaround times for all the jobs in the batch (a rough worked calculation follows).
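A rough calculation, using the figures quoted above, shows why faster input/output media shorten the turnaround time. The number of records is invented for illustration.

    #include <stdio.h>

    int main(void) {
        double records = 10000.0;    /* records read by one batch (assumed)    */
        double card_ms = 75.0;       /* ~50-100 ms per record on a card reader */
        double disk_ms = 1.0;        /* ~1 ms per record on a fixed-head disk  */

        printf("card reader: %.0f s of I/O wait\n", records * card_ms / 1000.0);
        printf("disk:        %.0f s of I/O wait\n", records * disk_ms / 1000.0);
        return 0;
    }

With these assumed figures, the same batch spends about 750 seconds waiting on a card reader but only about 10 seconds waiting on a disk.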


 

Figure 3: Batch processing system

  • Another term that is commonly used in a batch processing system is job scheduling. Job scheduling is the process of sequencing jobs so that they can be executed on the processor. Jobs are normally sequenced on a first-come-first-served (FCFS) basis, because of the sequential nature of the batch: the batch monitor always starts the next job in the batch. However, in exceptional cases, you could also arrange the different jobs in the batch depending upon the priority of each job. Sequencing jobs according to some criterion requires scheduling the jobs at the time of creating or executing a batch (a sketch of an FCFS batch monitor loop appears after this list).
  • On the basis of the relative importance of jobs, certain ‘priorities’ could be set for each batch of jobs. Several batches could be formed on the same criterion of priority, so that the batch having the highest priority is made to run earlier than other batches. This would give better turnaround service to the selected jobs.
  • Now, we discuss the concept of storage management. At any point of time, the main store of the computer is shared by the batch monitor program and the current user job of the batch. The big question that arises is: how much storage should be kept for the monitor program, and how much should be provided for the user jobs of a batch? If too much main storage is given to the monitor, the user programs will not get enough storage. Therefore, an overlay structure has to be devised so that sections of monitor code that are not needed at the same time do not occupy storage simultaneously.
  • Next, we discuss the concept of sharing and protection. The efficiency of utilization of a computer system is recognized by its ability to share the system’s hardware and software resources amongst its users. Whenever the idea of sharing system resources comes to mind, certain doubts also arise about the fairness and security of the system. Every user wants all reasonable requests to be taken care of, and wants no intentional or unintentional act of other users to fiddle with his or her data. A batch processing system guarantees the fulfillment of these user requirements.
  • All the user jobs are performed one after the other. There is no simultaneous execution of more than one job at a time; all the system resources like storage, I/O devices, the central processing unit, etc. are shared sequentially or serially. This is how sharing of resources is enforced in a batch processing system. Now arises the question of protection. Even though the jobs are processed serially, protection can still be violated. Suppose there are two users A and B: user A creates a file of his own, and user B deletes the file created by user A. Many other similar instances can occur. So, the files and other data of all the users should be protected against unauthorized usage. In order to avoid such loss of protection, each user is bound by certain rules and regulations, which take the form of a set of control statements that every user is required to follow.
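The sketch below illustrates the FCFS sequencing performed by the batch monitor, as described above. The job list and the execute_job helper are invented; a real monitor would read control statements and load each job into the user area of the store.

    #include <stdio.h>

    struct job {
        int id;
        int run_time;    /* execution time in seconds (assumed known) */
    };

    /* Hypothetical stand-in for loading and running one job. */
    static void execute_job(const struct job *j) {
        printf("running job %d for %d s\n", j->id, j->run_time);
    }

    int main(void) {
        /* Jobs are stored in arrival order, so FCFS sequencing is
           simply a walk through the array.                        */
        struct job batch[] = { {1, 30}, {2, 10}, {3, 50} };
        int n = sizeof batch / sizeof batch[0];
        int clock = 0;

        for (int i = 0; i < n; i++) {     /* the batch monitor's loop */
            execute_job(&batch[i]);
            clock += batch[i].run_time;
            /* Output is printed and sorted only after the whole batch,
               so every job's turnaround time depends on all the jobs. */
            printf("job %d done at t=%d s\n", batch[i].id, clock);
        }
        return 0;
    }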

 

Multiprogramming Operating System

The objective of a multiprogramming operating system is to increase the system utilization efficiency. The batch processing system reduces the CPU idle time between jobs by limiting operator interaction; however, it cannot reduce the idle time due to I/O operations. When some I/O is being performed by the currently executing job of a batch, the CPU sits idle with no work to do. A multiprogramming operating system tries to eliminate such idle times by providing multiple computational tasks for the CPU to perform. This is achieved by keeping multiple jobs in the main store. When the job currently executing on the CPU needs some I/O, the CPU passes the requirement over to the I/O processor. While the I/O operation is being carried out, the CPU is free to carry out some other job. The jobs must be independent of one another, so that the CPU and I/O activities do not interfere; otherwise, the concurrency could lead to erroneous situations and time-dependent errors. The toy simulation below illustrates the idea.
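In the following sketch, the job table and burst counts are invented, and real systems switch between jobs via interrupts rather than a polling loop like this one.

    #include <stdio.h>

    enum state { READY, WAITING_IO, DONE };

    struct job { int id; enum state st; int bursts_left; };

    int main(void) {
        struct job jobs[] = { {1, READY, 2}, {2, READY, 2}, {3, READY, 1} };
        int n = 3, finished = 0;

        while (finished < n) {
            for (int i = 0; i < n; i++) {
                if (jobs[i].st == WAITING_IO) {
                    jobs[i].st = READY;        /* pretend the I/O finished */
                    continue;
                }
                if (jobs[i].st != READY)
                    continue;

                printf("CPU runs a burst of job %d\n", jobs[i].id);
                if (--jobs[i].bursts_left == 0) {
                    jobs[i].st = DONE;
                    finished++;
                } else {
                    /* The job starts an I/O operation; instead of sitting
                       idle, the CPU moves on to another ready job.       */
                    printf("job %d waits for I/O\n", jobs[i].id);
                    jobs[i].st = WAITING_IO;
                }
            }
        }
        return 0;
    }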

 

Some of the most popular multiprogramming operating systems are UNIX, VMS, Windows NT, etc.

A multiprogramming supervisor has a very difficult job of managing all the activities that take place simultaneously in the system. It has to monitor many different activities and react to a large number of different situations in the course of its working. Among other things, the multiprogramming supervisor has to look after the following control functions: keeping several jobs in the main store at once and protecting them from one another, scheduling the CPU among the ready jobs, initiating and supervising the I/O operations of the jobs, and handling the interrupts that signal I/O completion.


Time Sharing or Multitasking System

  • Time sharing, or multitasking, is a logical extension of multiprogramming. Multiple jobs are executed by the CPU switching between them, but the switches occur so frequently that the users may interact with each program while it is running.
  • An interactive, or hands-on, computer system provides on-line communication between the user and the system. The user gives instructions to the operating system or to a program directly, and receives an immediate response.
  • Usually, a keyboard is used to provide input, and a display screen (such as a cathode-ray tube (CRT), or monitor) is used to provide output. When the operating system finishes the execution of one command, it seeks the next “control statement” not from a card reader, but rather from the user's keyboard. The user gives a command, waits for the response, and decides on the next command, based on the result of the previous one. The user can easily experiment, and can see results immediately. Most systems have an interactive text editor for entering programs, and an interactive debugger for assisting in debugging programs.
  • If users are to be able to access both data and code conveniently, an on-line file system must be available. A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data. Data files may be numeric, alphabetic, or alphanumeric. Files may be free-form, such as text files, or may be rigidly formatted. In general, a file is a sequence of bits, bytes, lines, or records whose meaning is defined by its creator and user. The operating system implements the abstract concept of a file by managing mass-storage devices, such as tapes and disks. Files are normally organized into logical clusters, or directories, which make them easier to locate and access. Since multiple users have access to files, it is desirable to control by whom and in what ways files may be accessed (a minimal sketch of file access through the operating system appears at the end of this list). Batch systems are appropriate for executing large jobs that need little interaction. The user can submit jobs and return later for the results; it is not necessary for the user to wait while the job is processed.
  • Interactive jobs tend to be composed of many short actions, where the results of the next command may be unpredictable. The user submits the command and then waits for the results. Accordingly, the response time should be short—on the order of seconds at most.
  • An interactive system is used when a short response time is required. Early computers with a single user were interactive systems. That is, the entire system was at the immediate disposal of the programmer/operator. This situation allowed the programmer great flexibility and freedom in program testing and development.  But, as we saw, this arrangement resulted in substantial idle time while the CPU waited for some action to be taken by the programmer/operator. Because of the high cost of these early computers, idle CPU time was undesirable. Batch operating systems were developed to avoid this problem. Batch systems improved system utilization for the owners of the computer systems.
  • Time-sharing systems were developed to provide interactive use of a computer system at a reasonable cost. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.
  • Each user has at least one separate program in memory. A program that is loaded into memory and is executing is commonly referred to as a process. When a process executes, it typically executes for only a short time before it either finishes or needs to perform I/O. I/O may be interactive; that is, output is to a display for the user and input is from a user keyboard.
  • Since interactive I/O typically runs at people speeds, it may take a long time to complete. Input, for example, may be bounded by the user's typing speed; five characters per second is fairly fast for people, but is incredibly slow for computers. Rather than let the CPU sit idle when this interactive input takes place, the operating system will rapidly switch the CPU to the program of some other user.
  • A time-shared operating system allows many users to share the computer simultaneously. Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user. As the system switches rapidly from one user to the next, each user is given the impression that she has her own computer, whereas actually one computer is being shared among many users (a round-robin scheduling sketch at the end of this list illustrates the rotation).
  • The idea of time-sharing was demonstrated as early as 1960, but since time-shared systems are difficult and expensive to build, they did not become common until the early 1970s. As the popularity of time-sharing has grown, researchers have attempted to merge batch and time-shared systems.  Many computer systems that were designed as primarily batch systems have been modified to create a time-sharing subsystem.  For example, IBM's OS/360, a batch system, was modified to support the time-sharing option (TSO). At the same time, time-sharing systems have often added a batch subsystem. Today, most systems provide both batch processing and time sharing, although their basic design and use tends to be one or the other type.
  • Time-sharing operating systems are even more complex than multiprogrammed operating systems. As in multiprogramming, several jobs must be kept simultaneously in memory, which requires some form of memory management and protection. So that a reasonable response time can be obtained, jobs may have to be swapped in and out of main memory.
  • Many universities and businesses have large numbers of workstations tied together with local-area networks. As PCs gain more sophisticated hardware and software, the line dividing workstations from PCs is blurring.
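Here is the round-robin sketch promised above: each process in turn receives a small time slice, and the rapid rotation gives every user the impression of a private machine. The quantum and the process table are invented for illustration.

    #include <stdio.h>

    struct process { int id; int time_needed; };

    int main(void) {
        struct process procs[] = { {1, 5}, {2, 3}, {3, 4} };
        int n = 3, remaining = n;
        int quantum = 2;                        /* time slice in ticks (assumed) */

        while (remaining > 0) {
            for (int i = 0; i < n; i++) {       /* rotate among the users */
                if (procs[i].time_needed <= 0)
                    continue;
                int slice = procs[i].time_needed < quantum
                          ? procs[i].time_needed : quantum;
                procs[i].time_needed -= slice;  /* run this process briefly */
                printf("process %d runs for %d tick(s)\n", procs[i].id, slice);
                if (procs[i].time_needed == 0) {
                    printf("process %d finished\n", procs[i].id);
                    remaining--;
                }
            }
        }
        return 0;
    }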
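And here is the minimal file-access sketch promised in the bullet on on-line file systems: a program asks the operating system to create, write, and read back a file, seeing only a named sequence of bytes while the system manages the underlying disk blocks. The file name is arbitrary.

    #include <stdio.h>

    int main(void) {
        /* The operating system maps this name to blocks on a disk;
           the program sees only a sequence of bytes.               */
        FILE *f = fopen("notes.txt", "w");
        if (f == NULL)
            return 1;
        fputs("a file is a sequence of bytes\n", f);
        fclose(f);

        char line[64];
        f = fopen("notes.txt", "r");            /* reopen for reading */
        if (f != NULL) {
            if (fgets(line, sizeof line, f))
                printf("%s", line);
            fclose(f);
        }
        return 0;
    }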

 

Parallel Systems

  • Most systems to date are single-processor systems; that is, they have only one main CPU. However, there is a trend toward multiprocessor systems. Such systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices. These systems are referred to as tightly coupled systems.
  • There are several reasons for building such systems. One advantage is increased throughput. By increasing the number of processors, we hope to get more work done in a shorter period of time. The speed-up ratio with n processors is not n, however, but rather is less than n. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors. Similarly, a group of n programmers working closely together does not result in n times the amount of work being accomplished (a small calculation after this list illustrates the point).
  • Multiprocessors can also save money compared to multiple single systems because the processors can share peripherals, cabinets, and power supplies. If several programs are to operate on the same set of data, it is cheaper to store those data on one disk and to have all the processors share them, rather than to have many computers with local disks and many copies of the data.
  • Another reason for multiprocessor systems is that they increase reliability. If functions can be distributed properly among several processors,  then the failure of one processor will not halt the system, but rather will only slow it down. If we have 10 processors and one fails, then each of the remaining nine processors must pick up a share of the work of the failed processor.  Thus, the entire system runs only 10 percent slower, rather than failing altogether.
  • This ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Systems that are designed for graceful degradation are also called fault-tolerant.
  • Continued operation in the presence of failures requires a mechanism to allow the failure to be detected, diagnosed, and corrected (if possible). The Tandem system uses both hardware and software duplication to ensure continued operation despite faults. The system consists of two identical processors, each with its own local memory. The processors are connected by a bus. One processor is the primary, and the other is the backup. Two copies are kept of each process; one on the primary machine and the other on the backup. At fixed checkpoints in the execution of the system, the state information of each job (including a copy of the memory image) is copied from the primary machine to the backup. If a failure is detected, the backup copy is activated, and is restarted from the most recent checkpoint.
  • This solution is obviously an expensive one, since there is considerable hardware duplication. The most common multiple-processor systems now use the symmetric multiprocessing model, in which each processor runs an identical copy of the operating system, and the copies communicate with one another as needed. Other systems use asymmetric multiprocessing, in which each processor is assigned a specific task. A master processor controls the system; the other processors either look to the master for instruction or have predefined tasks. This scheme defines a master-slave relationship: the master processor schedules and allocates work to the slave processors.
  • An example of a symmetric multiprocessing system is Encore's version of UNIX for the Multimax computer. This computer can be configured to employ dozens of processors, all running a copy of UNIX. The benefit of this model is that many processes can run at once (N processes if there are N CPUs) without causing a deterioration of performance. However, we must carefully control I/O to ensure that data reach the appropriate processor. Also, since the CPUs are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies. To avoid these inefficiencies, the processors can share certain data structures. A multiprocessor system of this form allows jobs and resources to be shared dynamically among the various processors, and can lower the variance among the systems. However, such a system must be written carefully.
  • Asymmetric multiprocessing is more common in extremely large systems, where one of the most time-consuming activities is simply processing I/O. In older batch systems, small processors, located at some distance from the main CPU, were used to run card readers and line printers and to transfer these jobs to and from the main computer.  These locations are called remote-job-entry (RJE) sites.  In a time-sharing system, a main I/O activity is processing the I/O of characters between the terminals and the computer. If the main CPU must be interrupted for every character for every terminal, it may spend all its time simply processing characters.  So that this situation is avoided, most systems have a separate front-end processor that handles the entire terminal I/O.
  • For example, a large IBM system might use an IBM Series/I minicomputer as a front-end. The front-end acts as a buffer between the terminals and the main CPU, allowing the main CPU to handle lines and blocks of characters, instead of individual characters. Such systems suffer from decreased reliability through increased specialization. It is important to recognize that the difference between symmetric and asymmetric multiprocessing may be the result of either hardware or software.
  • Special hardware may exist to differentiate the multiple processors, or the software may be written to allow only one master and multiple slaves. For instance, Sun's operating system SunOS Version 4 provides asymmetric multiprocessing, whereas Version 5 (Solaris 2) is symmetric.
  • As microprocessors become less expensive and more powerful, additional operating system functions are off-loaded to slave-processors, or back-ends.
  • For example, it is fairly easy to add a microprocessor with its own memory to manage a disk system. The microprocessor could receive a sequence of requests from the main CPU and implement its own disk queue and scheduling algorithm. This arrangement relieves the main CPU of the overhead of disk scheduling. PCs contain a microprocessor in the keyboard to convert the keystrokes into codes to be sent to the CPU. In fact, this use of microprocessors has become so common that it is no longer considered multiprocessing.
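The small calculation promised above illustrates why the speed-up ratio with n processors is less than n. The model and the 10 percent overhead figure are invented for illustration; real overheads depend on the workload and the hardware.

    #include <stdio.h>

    int main(void) {
        /* Crude model (assumed): every extra processor adds a fixed
           coordination overhead, so speed-up grows sub-linearly.   */
        double overhead = 0.10;

        for (int n = 1; n <= 8; n *= 2) {
            double speedup = n / (1.0 + overhead * (n - 1));
            printf("%d processor(s) -> speed-up about %.2f, not %d\n",
                   n, speedup, n);
        }
        return 0;
    }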

 

Distributed Systems

  • A recent trend in computer systems is to distribute computation among several processors. In contrast to the tightly coupled systems, the processors do not share memory or a clock. Instead, each processor has its own memory and clock. The processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.
  • These systems are usually referred to as loosely coupled systems, or distributed systems. The processors in a distributed system may vary in size and function. They may include small microprocessors, workstations, minicomputers, and large general-purpose computer systems. These processors are referred to by a number of different names, such as sites, nodes, computers, and so on, depending on the context in which they are mentioned.

 

There are a variety of reasons for building distributed systems, the major ones being:


  • Resource sharing. If a number of different sites (with different capabilities) are connected to one another, then a user at one site may be able to use the resources available at another.  For example, a user at site A may be using a laser printer available only at site B. Meanwhile, a user at B may access a file that resides at A. In general, resource sharing in a distributed system provides mechanisms for sharing files at remote sites, processing information in a distributed database, printing files at remote sites, using remote specialized hardware devices (such as a high-speed array processor), and performing other operations.
  • Computation speedup. If a particular computation can be partitioned into a number of subcomputations that can run concurrently, then a distributed system may allow us to distribute the computation among the various sites, to run that computation concurrently. In addition, if a particular site is currently overloaded with jobs, some of them may be moved to other, lightly loaded, sites. This movement of jobs is called load sharing.
  • Reliability. If one site fails in a distributed system, the remaining sites can potentially continue operating. If the system is composed of a number of large autonomous installations (that is, general-purpose computers), the failure of one of them should not affect the rest.  If, on the other hand, the system is composed of a number of small machines, each of which is responsible for some crucial system function (such as terminal character I/O or the file system), then a single failure may effectively halt the operation of the whole system. In general, if sufficient redundancy exists in the system (in both hardware and data), the system can continue with its operation, even if some of its sites have failed.
  • Communication. There are many instances in which programs need to exchange data with one another on one system. Window systems are one example, since they frequently share data or transfer data between displays. When many sites are connected to one another by a communication network, the processes at different sites have the opportunity to exchange information. Users may initiate file transfers or communicate with one another via electronic mail. A user can send mail to another user at the same site or at a different site (a minimal sketch of such an exchange follows this list).
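As a concrete example of the communication point above, the sketch below shows a process at one site sending a line of text to a process at another site over a TCP connection. The address and port are placeholders, and a matching process must be listening at the other end.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);  /* ask the OS for an endpoint */
        if (fd < 0)
            return 1;

        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port   = htons(5000);                  /* placeholder port   */
        inet_pton(AF_INET, "10.0.0.2", &peer.sin_addr); /* placeholder site B */

        if (connect(fd, (struct sockaddr *)&peer, sizeof peer) == 0) {
            const char *msg = "hello from site A\n";
            write(fd, msg, strlen(msg));           /* exchange data with peer */
            printf("message sent\n");
        }
        close(fd);
        return 0;
    }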

 

Real Time Systems

  • Another form of a special-purpose operating system is the real-time system. A real-time system is used when there are rigid time requirements on the operation of a processor or the flow of data, and thus is often used as a control device in a dedicated application.
  • Sensors bring data to the computer. The computer must analyze the data and possibly adjust controls to modify the sensor inputs. Systems that control scientific experiments, medical imaging systems, industrial control systems, and some display systems are real-time systems.  Also included are some automobile-engine fuel-injection systems, home-appliance controllers, and weapon systems.
  • A real-time operating system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail. For instance, it would not do for a robot arm to be instructed to halt after it had smashed into the car it was building. A real-time system is considered to function correctly only if it returns the correct result within its time constraints. Contrast this requirement to a time-sharing system, where it is desirable (but not mandatory) to respond quickly, or to a batch system, where there may be no time constraints at all.
  • There are two flavors of real-time systems. A hard real-time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time that it takes the operating system to finish any request made of it. Such time constraints dictate the facilities that are available in hard real-time systems. Secondary storage of any sort is usually limited or missing, with data instead being stored in short-term memory or in read-only memory (ROM). ROM is nonvolatile, retaining its contents even in the case of a power outage, whereas most other types of memory are volatile.
  • Most advanced operating-system features are absent too, since they tend to separate the user further from the hardware, and that separation results in uncertainty about the amount of time an operation will take. For instance, virtual memory is almost never found on real-time systems. Therefore, hard real-time systems conflict with the operation of time-sharing systems, and the two cannot be mixed. Since none of the existing general-purpose operating systems support hard real-time functionality, we do not concern ourselves with this type of system in this text.

 

  • A less restrictive type of real-time system is a soft real-time system, where a critical real-time task gets priority over other tasks and retains that priority until it completes. The sketch below shows how a process might request such treatment on a POSIX system.
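On systems that follow the POSIX real-time extensions, a process can ask for a fixed scheduling priority, as sketched below. The priority value is arbitrary, the call usually requires administrative rights, and the result is a soft (not hard) guarantee.

    #include <stdio.h>
    #include <sched.h>

    int main(void) {
        struct sched_param sp;
        sp.sched_priority = 50;   /* arbitrary value within the allowed range */

        /* Ask the OS to run this process ahead of ordinary time-shared
           tasks until it blocks or exits; typically needs privileges.  */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ... time-critical work would go here ... */
        return 0;
    }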

 

 

 

