Device Management Functions

There is a wide variety of input/output devices that can be attached to a computer system, and they differ greatly in many respects. They may be character devices (transferring a single character at a time) or block devices (transferring a chunk of characters at a time), and their speeds of operation vary considerably. General types of I/O devices include secondary storage devices (disks, tapes, CD-ROMs, etc.), human-interface devices (monitor, keyboard, mouse, etc.), transmission devices (network cards, modems), and multimedia devices (sound cards, MIDI devices). Each of these devices requires a different mechanism for its control by the operating system. Therefore, the control of these devices poses a major concern for operating systems.

Let us see what the elements of an I/O device are, how they interact with the computer, and what the related responsibilities of the operating system are. A device communicates with a computer system by sending signals over a transmission medium (a cable, for example). It is connected to the computer's hardware via a hardware interface called a port. The port is equipped with supporting electronics for hardware interfacing and low-level control; this hardware unit is called the device controller. Thus, each device has a corresponding device controller.

I/O Devices and Controllers

A general-purpose computer system consists of a CPU and a number of device controllers that are connected through a common bus. Each device controller is in charge of a specific type of device. Depending on the controller, there may be more than one attached device. For instance, the Small Computer Systems Interface (SCSI) controller, found on many small to medium-sized computers, can have seven or more devices attached to it. A device controller maintains some local buffer storage and a set of special-purpose registers. The device controller is responsible for moving the data between the peripheral devices that it controls and its local buffer storage. The size of the local buffer within a device controller varies from one controller to another, depending on the particular device being controlled. For example, the size of the buffer of a disk controller is the same as or a multiple of the size of the smallest addressable portion of a disk, called a sector, which is usually 512 bytes.

Interrupt Handlers

To start an I/O operation, the CPU loads the appropriate registers within the device controller. The device controller, in turn, examines the contents of these registers to determine what action to take. For example, if it finds a read request, the controller will start the transfer of data from the device to its local buffer. Once the transfer of data is complete, the device controller informs the CPU that it has finished its operation. It accomplishes this communication by triggering an interrupt.
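As a concrete illustration, here is a minimal C sketch of how a driver might start a read by loading a controller's registers. The register layout, the addresses, and the command encoding are hypothetical; every real controller defines its own.

/* A minimal sketch of starting an I/O operation by loading a
 * controller's registers. The register layout and command encoding
 * here are invented for illustration. */
#include <stdint.h>

struct disk_controller_regs {
    volatile uint32_t command;      /* operation to perform            */
    volatile uint32_t status;       /* controller sets BUSY/DONE bits  */
    volatile uint32_t block_number; /* which block to transfer         */
    volatile uint32_t buffer_addr;  /* where the data should go        */
};

#define CMD_READ 1u                 /* assumed command encoding */

static void start_read(struct disk_controller_regs *regs,
                       uint32_t block, uint32_t buffer)
{
    /* Load the parameter registers first ... */
    regs->block_number = block;
    regs->buffer_addr  = buffer;
    /* ... then writing the command register triggers the operation.
     * The controller examines these registers, begins moving data
     * into its local buffer, and raises an interrupt when done. */
    regs->command = CMD_READ;
}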

This situation will occur, in general, as the result of a user process requesting I/O. Once the I/O is started, two courses of action are possible. In the simplest case, the I/O is started; then, at I/O completion, control is returned to the user process. This case is known as synchronous I/O. The other possibility, called asynchronous I/O, returns control to the user program without waiting for the I/O to complete. The I/O then can continue while other system operations occur.

Waiting for I/O completion may be accomplished in one of two ways. Some computers have a special wait instruction that idles the CPU until the next interrupt. Machines that do not have such an instruction may have a wait loop:

Loop: jmp Loop

This tight loop simply continues until an interrupt occurs, transferring control to another part of the operating system. Such a loop might also need to poll any I/O devices that do not support the interrupt structure; instead, they simply set a flag in one of their registers and expect the operating system to notice that flag.
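For such a device, the polling idea might look like the following C sketch; the status register and the DONE bit are assumptions, not taken from any particular device.

/* A sketch of polling a device that cannot interrupt: the CPU spins,
 * re-reading a status register until the device sets its DONE flag.
 * The register and the flag are hypothetical. */
#include <stdint.h>

#define STATUS_DONE 0x1u            /* assumed "operation complete" bit */

static void wait_for_device(volatile uint32_t *status_reg)
{
    /* The same idea as "Loop: jmp Loop", but checking a flag:
     * keep looping until the device reports completion. */
    while ((*status_reg & STATUS_DONE) == 0)
        ;                           /* busy-wait */
}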

If the CPU always waits for I/O completion, at most one I/O request is outstanding at a time. Thus, whenever an I/O interrupt occurs, the operating system knows exactly which device is interrupting. On the other hand, this approach excludes concurrent I/O operations to several devices, and also excludes the possibility of overlapping useful computation with I/O.

A better alternative is to start the I/O and then to continue processing other operating system or user program code. A system call (a request to the operating system) is then needed to allow the user program to wait for I/O completion, if desired. In case no user programs are ready to run, and the operating system has no other work to do, we still require the wait instruction or idle loop, as before. We also need to be able to keep track of many I/O requests at the same time. For this purpose, the operating system uses a table containing an entry for each I/O device: the device-status table. Each table entry indicates the device's type, address, and state (not functioning, idle, or busy). If the device is busy with a request, the type of request and other parameters will be stored in the table entry for that device. Since it is possible for other processes to issue requests to the same device, the operating system will also maintain a wait queue—a list of waiting requests—for each I/O device.
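A device-status table along these lines might be declared as in the following C sketch; the field names and the linked-list representation of the wait queue are illustrative choices, not a fixed format.

/* A sketch of the device-status table described above. */
enum device_state { NOT_FUNCTIONING, IDLE, BUSY };

struct io_request {
    int   is_write;                 /* input or output                 */
    long  disk_address;             /* where on the device             */
    void *memory_address;           /* where in memory                 */
    long  byte_count;               /* how much to transfer            */
    struct io_request *next;        /* link in the device's wait queue */
};

struct device_entry {
    int   type;                     /* e.g. disk, terminal, printer    */
    long  address;                  /* hardware address of the device  */
    enum device_state state;        /* not functioning, idle, or busy  */
    struct io_request *current;     /* request now being serviced      */
    struct io_request *wait_queue;  /* list of pending requests        */
};

/* One entry per I/O device attached to the system. */
#define NUM_DEVICES 16
static struct device_entry device_status_table[NUM_DEVICES];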

An I/O device interrupts when it needs service. When an interrupt occurs, the operating system first determines which I/O device caused the interrupt. It then indexes into the I/O device table to determine the status of that device, and modifies the table entry to reflect the occurrence of the interrupt. For most devices, an interrupt signals completion of an I/O request. If there are additional requests waiting in the queue for this device, the operating system starts processing the next request.
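Continuing the table sketch above, the dispatch logic just described might look like this; finish_request and start_device stand in for hypothetical routines that wake the waiting process and program the controller.

/* A sketch of interrupt dispatch: look up the interrupting device,
 * mark its request complete, and start the next queued request.
 * Uses the device_status_table sketched above. */
void io_interrupt_handler(int device_number)
{
    struct device_entry *dev = &device_status_table[device_number];

    /* For most devices, the interrupt signals completion. */
    finish_request(dev->current);        /* wake any waiting process */
    dev->current = NULL;
    dev->state   = IDLE;

    /* If more requests are queued for this device, start the next. */
    if (dev->wait_queue != NULL) {
        dev->current    = dev->wait_queue;
        dev->wait_queue = dev->wait_queue->next;
        dev->state      = BUSY;
        start_device(device_number, dev->current);
    }
}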

Finally, control is returned from the I/O interrupt. If a process was waiting for this request to complete (as recorded in the device-status table), we can now return control to it. Otherwise, we return to whatever we were doing before the I/O interrupt: to the execution of the user program (the program started an I/O operation and that operation has now finished, but the program has not yet waited for the operation to complete) or to the wait loop (the program started two or more I/O operations and is waiting for a particular one to finish, but the interrupt was from one of the others). In a time-sharing system, the operating system could switch to another ready-to-run process.

The schemes used by some input devices may vary from this one. Many interactive systems allow users to type ahead, or to enter data before the data are requested, on their terminals. In this case, interrupts may occur, signaling the arrival of characters from the terminal, while the device-status block indicates that no program has requested input from this device. If type-ahead is to be allowed, then a buffer must be provided to store the type-ahead characters until some program wants them. In general, we may need a buffer for each input terminal.
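Such a type-ahead buffer is naturally a ring buffer, one per terminal. The following C sketch shows the idea; the size and the function names are illustrative.

/* A minimal per-terminal type-ahead buffer: the keyboard interrupt
 * handler fills it, and a later read request drains it. */
#define TYPEAHEAD_SIZE 256

struct typeahead_buffer {
    char buf[TYPEAHEAD_SIZE];
    int  head;                      /* next slot to fill      */
    int  tail;                      /* next char to hand out  */
};

/* Called from the terminal's interrupt handler for each character
 * that arrives before any program has asked for it. */
void typeahead_put(struct typeahead_buffer *t, char c)
{
    int next = (t->head + 1) % TYPEAHEAD_SIZE;
    if (next != t->tail) {          /* drop the character if full */
        t->buf[t->head] = c;
        t->head = next;
    }
}

/* Called when a program finally requests input; returns -1 if the
 * buffer is empty. */
int typeahead_get(struct typeahead_buffer *t)
{
    if (t->tail == t->head)
        return -1;
    char c = t->buf[t->tail];
    t->tail = (t->tail + 1) % TYPEAHEAD_SIZE;
    return (unsigned char)c;
}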

The main advantage of asynchronous I/O is increased system efficiency. While I/O is taking place, the CPU can be used for processing or for starting I/O to other devices. Because I/O can be slow compared to processor speed, the system makes much better use of its facilities.

 

Device-Independent I/O Software

Application programs can interact directly with devices through system calls provided by the kernel of the operating system. In this case, however, the responsibility for controlling communication with the device lies completely with the programmer, as does the responsibility for error handling.

The availability of a wide range of I/O devices makes it difficult, if not impossible, for the programmer to learn the specific commands of each device and interact with it directly. Alternatively, the operating system may implement device encapsulation, which is an abstraction of a device with no device-specific commands attached to it.

This concept allows application programs to communicate with any device in a similar manner, irrespective of the make and nature of the device. The application programmer simply inputs or outputs data to or from a logical device. This logical device presents a virtual representation of any device attached to the operating system. The user is oblivious to the internal implementation and low-level communication details.

The goals of device-independent I/O software are:

  • Uniform interfacing with device drivers: Each device should be communicated with through a similar type of interface. In Figure 1(a), the device drivers present different interfaces; in Figure 1(b), all the device drivers present the same type of interface. Device-independent I/O has the device drivers implemented as in the latter figure; a sketch of such an interface appears after this list.


Figure 1 (a) Without a standard driver interface (b) With a standard driver interface

  • Buffering: The system should be flexible about where device data are buffered: in the kernel area, in the user area, or both, and with one or more buffers, as shown in Figure 2 (a), (b), (c), (d).


 


Figure 2 Buffering alternatives

 

  • Error reporting: Error reporting and error handling should be uniform for all devices.
  • Providing a device-independent block size: the data block size should not depend on individual devices.
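To make the idea of a uniform driver interface concrete, here is a C sketch in which every driver exposes the same table of operations, so device-independent code can call any device the same way. The structure and names are illustrative, although UNIX-like kernels use a very similar scheme.

/* A sketch of a uniform driver interface (cf. Figure 1(b)): the
 * kernel sees every driver as the same table of operations. */
struct device_driver {
    const char *name;
    int  (*open) (int minor);
    int  (*close)(int minor);
    long (*read) (int minor, char *buf, long count);
    long (*write)(int minor, const char *buf, long count);
};

/* Stub routines standing in for a real disk driver's entry points. */
static int  disk_open (int minor)                  { (void)minor; return 0; }
static int  disk_close(int minor)                  { (void)minor; return 0; }
static long disk_read (int minor, char *b, long n) { (void)minor; (void)b; return n; }
static long disk_write(int minor, const char *b, long n)
                                                   { (void)minor; (void)b; return n; }

/* Every driver fills in the same table ... */
static const struct device_driver disk_driver =
    { "disk", disk_open, disk_close, disk_read, disk_write };

/* ... so device-independent code can call any device identically. */
static long device_read(const struct device_driver *drv,
                        int minor, char *buf, long count)
{
    return drv->read(minor, buf, count);
}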

UNIX implements device independence in I/O using a similar concept. It treats every I/O device the same way: as a byte stream. Thus, the screen is treated the same way as a file, its name being stdout, and the keyboard is a file named stdin. Every I/O device is treated as a file.
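For example, the following C program copies its input to its output with the same read() and write() system calls whether the descriptors refer to a keyboard, a screen, a pipe, or a disk file.

/* Because UNIX treats devices as byte streams, the same read() and
 * write() calls work on any device. This copies bytes from stdin
 * (file descriptor 0) to stdout (file descriptor 1). */
#include <unistd.h>

int main(void)
{
    char buf[512];
    ssize_t n;

    /* fd 0 might be a keyboard, a file, or a pipe; the code is
     * identical in every case. */
    while ((n = read(0, buf, sizeof buf)) > 0)
        write(1, buf, (size_t)n);

    return 0;
}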

User-Space I/O Software

Input/output functions can be implemented in various ways by operating systems. Some of the more popular implementations include:

1.     Interrupt-Driven I/O

2.     Programmed I/O

3.     Direct Memory Access (DMA)

4.     Memory Mapped I/O

5.     Kernel Space I/O

6.     User Space I/O

Input/output software that is implemented in user memory space is known as user-space I/O. User-space I/O involves many design considerations: the application software has to provide for buffering, I/O timing, interrupt handling, and other I/O-related issues.

Disk Scheduling

Disks provide the bulk of secondary storage for modern computer systems. Magnetic tape was used as an early secondary-storage medium, but the access time is much slower than for disks.  Thus, tapes are currently used mainly for backup, for storage of infrequently used information, as a medium for transferring information from one system to another, and for storing quantities of data that are impractically large for disk systems.  Modern disk drives are addressed as large one-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer. The size of a logical block is usually 512 bytes, although some disks can be low-level formatted to choose a different logical block size, such as 1024 bytes.

The one-dimensional array of logical blocks is mapped onto the sectors of the disk sequentially. Sector 0 is the first sector of the first track on the outermost cylinder. The mapping then proceeds in order through that track, then through the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost.

By using this mapping, it should be possible to convert a logical block number into an old-style disk address that consists of a cylinder number, a track number within that cylinder, and a sector number within that track. In practice, it is difficult to perform this translation, for two reasons. First, most disks have some defective sectors, but the mapping hides this by substituting spare sectors from elsewhere on the disk. Second, the number of sectors per track is not a constant. The farther a track is from the center of the disk, the greater its length, so the more sectors it can hold. Thus, modern disks are organized into zones of cylinders. The number of sectors per track is constant within a zone. But as we move from inner zones to outer zones, the number of sectors per track increases. Tracks in the outermost zone typically hold 40 percent more sectors than do tracks in the innermost zone.
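Under the idealized assumptions of a constant number of sectors per track and no spare-sector remapping (exactly the assumptions the text explains that real disks violate), the translation is simple arithmetic, as in this C sketch with an invented geometry.

/* Idealized logical-block-to-CHS conversion; the geometry numbers
 * are illustrative, and real zoned disks do not work this simply. */
#include <stdio.h>

#define SECTORS_PER_TRACK   63      /* assumed constant per track */
#define TRACKS_PER_CYLINDER 16      /* i.e., number of surfaces   */

void block_to_chs(long block, long *cyl, long *track, long *sector)
{
    long per_cylinder = (long)SECTORS_PER_TRACK * TRACKS_PER_CYLINDER;

    *cyl    = block / per_cylinder;             /* outermost = 0 */
    *track  = (block % per_cylinder) / SECTORS_PER_TRACK;
    *sector = block % SECTORS_PER_TRACK;
}

int main(void)
{
    long c, t, s;
    block_to_chs(123456, &c, &t, &s);
    printf("block 123456 -> cylinder %ld, track %ld, sector %ld\n",
           c, t, s);
    return 0;
}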

The number of sectors per track has been increasing as disk technology improves, and it is common to have more than 100 sectors per track in the outer zone of a disk. Similarly, the number of cylinders per disk has been increasing; several thousand cylinders is not unusual. One of the responsibilities of the operating system is to use the hardware efficiently. For disk drives, this means having a fast access time and high disk bandwidth.

The access time has two major components. The seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector. The rotational latency is the additional time waiting for the disk to rotate the desired sector to the disk head. The disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer. We can improve both the access time and the bandwidth by scheduling the servicing of disk I/O requests in a good order.
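As a worked example of these definitions, the following C snippet uses made-up but plausible figures: an 8 ms average seek, the roughly 4.2 ms half-rotation of a 7200 RPM disk, and a 4 MB transfer completed in 50 ms.

/* Worked example: access time = seek time + rotational latency;
 * bandwidth = bytes transferred / total elapsed time.
 * All figures below are assumed, for illustration only. */
#include <stdio.h>

int main(void)
{
    double seek_ms    = 8.0;        /* assumed average seek time      */
    double latency_ms = 4.2;        /* half a rotation at 7200 RPM    */
    double access_ms  = seek_ms + latency_ms;

    double bytes   = 4.0 * 1024 * 1024;  /* 4 MB transferred          */
    double total_s = 0.05;               /* first request to last
                                            completion: 50 ms         */
    double bandwidth = bytes / total_s;  /* bytes per second          */

    printf("access time = %.1f ms\n", access_ms);
    printf("bandwidth   = %.1f MB/s\n", bandwidth / (1024 * 1024));
    return 0;
}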

As we discussed earlier, whenever a process needs I/O to or from the disk, it issues a system call to the operating system. The request specifies several pieces of information:

  • Whether this operation is input or output
  • What the disk address for the transfer is
  • What the memory address for the transfer is
  • What the number of bytes to be transferred is

If the desired disk drive and controller are available, the request can be serviced immediately. If the drive or controller is busy, any new requests for disk service will need to be placed on the queue of pending requests for that drive. In a multiprogramming system with many processes, the disk queue may have several pending requests. Thus, when one request is completed, the operating system has an opportunity to choose which pending request to service next.

FCFS (First-Come, First-Served) Scheduling

The requested cylinders are sought in the order in which the requests arrive.

It is the simplest form of disk scheduling. This algorithm is intrinsically fair, but it generally does not provide the fastest service. Consider, for example, a disk queue with requests for I/O to blocks on cylinders 98, 183, 37, 122, 14, 124, 65, and 67, in that order. If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to 183, 37, 122, 14, 124, 65, and finally to 67, for a total head movement of 640 cylinders.
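The arithmetic is easy to reproduce; this small C program services the example queue in arrival order and sums the head movement.

/* FCFS disk scheduling for the example queue: service requests in
 * arrival order, summing the cylinders traversed. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    int n = sizeof queue / sizeof queue[0];
    int head = 53, total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head);   /* cylinders traversed */
        head = queue[i];
    }
    printf("FCFS total head movement: %d cylinders\n", total); /* 640 */
    return 0;
}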

The problem with this schedule is illustrated by the wild swing from 122 to 14 and then back to 124. If the requests for cylinders 37 and 14 could be serviced together, before or after the requests at 122 and 124, the total head movement could be decreased substantially, and performance would thereby be improved.

SSTF Scheduling

It seems reasonable to service all the requests close to the current head position before moving the head far away to service other requests. This assumption is the basis for the shortest-seek-time-first (SSTF) algorithm. The SSTF algorithm selects the request with the minimum seek time from the current head position.

Since seek time increases with the number of cylinders traversed by the head, SSTF chooses the pending request closest to the current head position. For our example request queue, the closest request to the initial head position (53) is at cylinder 65. Once we are at cylinder 65, the next closest request is at cylinder 67. From there, the request at cylinder 37 is closer than the one at 98, so 37 is served next. Continuing, we service the requests at cylinders 14, 98, 122, and 124, and finally the one at 183.

This algorithm gives a substantial improvement in performance: the total head movement is only 236 cylinders. SSTF scheduling is essentially a form of shortest-job-first (SJF) scheduling and, like SJF scheduling, it may cause starvation of some requests.
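The following C sketch simulates SSTF on the same queue by repeatedly picking the pending request nearest the head; it visits 65, 67, 37, 14, 98, 122, 124, 183 and moves 236 cylinders in all.

/* SSTF disk scheduling for the example queue: at each step, service
 * the pending request closest to the current head position. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    int n = sizeof queue / sizeof queue[0];
    int done[8] = { 0 };
    int head = 53, total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)      /* find nearest pending request */
            if (!done[i] &&
                (best < 0 || abs(queue[i] - head) < abs(queue[best] - head)))
                best = i;
        total += abs(queue[best] - head);
        head = queue[best];
        done[best] = 1;
    }
    printf("SSTF total head movement: %d cylinders\n", total); /* 236 */
    return 0;
}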

SCAN Scheduling

In the SCAN algorithm, the disk arm starts at one end of the disk, and moves toward the other end, servicing requests as it reaches each cylinder, until it gets to the other end of the disk. At the other end, the direction of head movement is reversed, and servicing continues.

The head continuously scans back and forth across the disk. We again use our example. Before applying SCAN to schedule the requests on cylinders 98, 183, 37, 122, 14, 124, 65, and 67, we need to know the direction of head movement, in addition to the head's current position (53). If the disk arm is moving toward 0, the head will service 37 and then 14. At cylinder 0, the arm will reverse and will move toward the other end of the disk, servicing the requests at 65, 67, 98, 122, 124, and 183.

If a request arrives in the queue just in front of the head, it will be serviced almost immediately; a request arriving just behind the head will have to wait until the arm moves to the end of the disk, reverses direction, and comes back.

The SCAN algorithm is sometimes called the elevator algorithm, since the disk arm behaves just like an elevator in a building, first servicing all the requests going up, and then reversing to service requests the other way.
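A simple way to reproduce the SCAN service order for this example is to sort the queue and sweep it twice, as in the following C sketch (it prints the order only, not the head movement).

/* SCAN for the example queue, head at 53 moving toward cylinder 0:
 * service the requests below the head in descending order, reverse
 * at the end of the disk, then service the rest in ascending order. */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    int n = sizeof queue / sizeof queue[0];
    int head = 53;

    qsort(queue, n, sizeof queue[0], cmp);

    printf("SCAN order:");
    for (int i = n - 1; i >= 0; i--)     /* sweep down toward 0 ... */
        if (queue[i] < head)
            printf(" %d", queue[i]);
    for (int i = 0; i < n; i++)          /* ... then sweep back up  */
        if (queue[i] >= head)
            printf(" %d", queue[i]);
    printf("\n");    /* prints: 37 14 65 67 98 122 124 183 */
    return 0;
}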

 

C-SCAN Scheduling

Circular SCAN (C-SCAN) is a variant of SCAN that is designed to provide a more uniform wait time. Like SCAN, C-SCAN moves the head from one end of the disk to the other, servicing requests along the way. When the head reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip. The C-SCAN scheduling algorithm essentially treats the cylinders as a circular list that wraps around.

LOOK Scheduling

Notice that, as we described them, both SCAN and C-SCAN move the disk arm across the full width of the disk. In practice, neither algorithm is implemented this way. More commonly, the arm goes only as far as the final request in each direction. Then, it reverses direction immediately, without first going all the way to the end of the disk.

Selection of a Disk-Scheduling Algorithm

Given so many disk-scheduling algorithms, how do we choose the best?  SSTF is common and has a natural appeal. SCAN and C-SCAN perform better in systems that place a heavy load on the disk, because they are less likely to have the starvation problem described earlier. For any particular list of requests, it is possible to define an optimal order of retrieval, but the computation needed to find an optimal schedule may not justify the savings over SSTF or SCAN.

With any scheduling algorithm, however, performance depends heavily on the number and types of requests. For instance, suppose that the queue usually has just one outstanding request. Then, all scheduling algorithms are forced to behave the same, because they have only one choice for where to move the disk head. They all behave like FCFS scheduling. Notice also that the requests for disk service can be greatly influenced by the file-allocation method. A program reading a contiguously allocated file will generate several requests that are close together on the disk, resulting in limited head movement. A linked or indexed file, on the other hand, may include blocks that are widely scattered on the disk, resulting in greater head movement.

The location of directories and index blocks is also important. Since every file must be opened to be used, and opening a file requires searching the directory structure, the directories will be accessed frequently. Suppose a directory entry is on the first cylinder and a file's data are on the final cylinder. In this case, the disk head has to move the entire width of the disk. If the directory entry were on the middle cylinder, the head would have to move at most one-half the width. Caching the directories and index blocks in main memory can also help to reduce disk-arm movement, particularly for read requests. Because of these complexities, the disk-scheduling algorithm should be written as a separate module of the operating system, so that it can be replaced with a different algorithm if necessary. Either SSTF or LOOK is a reasonable choice for the default algorithm.

Note that the scheduling algorithms described here consider only the seek distances. For modern disks, the rotational latency can be nearly as large as the average seek time. But it is difficult for the operating system to schedule for improved rotational latency because modern disks do not disclose the physical location of logical blocks. Disk manufacturers have been helping with this problem by implementing disk-scheduling algorithms in the controller hardware built into the disk drive. If the operating system sends a batch of requests to the controller, the controller can queue them and then schedule them to improve both the seek time and the rotational latency. If I/O performance were the only consideration, the operating system would gladly turn over the responsibility of disk scheduling to the disk hardware. In practice, however, the operating system may have other constraints on the service order for requests.

For instance, demand paging may take priority over application I/O, and writes are more urgent than reads if the cache is running out of free pages. Also, it may be desirable to guarantee the order of a set of disk writes to make the file system robust in the face of system crashes; consider what could happen if the operating system allocated a disk page to a file, and the application wrote data into that page before the operating system had a chance to flush the modified inode and free-space list back to disk.  To accommodate such requirements, an operating system may choose to do its own disk scheduling, and to "spoon-feed" the requests to the disk controller, one by one.

 

 

