Tuesday, December 21, 2010

I/O devices and File systems

I/O Device Characteristics:

Terminal.

  • One character (8 bits of data or control function) is sent at a time, one interrupt per character.
  • 10-1800 characters per second.
  • Keyboard and display are independent in most systems (no automatic echo, full duplex).
  • Usually handled with one interrupt per character, but sometimes DMA nowadays.

Raster Printers.

  • Bit mapped (one bit for each dot).
  • Typically about 300 to 1200 dots per inch.
  • Impact, ink jet, or laser printing.

Tape.

  • 8mm wide by 750 feet long, or 1/2" wide by 2500 feet long.
  • Multi-track, helical, serpentine.
  • Variable length records.
  • Capacities go up to 70 GB/tape.
  • Densities of more than 30 KB/cm.
  • Inter-record gaps of about 10mm.
  • Transfer rates of 5-500 MB/second.
  • Can read or write sequentially, but cannot write in the middle of the tape. Can skip records.
  • Tapes are DMA devices, not one interrupt per character.

Disk.

  • Draw picture of spindle, platters, read/write-arm.
  • Typically about 1024 cylinders, 20 tracks per cylinder, 32 sectors per track, 512 bytes per sector.
  • With these numbers, that is roughly 300 MB per disk, or on the order of 150,000 double-spaced typewritten pages. Typically sectors are fixed length (512-4096 bytes are popular sizes).
  • Sectors can be read and written individually, or in adjacent groups.
  • Seek time = 5-100 ms, latency is 0-15 ms (drive spins at 3600-7200 RPM). This is the standard for medium term computer storage. Transfer rates are about 5-20 MB/second. These times depend on drive, controller, and interface standard (IDE, SCSI).
[Figure: disk geometry (spindle, platters, read/write arm)]

CD-ROMs.

  • Current CD-ROMs can hold up to 720 MB (650 MB is more typical) of data or 74 minutes of uncompressed audio. The data is organized in one (or a few) continuous spirals. The block size is 2K, and "tracks" vary in size from 8 to 23 blocks. There is extensive error correction information encoded with the data.
  • Seek time = 100-300 ms. Transfer rates are about 0.3-1.5 MB/second (for 2X-10X). 1X (150 KB/second) is needed for audio CDs.
  • The next generation CD (DVD) should store about 5-10 GB and be fully readable and writable. These drives will handle fully integrated data, audio, and video. There are currently several competing standards (à la the VHS vs. Beta competition of the '80s).
Disks and tapes read and write blocks of information rather than single bytes:
  • Storage efficiency: give an example for tape. At 1600 bpi, 80-byte records use .05 inch, gaps use .6 inch, so the tape is more than 90% gap. However, 8000-byte records use 5 inches, so gaps are only about 11% of the tape. In the case of disks, there are a couple of thousand bits of leader at the beginning of each sector used to identify the sector and to synchronize when reading. For 1000-byte sectors, 20% of the disk space is wasted.
  • Access efficiency: on disk it typically takes 25 ms of overhead before a transfer begins. The actual transfer is only about 1 microsecond per byte. Thus one-byte transfers take 25 ms per byte, while 1000-byte transfers take about 26 ms in total, or roughly 26 microseconds per byte (a worked sketch follows this list). This re-iterates the importance of eliminating seeks.
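To make the arithmetic above concrete, here is a small sketch in C, using the same assumed figures (25 ms of overhead per transfer and 1 microsecond per byte), that computes the effective transfer rate for a few block sizes.

    #include <stdio.h>

    /* Effective disk throughput for a given block size, assuming (as above)
     * a fixed 25 ms overhead per transfer and 1 microsecond per byte moved. */
    int main(void)
    {
        const double overhead_us = 25000.0;  /* seek + rotational latency, in microseconds */
        const double per_byte_us = 1.0;      /* raw transfer cost per byte */
        const int sizes[] = { 1, 1000, 100000 };

        for (int i = 0; i < 3; i++) {
            double total_us = overhead_us + sizes[i] * per_byte_us;
            double mb_per_s = sizes[i] / total_us;   /* bytes per microsecond == MB per second */
            printf("%7d-byte transfer: %9.0f us total, %7.4f MB/s effective\n",
                   sizes[i], total_us, mb_per_s);
        }
        return 0;
    }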
Two common I/O device access methods: DMA and CPU control

Direct Memory Access (DMA)

In simple systems, the CPU must move each data byte to or from the bus using a LOAD or STORE instruction, as if the data were being moved to memory. This quickly uses up much of the CPU's computational power. In order to allow systems to support high I/O utilization while the CPU is getting useful work done on the users' behalf, devices are allowed to directly access memory. This direct access of memory by devices (or controllers) is called Direct Memory Access, commonly abbreviated DMA.
The CPU is still responsible for scheduling the memory accesses made by DMA devices, but once the transfer has been programmed, the CPU has no further involvement until the transfer is complete. Typically DMA devices issue an interrupt on I/O completion.
Because this memory is not being manipulated by the CPU, and therefore addresses may not pass through an MMU, DMA devices often confuse or are confused by virtual memory. It is important to guarantee that memory intended for use by a DMA device is not manipulated by the paging system while the I/O is being performed. Such pages are usually frozen (or pinned) to avoid changes.
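As a rough sketch of what "programming" a DMA transfer looks like, the fragment below drives a purely hypothetical memory-mapped controller; the register layout and bit names are invented for illustration, not taken from any real device.

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers -- the names and
     * layout are illustrative only, not those of a real device. */
    struct dma_regs {
        volatile uint32_t addr;     /* physical address of the (pinned) buffer */
        volatile uint32_t count;    /* number of bytes to transfer             */
        volatile uint32_t control;  /* start bit, direction, interrupt enable  */
    };

    #define DMA_START      0x1
    #define DMA_READ       0x2   /* device -> memory */
    #define DMA_IRQ_ENABLE 0x4

    /* Program one transfer; the CPU is then free until the completion interrupt. */
    void dma_start_read(struct dma_regs *dev, uint32_t phys_addr, uint32_t nbytes)
    {
        /* The buffer must be pinned so the paging system cannot move or
         * evict it while the device is writing into it. */
        dev->addr    = phys_addr;
        dev->count   = nbytes;
        dev->control = DMA_START | DMA_READ | DMA_IRQ_ENABLE;
    }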
In some sense DMA is simply an intermediate step toward general-purpose programmability on devices and device controllers. Several such smart controllers exist, with features ranging from bit swapping to digital signal processing, checksum calculation, encryption, compression, and even general-purpose processors. Dealing with that programmability requires synchronization and care. Moreover, in order for code to be portable, writing an interface to such smart peripherals is often a delicate balancing act between making features available and making the device unrecognizable.

I/O Software

The I/O software of the OS has several goals:
  • Device Independence: All peripherals performing the same function should have the same interface. All disks should present logical blocks. All network adapters should accept packets. The protection of devices should be managed consistently. For example, devices should all be accessible by capability, or all by the file system. In practice this is tempered by the need to expose some features of the hardware.
  • Uniform Naming: The OS needs to have a way to describe the various devices in the system so that it can administer them. Again the naming system should be as flexible as possible. Systems also have to deal with devices joining or leaving the name space (PCMCIA cards).
  • Device Sharing: Most devices are shared at some granularity by processes on a general purpose computer. It's the I/O system's job to make sure that sharing is fair (for some fairness metric) and efficient.
  • Error Handling: Devices can often deal with errors without user input - retrying a disk read or something similar. Fatal errors need to be communicated to the user in an understandable manner as well. Furthermore, although hiding errors can be good at some level, at other levels they should be seen. Users must be able to tell that their disks are slowly failing.
  • Synchrony and Asynchrony: The I/O system needs to deal with the fact that external devices are not synchronized with the internal clock of the CPU. Events on disk drives occur without any regard for the state of the CPU, and the CPU must deal with that. The I/O system code is what turns the asynchronous interrupts into system events that can be handled by the CPU.

Software Levels

Interrupt Handlers

The Interrupt Service Routines (ISRs) are short routines designed to turn the asynchronous events from devices (and controllers) into synchronous ones that the operating system can deal with in time. While an ISR is executing, some set of interrupts is usually blocked, which is a dangerous state of affairs that should be avoided as much as possible.
ISRs generally encode the information about the interrupt into some queue that the OS checks regularly - e.g. on a context switch.
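A minimal sketch of this pattern, with invented names: the ISR only records the event in a small ring buffer, and the rest of the OS drains the queue later, outside interrupt context.

    #include <stdint.h>

    /* A tiny ring buffer that an ISR fills and the kernel drains later
     * (e.g. on a context switch). All names here are illustrative. */
    #define EVQ_SIZE 64

    struct io_event { uint8_t device; uint8_t status; };

    static struct io_event evq[EVQ_SIZE];
    static volatile unsigned evq_head, evq_tail;

    /* Called with this device's interrupts blocked: keep it short. */
    void isr_record_event(uint8_t device, uint8_t status)
    {
        unsigned next = (evq_head + 1) % EVQ_SIZE;
        if (next != evq_tail) {                 /* drop the event if the queue is full */
            evq[evq_head].device = device;
            evq[evq_head].status = status;
            evq_head = next;
        }
    }

    /* Called later, outside interrupt context, to turn events into OS actions. */
    int dequeue_event(struct io_event *out)
    {
        if (evq_tail == evq_head)
            return 0;                           /* nothing pending */
        *out = evq[evq_tail];
        evq_tail = (evq_tail + 1) % EVQ_SIZE;
        return 1;
    }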

Device Drivers

Device drivers are primarily responsible for issuing the low-level commands to the hardware that get the hardware to do what the OS wants. As a result, many of them are hardware dependent.
Conceptually, perhaps the most important facet of device drivers is the conversion from logical to physical addressing. The OS may be coded in terms of logical block numbers for a file, but it is the device driver that converts such logical addresses to real physical addresses and encodes them in a form that the hardware can understand.
Device drivers may also be responsible for programming smart controllers, multiplexing requests and de-multiplexing responses, and measuring and reporting device performance.
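For example, the logical-to-physical conversion for a simple disk can be plain arithmetic over the drive geometry. The sketch below assumes the "typical" geometry quoted earlier (32 sectors per track, 20 tracks per cylinder); a real driver would obtain these figures from the drive or controller.

    /* Convert a logical block number into the cylinder/head/sector address a
     * simple disk controller expects. The geometry figures are illustrative. */
    #define SECTORS_PER_TRACK   32
    #define TRACKS_PER_CYLINDER 20

    struct chs { unsigned cylinder, head, sector; };

    struct chs logical_to_physical(unsigned logical_block)
    {
        struct chs p;
        p.sector   = logical_block % SECTORS_PER_TRACK;
        p.head     = (logical_block / SECTORS_PER_TRACK) % TRACKS_PER_CYLINDER;
        p.cylinder = logical_block / (SECTORS_PER_TRACK * TRACKS_PER_CYLINDER);
        return p;
    }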

Device Independent OS Code

This is the part of the OS we've really been talking the most about. This part of the OS provides consistent device naming and interfaces to the users. It enforces protection, and does logical level caching and buffering.
In addition to providing a uniform interface, the uniform interface is sometimes pierced at this level to expose specific hardware features -- CD audio capabilities, for instance.
The device independent code also provides a consistent error mode to users, letting them know what general errors occurred when the device driver couldn't recover.

User Code

Even the OS code is relatively rough and ready. User libraries provide simpler interfaces to the I/O system. A good example is the standard I/O library, which provides a simplified interface to the file system: printf and fopen are easier to use than write and open. Specifically, such libraries handle data formatting and buffering.
Beyond that there are user level programs that specifically provide I/O services (daemons). Such programs spool data, or directly provide the services users require.

Files, Disk Management

File: a named collection of bits stored on disk. From the OS' standpoint, the file consists of a bunch of blocks stored on the device. Programmer may actually see a different interface (bytes or records), but this does not matter to the file system (just pack bytes into blocks, unpack them again on reading).
Common addressing patterns:
  • Sequential: information is processed in order, one piece after the other. This is by far the most common mode: e.g. editor writes out new file, compiler compiles it, etc.
  • Random Access: can address any record in the file directly without passing through its predecessors. E.g. the data set for demand paging, also databases.
  • Keyed: search for records with particular values, e.g. hash table, associative database, dictionary. Usually not provided by operating system. TLB is one example of a keyed search.
Modern file systems must address four general problems:
  • Disk Management: efficient use of disk space, fast access to files, sharing of space between several users.
  • Naming: how do users select files?
  • Protection: all users are not equal.
  • Reliability: information must last safely for long periods of time.
Disk Management: how should the disk sectors be used to represent the blocks of a file? The structure used to describe which sectors represent a file is called the file descriptor.
Contiguous allocation: allocate files like segmented memory (give each disk sector a number from 0 up). Keep a free list of unused areas of the disk. When creating a file, make the user specify its length, allocate all the space at once. Descriptor contains location and size.
[Figure 1: contiguous allocation]
  • Advantages: easy access, both sequential and random. Simple. Few seeks.
  • Drawbacks: horrible fragmentation will preclude large files, hard to predict needs. With interleaved user requests, still cannot eliminate all seeks.
Linked files: In the file descriptor, just keep a pointer to the first block. In each block of the file, keep a pointer to the next block. The free list itself can likewise be kept as a linked list of free blocks.
[Figure: linked file allocation]
  • Advantages: files can be extended, no fragmentation problems. Sequential access is easy: just chase links.
  • Drawbacks: random access is virtually impossible. Lots of seeking, even in sequential access.
  • Example: FAT (MSDOS) file system.
Array of block pointers: file maximum length must be declared when it is created. Allocate an array to hold pointers to all the blocks, but do not allocate the blocks. Then fill in the pointers dynamically using a free list.
[Figure: array of block pointers]
  • Advantages: not as much space wasted by overpredicting, both sequential and random accesses are easy.
  • Drawbacks: still have to set maximum file size, and there will be lots of seeks.
DOS FAT allocation table: A single File Allocation Table (FAT) that combines free list info and file allocation info. In file descriptor, keep pointer to first block. A FAT table entry contains either (1) the block number of the next block in the file, (2) a distinguished "end of file" (eof) value, or (3) a distinguished "free" value.
[Figure: DOS FAT allocation table]
  • Advantages/Disadvantages: similar to those mentioned above for linked file.
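A sketch of how a linked/FAT file is traversed, using a simplified in-memory table; the markers and table layout are illustrative, not the real on-disk encoding.

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified FAT: fat[i] holds the number of the block that follows block i
     * in some file, or a distinguished end-of-file value. This mirrors the
     * structure described above, not the real on-disk format. */
    #define FAT_EOF  0xFFFFu
    #define FAT_FREE 0x0000u
    #define NBLOCKS  1024

    uint16_t fat[NBLOCKS];   /* filled in from the on-disk table */

    /* Walk one file's chain, starting from the first block recorded in its
     * descriptor. Sequential access is easy; random access to block N still
     * requires following N links. Assumes the chain is well formed. */
    void walk_chain(uint16_t first_block)
    {
        for (uint16_t b = first_block; b != FAT_EOF; b = fat[b])
            printf("block %u\n", (unsigned)b);
    }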
None of these is a very good solution: what is the answer? First, and MOST IMPORTANT: understand the application. How are file systems used?
  • Most files are small.
  • Much of the disk is allocated to large files.
  • Many of the I/O's are made to large files.
Thus, the per-file cost must be low, but the performance of large files must be good. File systems must allow reasonably fast random access, extensibility.

Unix and DEMOS Disk Allocation

Storage Management: For a given file, how does the OS find the blocks contained in that file? The data structure that describes the contents of a file is generically called a file descriptor. We will see several other names as we study file systems.
The file descriptor information has to be stored on disk, so it will stay around even when the OS is not running.
  • In Unix, all the descriptors are stored in a fixed size array on disk. The descriptors also contain protection and accounting information.
  • A special area of disk is used for this (disk contains two parts: the fixed-size descriptor array, and the remainder, which is allocated for data and indirect blocks).
  • The size of the descriptor array is determined when the disk is initialized, and cannot be changed. In Unix, the descriptor is called an i-node, and its index in the array is called its i-number. Internally, the OS uses the i-number to refer to the file.
  • When a file is open, its descriptor is kept in main memory. When the file is closed, the descriptor is stored back to disk.
[Figure: on-disk layout with the fixed-size i-node array]
The Typical Unix Inode
  • File descriptors: 13 block pointers. The first 10 point to data blocks, the next three to indirect, doubly-indirect, and triply-indirect blocks (256 pointers in each indirect block). Maximum file length is fixed, but large. Descriptor space is not allocated until needed.
  • Examples: looking up block 5 (direct), block 23 (singly indirect), block 340 (doubly indirect); see the lookup sketch after this list.
  • Free blocks: stored on a free list in no particular order.
  • Go through examples of allocation and freeing.
  • Advantages: simple, easy to implement, incremental expansion, easy access to small files.
  • Drawbacks:
    • Indirect mechanism does not provide very efficient access to large files: 3 descriptor ops for each real operation. A cache is used, but this takes up main memory space.
    • Block-by-block organization of the free list means that file data gets spread around the disk.
[Figure: Unix i-node with direct and indirect block pointers]
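Here is the lookup sketch referred to above: mapping a logical block number within a file to a disk block, using the 10 direct pointers and, for larger offsets, the singly-indirect block. The helper read_disk_block is hypothetical, and the doubly- and triply-indirect cases are omitted for brevity.

    #include <stdint.h>

    #define NDIRECT        10
    #define PTRS_PER_BLOCK 256

    struct inode {
        uint32_t direct[NDIRECT];
        uint32_t single_indirect;   /* block containing 256 more pointers */
        uint32_t double_indirect;
        uint32_t triple_indirect;
    };

    /* Hypothetical helper that reads one disk block into a buffer. */
    extern void read_disk_block(uint32_t block_no, void *buf);

    uint32_t block_of(const struct inode *ip, uint32_t logical)
    {
        if (logical < NDIRECT)
            return ip->direct[logical];          /* no extra disk access */

        logical -= NDIRECT;
        if (logical < PTRS_PER_BLOCK) {          /* costs one extra disk read */
            uint32_t ptrs[PTRS_PER_BLOCK];
            read_disk_block(ip->single_indirect, ptrs);
            return ptrs[logical];
        }
        return 0;                                /* doubly/triply indirect omitted */
    }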

The Demos File System

Demos was an operating system written especially for high performance systems, originally the Cray 1. Its design continues to influence systems today.
The Demos solution allocates files contiguously, has more compact file descriptors, and uses more CPU time (refer to the contiguous allocation picture earlier).
[Figure: DEMOS file descriptor with block groups]
  • File descriptors: select sequences of physical blocks, called block groups, rather than single blocks. Block groups were called extents by IBM.
  • A block group has three fields:
    • Starting disk block: the starting address on disk of this block group,
    • Starting logical block: the starting block number within the file for the block group,
    • Count: the number of blocks in the group.
  • There are 10 block groups in the file descriptor; if files become large, then these become pointers to groups of indirect blocks. The resulting structure is like a B-tree.
  • Free blocks: described with a bit map, just an array of bits, one per block: 1 means the block is free, 0 means it is allocated. For a 300 MB drive there are about 300,000 1-KB blocks, so the bit map takes about 40,000 bytes. Keep only a small part of the bit map in memory at once. To allocate, scan the bit map for adjacent free blocks (a block-group and bitmap-scan sketch follows this list).
  • Advantages:
    • It is easy to allocate block groups, since the bit map automatically merges adjacent free blocks.
    • File descriptors take up less space on disk, require fewer accesses in random access to large files.
  • Disadvantages:
    • Slightly more complex than Unix scheme: trades CPU time for disk access time (OK for CRAY-1).
    • When disk becomes full, this becomes VERY expensive, and does not get much in the way of adjacency.
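The sketch below shows the block-group record with the three fields listed above, plus a simple scan of the free-block bit map for a run of adjacent free blocks; the function names are illustrative.

    #include <stdint.h>

    /* A DEMOS-style block group: exactly the three fields listed above. */
    struct block_group {
        uint32_t start_disk_block;     /* where the run begins on disk    */
        uint32_t start_logical_block;  /* where it begins within the file */
        uint32_t count;                /* number of contiguous blocks     */
    };

    /* Scan a free-block bit map (1 = free, 0 = allocated) for a run of `want`
     * adjacent free blocks. Returns the starting block number of the run, or
     * -1 if none exists. `nblocks` is the number of blocks the map covers. */
    long find_free_run(const uint8_t *bitmap, unsigned nblocks, unsigned want)
    {
        if (want == 0)
            return -1;
        unsigned run = 0;
        for (unsigned b = 0; b < nblocks; b++) {
            int is_free = (bitmap[b / 8] >> (b % 8)) & 1;
            run = is_free ? run + 1 : 0;
            if (run == want)
                return (long)(b - want + 1);
        }
        return -1;
    }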
Even if it is possible to allocate in groups, how do you know when to do it? Be guided by history: if file is already big, it will probably get bigger.

Crash Recovery

Computers can crash at any time, and we want the file system to behave sensibly in the face of crashes. The key idea is called consistency:
  • The file data and the various control structures (descriptors, bitmaps) must be in agreement.
  • Since crashes can occur at any time, not all updates to the disk may be completed.
  • We must insure that when the system reboots, it can return its file system to some sensible state.
  • The key constraint is that any file system write operation, in progress at the time of the crash, either completely finishes or appears as if it never happened. This is called atomicity by the database folks.
Insuring consistency requires two things:
  • Updates to the file system data structures must be done in the right order (and there is only one right order)!
  • The proper steps must be taken at reboot time to bring the system back in to a consistent state.
There are three basic updates that happen when data is written to a file.
  1. A block (or blocks) is allocated from the free list (bit map).
  2. Data is written to the newly allocated block.
  3. The inode is updated to include the new data.
These operations must be done in the above order. If they are not, then it is possible to have a data block included in a file that might have garbage (uninitialized data) in the block.
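A minimal sketch of the ordering, with hypothetical helper names; the comments note what a crash between steps would leave behind.

    /* Hypothetical helpers -- the point here is the order, not the code. */
    extern long allocate_block(void);                      /* step 1: take a block off the free list */
    extern void write_block(long blk, const void *data);   /* step 2: put the new data on disk       */
    extern void append_block_to_inode(int inum, long blk); /* step 3: make the inode point at it     */

    void append_data(int inum, const void *data)
    {
        long blk = allocate_block();       /* crash here: block leaked, file still consistent  */
        write_block(blk, data);            /* crash here: same -- the data is simply unused    */
        append_block_to_inode(inum, blk);  /* only now does the file refer to initialized data */
    }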
After rebooting, the recovery utility program on Unix, called "fsck", traverses the entire directory structure of the disk to insure that every block not in use by some file is in the free list.
Recovery after a crash follows these steps:
  1. Allocate a temporary bit map, initialized to indicate that all disk blocks are free.
  2. Start at the inode for the root directory.
  3. Traverse the directory:
    • For each disk data block in the directory file, mark that block as "allocated" in the bit map.
    • For each data file in this directory, mark its data blocks as "allocated" in the bit map.
    • For each directory in this directory, perform the "Traverse the directory" steps above.
At the completion of the algorithm, you can compare the actual bit map to the temporary one to find blocks that were allocated, but never made it into a file.
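A sketch of that traversal, assuming hypothetical helpers for reading a file's block list and a directory's entries; `reachable` plays the role of the temporary bit map, initially all zero.

    #include <stdint.h>

    /* Hypothetical helpers: blocks_of() lists a file's data blocks,
     * entries_of() lists the i-numbers named by a directory (assumed to skip
     * "." and ".."), and is_directory() tests an i-node's type. */
    #define MAX_BLOCKS  1024
    #define MAX_ENTRIES 256

    extern int blocks_of(int inum, uint32_t *blocks, int max);
    extern int entries_of(int inum, int *child_inums, int max);
    extern int is_directory(int inum);

    static void mark_blocks(int inum, uint8_t *reachable)
    {
        uint32_t blocks[MAX_BLOCKS];
        int n = blocks_of(inum, blocks, MAX_BLOCKS);
        for (int i = 0; i < n; i++)
            reachable[blocks[i] / 8] |= 1u << (blocks[i] % 8);
    }

    void mark_reachable(int dir_inum, uint8_t *reachable)
    {
        int children[MAX_ENTRIES];

        mark_blocks(dir_inum, reachable);               /* the directory file's own blocks */

        int n = entries_of(dir_inum, children, MAX_ENTRIES);
        for (int i = 0; i < n; i++) {
            if (is_directory(children[i]))
                mark_reachable(children[i], reachable); /* recurse into subdirectories     */
            else
                mark_blocks(children[i], reachable);    /* mark a plain file's data blocks */
        }
    }
    /* Afterwards, compare `reachable` with the real bit map: any block marked
     * allocated on disk but not reachable here never made it into a file. */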

Directories

Motivation

Users need a way of finding the files that they created on disk. One approach is just to have users remember descriptor indexes.
Of course, users want to use text names to refer to files. Special disk structures called directories are used to tell what descriptor indices correspond to what names.
A hard concept to understand at the beginning: naming is one of the (if not the) most important issues in systems design.
Approach #1: have a single directory for the whole disk. Use a special area of disk to hold the directory.
  • Directory contains (name, descriptor index) pairs.
  • If one user uses a name, no-one else can.
Approach #2: have a separate directory for each user (TOPS-10 approach). This is still clumsy: names from different projects get confused.

Unix Directories

The Unix approach organizes the directory structure as a tree.
[Figure: Unix hierarchical directory tree]
  • Directories are stored on disk just like regular files (i.e. file descriptor with 13 pointers, etc.). User programs can read directories just like any other file (try it!). Only special system programs may write directories.
  • Each directory contains (name, descriptor index) pairs. The file pointed to by the index may itself be another directory. Hence we get a hierarchical tree structure, with names like /usr/local.
  • There is one special directory, called the root. This directory has no name, and is the file pointed to by descriptor 2 (descriptors 0 and 1 have other special purposes).
It is very nice that directories and file descriptors are separate, and the directories are implemented just like files. This simplifies the implementation and management of the structure (can write "normal" programs to manipulate them as files).
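As an illustration, path-name resolution is just repeated lookups of (name, descriptor index) pairs, one directory per path component, starting at the root (whose descriptor index is 2, as noted above). The helper dir_lookup below is a hypothetical stand-in for the code that reads a directory file and searches it.

    #include <string.h>

    #define ROOT_INUM 2

    /* Look up `name` in directory `dir_inum`; returns its i-number or -1. */
    extern int dir_lookup(int dir_inum, const char *name);

    int path_to_inum(const char *path)   /* assumes a full path, e.g. "/usr/local" */
    {
        char buf[256];
        strncpy(buf, path, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';

        int inum = ROOT_INUM;
        for (char *part = strtok(buf, "/");
             part != NULL && inum >= 0;
             part = strtok(NULL, "/"))
            inum = dir_lookup(inum, part);   /* descend one component at a time */
        return inum;
    }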
Working directory: it is cumbersome to have to constantly specify the full path name for all files.
  • In Unix, there is one directory per process, called the working directory, that the system remembers.
  • When it gets a file name, it assumes that the file is in the working directory. "/" is an escape to allow full path names.
  • Many systems allow more than one current directory. For example, check first in A, then in B, then in C. This set of directories is called the search path or search list. This is very convenient when working on large systems with many different programmers in different areas.
  • For example, in Unix the shell will automatically check in several places for programs. However, this is built into the shell, not into Unix, so if any other program wants to do the same, it has to rebuild the facilities from scratch.
  • This is yet another example of locality.

Windows (NT) File System

Background

The Windows file system is called NTFS. It was introduced with Windows NT and is the standard file system on Windows 2000 and later systems, such as Windows XP. Its goal was to solve the size, performance, reliability, and flexibility limitations of the DOS (aka "FAT") file system.
It has a general similarity to the FAT file system in that all files are described in a single table, called the Master File Table (MFT). However, it has more modern characteristics in that all components are files, including:
  • the Master File Table itself,
  • data files,
  • directories,
  • the free list (bit map),
  • boot images, and
  • recovery logs.
The file system also has features to support redundancy and transactions, which we will not discuss. A great reference for details is the book: Inside the Windows NT File System by Helen Custer, published by (not surprisingly) Microsoft Press.

Disk Layout

Disks are divided in fixed size regions:
  • Each region is called a volume.
  • Each volume can contain a different kind of file system, such as NTFS, FAT, or even Unix.
  • Since each volume is a separate file system, it has its own root directory.
  • Multiple volumes allow for fixed limits on the growth of a particular file tree, such as limiting the size of temporary file space.
  • Multiple volumes also allow a single disk to contain multiple, separately bootable operating systems.

Master File Table (MFT)

Clusters are the key element to allocation:
  • Logically, the disk consists of allocation units called clusters.
  • A cluster is a power-of-two multiple of the physical disk block size. The cluster size is set when the disk is formatted. A small cluster provides a finer granularity of allocation, but may require more space to describe the file and more separate operations to transfer data to or from memory.
  • The free list is a bitmap, each of whose bits describe one cluster.
  • Clusters on the disk are numbered starting from zero to the maximum number of clusters (minus one). These numbers are called logical cluster numbers (LCN) and are used to name blocks (clusters) on disk.
The MFT is the major, and in some ways, the only data structure on disk:
  • All files, and therefore all objects stored on disk are described by the MFT.
  • All files are logically stored in the MFT and, for small files, are physically stored within the bounds of the MFT. In this sense, the MFT is the file system.
  • The MFT logically can be described as a table with one row per file.
  • The first rows in the table describe important configuration files, including files for the MFT itself.
[Figure: Master File Table layout]

MFT Entries

As stated previously, each row or entry in the MFT (called a record) describes a file and logically contains the file. In the case of small files, the entry actually contains the contents of the file.
Each entry consists of (attribute, value) pairs. While the conceptual design of NTFS is such that this set of pairs is extensible to include user-defined attributes, current versions of NTFS have a fixed set. The main attributes are:
  • Standard information: This attribute includes the information that was standard in the MS-DOS world:
    • read/write permissions,
    • creation time,
    • last modification time,
    • count of how many directories point to this file (the hard link count).
  • File Name: This attribute describes the file's name in the Unicode character set. Multiple file names are possible, such as when:
    • the file has multiple links, or
    • the file has an MS-DOS short name.
  • Security Descriptor: This attribute lists which user owns the file and which users can access it (and how they can access it).
  • Data: This attribute either contains the actual file data in the case of a small file or points to the data (or points to the objects that point to the data) in the case of larger files.
[Figure: MFT entry with its attributes]
For small files, this design is extremely efficient. By looking no further than the MFT entry, you have the complete contents of the file.
However, the Data field gets interesting when the data contained in the file is larger than an MFT entry. When dealing with large data, the Data attribute contains pointers to the data, rather than the data itself.
  • The pointers to data are actually pointers to sequences of logical clusters on the disk.
  • Each sequence is identified by three parts:
    • starting cluster in the file, called the virtual cluster number (VCN),
    • starting logical cluster (LCN) of the sequence on disk,
    • length, counted as the number of clusters.
  • The run of clusters is called an extent, following the terminology developed by IBM in the 1960's.
  • NTFS allocates new extents as necessary. When there is no more space left in the MFT entry, then another MFT entry is allocated. This design is effectively a list of extents, rather than the Unix or DEMOS tree of extents.
[Figure: MFT Data attribute pointing to extents]
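A sketch of the VCN-to-LCN mapping this implies; the extent array below is a simplification of the packed "data runs" NTFS actually stores in the MFT entry.

    #include <stdint.h>

    /* One extent (run of clusters), with the three parts listed above. */
    struct extent {
        uint64_t vcn;     /* starting cluster within the file */
        uint64_t lcn;     /* starting logical cluster on disk */
        uint64_t length;  /* number of clusters in this run   */
    };

    /* Return the disk cluster holding file cluster `vcn`, or 0 if unmapped
     * (0 is used as a "not found" marker here purely for illustration). */
    uint64_t vcn_to_lcn(const struct extent *runs, int nruns, uint64_t vcn)
    {
        for (int i = 0; i < nruns; i++)
            if (vcn >= runs[i].vcn && vcn < runs[i].vcn + runs[i].length)
                return runs[i].lcn + (vcn - runs[i].vcn);
        return 0;
    }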

Directories

As with other modern file systems, a directory in NTFS is a file whose data contains a collection of name/file mappings.
  • A directory entry contains the name of the file and a file reference. The file reference identifies the file on this volume. In other words, it is an internal name for the file.
A reference is a (file number, sequence number) pair. The file number is the offset of the file's entry in the MFT. It is similar to the Unix i-number (i-node number).
  • The list of file names in a directory is not stored as a simple list, but rather as a lexicographically sorted tree, called a B+ tree (this will be familiar to those with a database background). The data structure is called an index in NTFS (again, following the terminology from databases).
  • The NTFS design specifies that an index can be constructed for any attribute, but currently only file name indices are supported.
  • The name for a file appears both in its directory entry and in the MFT entry for the file itself.
  • As with regular files, if the directory is small enough, it can fit entirely within the MFT entry.
[Figure: small directory stored within its MFT entry]
If the directory is larger, then the top part of (the B+ tree of) the directory is in the MFT entry, which points to extents that contain the rest of the name/file mappings.
[Figure: large directory with B+ tree extents]

File System Crash Recovery

Windows File System Crash Recovery

NTFS assures that the file system will remain consistent by use of a write log. This technique is similar to that used in a database system.
As in other file systems, consistency means that a write (or group of writes) to a file either complete or do not happen at all. It is not possible for a data block to be in an undefined state (e.g., allocated, but not written).
  • The log is one of those standard files stored at the beginning of the MFT. It is called, cleverly enough, the log file.
  • A simplified version of the steps to write data to a file look like:
    • A file update is written to the in-memory log buffer.
    • Updates to the in-memory file data and associated file system structures are made.
    • The log changes are flushed to disk.
    • The file data and structure changes are flushed to disk.
  • If the system crashes during a file update, it is sufficient to go through the log and re-do each operation specified in the log (a minimal write-ahead sketch follows this list).
  • The system occasionally creates checkpoints, so that it does not have to go back to the beginning of the log for recovery. Checkpoints have two main benefits:
    • Log files can be truncated, reducing the space needed for the log.
    • Recovery time is faster if fewer log records need to be processed.
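Here is the minimal write-ahead sketch mentioned above; the record format and helper names are invented, and the real NTFS log is considerably more involved.

    /* Hypothetical helpers; the essential rule is that the log reaches disk
     * before the data and metadata it describes ("write-ahead logging"). */
    extern void log_append(const void *record, int len);   /* to the in-memory log buffer */
    extern void log_flush(void);                            /* log buffer -> disk          */
    extern void apply_update_in_memory(const void *record);
    extern void flush_file_changes(void);                   /* data + structures -> disk   */

    void logged_file_update(const void *record, int len)
    {
        log_append(record, len);        /* 1. describe the change in the log */
        apply_update_in_memory(record); /* 2. make the change in memory      */
        log_flush();                    /* 3. the log goes to disk first...  */
        flush_file_changes();           /* 4. ...then the data and structures */
    }
    /* After a crash, redoing every logged operation (from the last checkpoint
     * onward) reproduces any updates whose data never reached the disk. */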

Disk Scheduling

Disk scheduling: in a system with many processes running, it can often be the case that there are several disk I/O's requested at the same time. The order in which the requests are serviced may have a strong effect on the overall performance of the disk.
First come first served (FIFO, FCFS): may result in a lot of unnecessary disk arm motion under heavy loads.
[Figure: FCFS disk scheduling]
Shortest seek time first (SSTF): handle nearest request first. This can reduce arm movement and result in greater overall disk efficiency, but some requests may have to wait a long time.
[Figure: SSTF disk scheduling]
Scan: like an elevator. Move arm back and forth, handling requests as they are passed. This algorithm does not get hung up in any one place for very long. It works well under heavy load, but not as well in the middle (about 1/2 the time it will not get the shortest seek).
[Figure: SCAN disk scheduling]
Minor variant: C-SCAN, which goes all the way back to the front of the disk when it hits the end, sort of like a raster scan in a display.
LOOK algorithm: like SCAN, but reverses direction when it hits the last request in the current direction. C-LOOK is the circular variant of LOOK. This is what most systems use in practice.
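As a rough sketch of the LOOK policy (cylinder numbers only, with no real request queueing): sweep in the current direction servicing everything pending, then reverse.

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch of LOOK: serve all pending requests in the current direction of
     * arm travel, then reverse. Requests are just cylinder numbers here; a
     * real scheduler would re-sort as new requests arrive. */
    static int cmp_int(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    void look_schedule(int head, int *req, int n)
    {
        qsort(req, n, sizeof *req, cmp_int);

        /* Sweep upward from the current head position... */
        for (int i = 0; i < n; i++)
            if (req[i] >= head)
                printf("service cylinder %d\n", req[i]);

        /* ...then reverse and sweep back down. */
        for (int i = n - 1; i >= 0; i--)
            if (req[i] < head)
                printf("service cylinder %d\n", req[i]);
    }

    int main(void)
    {
        int pending[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
        look_schedule(53, pending, 8);
        return 0;
    }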
