When to Schedule

A major issue related to scheduling is when to make scheduling decisions. It turns out that there are a variety of situations in which scheduling is required. First, when a new process is created, a decision needs to be made whether to run the parent process or the child process. Since both processes are in ready state, it is an ordinary scheduling decision and can go either way, that is, the scheduler can legitimately choose to run either the parent or the child next.
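Since both processes are ready after a fork, either ordering is a valid policy. The following toy sketch (all names here are illustrative, not any real kernel's API) shows how a scheduler could favor either the parent or the child:

```python
from collections import deque

# Hypothetical sketch: after a fork, parent and child are both READY,
# so the scheduler may legitimately run either one next.
ready = deque()

def fork(parent):
    """Make the parent and its new child runnable; return the child."""
    child = parent + "-child"
    ready.append(parent)
    ready.append(child)
    return child

def pick_next(child_first=False):
    # The policy can go either way: FIFO favors the parent,
    # LIFO favors the freshly created child.
    return ready.pop() if child_first else ready.popleft()
```

Some real systems do run the child first after a fork (to avoid needless copy-on-write faults if the child immediately calls exec), but either choice is correct.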

Second, a scheduling decision must be made when a process exits. That process can no longer run (since it no longer exists), so some other process must be chosen from the set of ready processes. If no process is ready, a system-supplied idle process is normally run.
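The exit case reduces to a simple rule: pick any ready process, and fall back to the idle process when none exists. A minimal sketch (function and process names are hypothetical):

```python
def schedule_after_exit(ready):
    # When the running process exits, some other ready process must be
    # chosen; with an empty ready set, the system idle process runs.
    return ready.pop(0) if ready else "idle"
```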

Third, when a process blocks on I/O, on a semaphore, or for some other reason, another process has to be selected to run. Sometimes the reason for blocking may play a role in the choice. For instance, if A is an important process and it is waiting for B to exit its critical region, letting B run next will allow it to exit its critical region and thus let A continue. The trouble, however, is that the scheduler usually does not have the necessary information to take this dependency into account.
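If the scheduler *did* have dependency information, it could exploit it. The sketch below (a hypothetical policy, not how real schedulers work, precisely because they usually lack this information) prefers the process holding a lock that some blocked process is waiting on:

```python
def pick_with_dependencies(ready, waits_for, holder):
    # ready: runnable process names.
    # waits_for: blocked process -> lock it is waiting for.
    # holder: lock -> process currently inside the critical region.
    # A dependency-aware scheduler would run the lock holder first,
    # so the blocked (possibly important) process can continue sooner.
    for lock in waits_for.values():
        h = holder.get(lock)
        if h in ready:
            return h
    return ready[0]
```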

Fourth, when an I/O interrupt takes place, a scheduling decision may be made. If the interrupt came from an I/O device that has now completed its work, some process that was blocked waiting for the I/O may now be ready to run. It is up to the scheduler to decide whether to run the newly ready process, the process that was running at the time of the interrupt, or some third process.
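One common way to make this three-way choice is by priority: run whichever candidate ranks highest, whether that is the newly unblocked process, the interrupted one, or some third ready process. A sketch assuming hypothetical static priorities:

```python
def on_io_interrupt(running, unblocked, ready, priority):
    # After an I/O completion interrupt, the candidates are the process
    # that was running, the process the interrupt unblocked, and every
    # other ready process; pick the one with the highest priority.
    candidates = [running, unblocked] + list(ready)
    return max(candidates, key=lambda p: priority[p])
```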

If a hardware clock provides periodic interrupts at 50 or 60 Hz or some other frequency, a scheduling decision can be made at each clock interrupt or at every k-th clock interrupt. Scheduling algorithms can be divided into two categories regarding how they deal with clock interrupts. A nonpreemptive scheduling algorithm picks a process to run and then just lets it run until it blocks (either on I/O or waiting for another process) or until it voluntarily releases the CPU. Even if it runs for hours, it will not be forcibly suspended. In effect, no scheduling decisions are made during clock interrupts. After clock interrupt processing has been finished, the process that was running before the interrupt is resumed, unless a higher-priority process was waiting for a now-satisfied timeout.
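Nonpreemptive behavior can be sketched as a loop in which each process simply runs until it is done, no matter how long that takes; the clock never forces a switch (names and structure here are illustrative only):

```python
def nonpreemptive(jobs):
    # jobs: list of (name, work) pairs. Each process runs until it
    # finishes (or would block); clock interrupts never force a switch,
    # so completion order is simply the order processes were picked.
    finished = []
    for name, work in jobs:
        remaining = work
        while remaining:          # runs to completion, however long
            remaining -= 1
        finished.append(name)
    return finished
```

Note that a long job delays every job behind it, which is exactly why this approach suits batch work better than interactive use.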

By contrast, a preemptive scheduling algorithm picks a process and lets it run for a maximum of some fixed time. If it is still running at the end of the time interval, it is suspended and the scheduler picks another process to run (if one is available). Doing preemptive scheduling requires having a clock interrupt take place at the end of the time interval to give control of the CPU back to the scheduler. If no clock is available, nonpreemptive scheduling is the only option.
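The simplest preemptive policy of this kind is round robin: each process runs for at most one quantum before the clock interrupt hands the CPU back to the scheduler, which requeues the preempted process. A minimal simulation sketch (the quantum and job lengths are made up for illustration):

```python
from collections import deque

def round_robin(jobs, quantum):
    # jobs: dict of name -> remaining work units. Each process runs at
    # most `quantum` units before the clock interrupt preempts it and
    # the scheduler moves on to the next ready process.
    queue = deque(jobs.items())
    trace = []                      # (process, units run) per turn
    while queue:
        name, left = queue.popleft()
        ran = min(quantum, left)
        trace.append((name, ran))
        if left > ran:
            queue.append((name, left - ran))  # preempted: requeue
    return trace
```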

Categories of Scheduling Algorithms

Not surprisingly, different environments require different scheduling algorithms. This is because different application areas (and different kinds of operating systems) have different objectives. In other words, what the scheduler should optimize for is not the same in all systems. Three environments worth distinguishing are

1. Batch.
2. Interactive.
3. Real time.

Batch systems are still in extensive use in the business world for doing payroll, inventory, accounts receivable, accounts payable, interest calculation (at banks), claims processing (at insurance companies), and other periodic tasks. In batch systems, there are no users anxiously waiting at their terminals for a quick response to a short request. As a result, nonpreemptive algorithms, or preemptive algorithms with long time periods for each process, are frequently acceptable. This approach reduces process switches and hence improves performance. The batch algorithms are in fact fairly general and often applicable to other situations as well, which makes them worth studying, even for people not involved in corporate mainframe computing.

In an environment with interactive users, preemption is necessary to keep one process from hogging the CPU and denying service to the others. Even if no process intentionally ran forever, one process might shut out all the others indefinitely due to a program bug. Preemption is required to prevent this behavior. Servers also fall into this category, since they usually serve multiple (remote) users, all of whom are in a big hurry.

In systems with real-time constraints, preemption is, oddly enough, sometimes not required because the processes know that they may not run for long periods of time and generally do their work and block quickly. The difference with interactive systems is that real-time systems run only programs that are intended to further the application at hand. Interactive systems are general purpose and may run arbitrary programs that are not cooperative or even malicious.

