Cleaning Policy / Virtual Memory Interface

Cleaning Policy

Paging works best when there is a plentiful supply of free page frames that can be claimed as page faults occur. If every page frame is full, and furthermore modified, then before a new page can be brought in, an old page must first be written to disk. To ensure a steady supply of free page frames, many paging systems have a background process, called the paging daemon, that sleeps most of the time but is awakened periodically to inspect the state of memory. If too few page frames are free, the paging daemon begins selecting pages to evict using some page replacement algorithm. If these pages have been modified since being loaded, they are written to disk.
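A minimal sketch of such a daemon is shown below, in C, assuming a simple frame table and illustrative names (NFRAMES, LOW_WATERMARK, pick_victim, and so on); a real kernel keeps this state in its own data structures, and the stand-in helpers here exist only to make the sketch compile and run.

#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

#define NFRAMES        1024
#define LOW_WATERMARK    64   /* start evicting when fewer frames are free */
#define HIGH_WATERMARK  256   /* stop once this many frames are free       */

struct frame { bool in_use, dirty; };
static struct frame frames[NFRAMES];

static int free_frame_count(void) {
    int n = 0;
    for (int i = 0; i < NFRAMES; i++)
        if (!frames[i].in_use)
            n++;
    return n;
}

/* Stand-ins for the real machinery. */
static int  pick_victim(void)          { static int hand; return hand++ % NFRAMES; }
static void write_frame_to_disk(int f) { frames[f].dirty = false; }
static void add_to_free_pool(int f)    { frames[f].in_use = false; }

static void *paging_daemon(void *arg) {
    (void)arg;
    for (;;) {
        sleep(1);                            /* asleep most of the time */
        if (free_frame_count() >= LOW_WATERMARK)
            continue;                        /* still plenty of free frames */
        while (free_frame_count() < HIGH_WATERMARK) {
            int victim = pick_victim();      /* some replacement algorithm */
            if (frames[victim].dirty)
                write_frame_to_disk(victim); /* modified pages go to disk first */
            add_to_free_pool(victim);        /* old contents kept until reused */
        }
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    for (int i = 0; i < NFRAMES; i++)
        frames[i].in_use = true;             /* pretend memory starts out full */
    pthread_create(&tid, NULL, paging_daemon, NULL);
    pause();                                 /* the rest of the system runs here */
    return 0;
}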

In any event, the previous contents of the page are remembered. If one of the evicted pages is needed again before its frame has been overwritten, it can be reclaimed by removing it from the pool of free page frames. Keeping a supply of free page frames around yields better performance than using all of memory and then trying to find a frame at the moment it is needed. At the very least, the paging daemon ensures that all the free frames are clean, so they need not be written to disk in a big hurry when they are needed.

One method to implement this cleaning policy is with a two-handed clock. The front hand is controlled by the paging daemon. When it points to a dirty page, that page is written back to disk and the front hand is advanced. When it points to a clean page, it is just advanced. The back hand is used for page replacement, as in the standard clock algorithm. Only now, the chance of the back hand hitting a clean page is increased due to the work of the paging daemon.
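The sketch below illustrates the two hands over a deliberately tiny circular frame table; the referenced and dirty bits are assumed to be maintained by the hardware, and the names and the stub write_back are illustrative rather than taken from any particular system.

#include <stdbool.h>
#include <stdio.h>

#define NFRAMES 8            /* small, just for the demonstration */

struct frame { bool referenced, dirty; };
static struct frame frames[NFRAMES];
static int front_hand;       /* advanced by the paging daemon: cleans  */
static int back_hand;        /* advanced by page replacement: reclaims */

static void write_back(int f) { frames[f].dirty = false; /* disk write goes here */ }

/* Paging daemon work: write a dirty page back so the back hand will later
 * find it clean; advance the front hand either way. */
static void clean_one(void) {
    if (frames[front_hand].dirty)
        write_back(front_hand);
    front_hand = (front_hand + 1) % NFRAMES;
}

/* Page replacement: the standard clock algorithm, using the back hand. */
static int reclaim_one(void) {
    for (;;) {
        int f = back_hand;
        back_hand = (back_hand + 1) % NFRAMES;
        if (frames[f].referenced) {
            frames[f].referenced = false;   /* give it a second chance */
        } else {
            if (frames[f].dirty)
                write_back(f);              /* rare if the daemon keeps up */
            return f;                       /* clean, unreferenced victim */
        }
    }
}

int main(void) {
    frames[0].referenced = true;            /* recently used */
    frames[1].dirty = true;                 /* pretend this page was modified */
    for (int i = 0; i < NFRAMES; i++)
        clean_one();                        /* the daemon sweeps the whole ring */
    printf("victim frame: %d\n", reclaim_one());
    return 0;
}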

Virtual Memory Interface

So far our whole discussion has assumed that virtual memory is transparent to processes and programmers; that is, all they see is a large virtual address space on a computer with a small(er) physical memory. On many systems that is true, but in some advanced systems, programmers have some control over the memory map and can use it in nontraditional ways to enhance program behavior. In this section, we will briefly consider a few of these.

One reason for giving programmers control over their memory map is to allow two or more processes to share the same memory. If programmers can name regions of their memory, it may be possible for one process to give another process the name of a memory region so that the second process can also map it in. With two (or more) processes sharing the same pages, high-bandwidth sharing becomes possible: one process writes into the shared memory and another one reads from it.
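On POSIX systems, one concrete form this naming takes is shm_open: a region created under a name by one process can be mapped in by any other process that knows that name. The sketch below illustrates the idea with a parent and child; the name /demo_region and the sizes are made up, error checking is omitted, and older glibc versions need -lrt when linking.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define REGION_NAME "/demo_region"
#define REGION_SIZE 4096

int main(void) {
    /* Producer: create the named region, map it, and write into it. */
    int fd = shm_open(REGION_NAME, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, REGION_SIZE);
    char *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(p, "written by the producer");

    if (fork() == 0) {
        /* Consumer: any process that knows the name can map the same pages. */
        int cfd = shm_open(REGION_NAME, O_RDONLY, 0);
        char *q = mmap(NULL, REGION_SIZE, PROT_READ, MAP_SHARED, cfd, 0);
        printf("consumer sees: %s\n", q);
        _exit(0);
    }

    wait(NULL);
    shm_unlink(REGION_NAME);        /* remove the name when done */
    return 0;
}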

Sharing of pages can also be used to implement a high-performance message-passing system. Normally, when messages are passed, the data are copied from one address space to another, at considerable cost. If processes can control their page map, a message can be passed by having the sending process unmap the page(s) containing the message and the receiving process map them in. Here only the page names have to be copied, instead of all the data.
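Most systems do not let a user process literally hand one of its mapped pages to another process, but a comparable zero-copy effect can be obtained on Linux by building the message in anonymous pages (memfd_create) and sending only the descriptor that names those pages over a UNIX-domain socket. The sketch below is illustrative and Linux-specific; error checking is omitted.

#define _GNU_SOURCE                 /* for memfd_create (Linux, glibc >= 2.27) */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/wait.h>

static void send_fd(int sock, int fd) {
    char dummy = 'x';                          /* at least one byte of real data */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } u;
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = u.buf, .msg_controllen = sizeof(u.buf) };
    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    c->cmsg_level = SOL_SOCKET;
    c->cmsg_type  = SCM_RIGHTS;                /* "here is a descriptor"        */
    c->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(c), &fd, sizeof(int));
    sendmsg(sock, &msg, 0);
}

static int recv_fd(int sock) {
    char dummy;
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } u;
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = u.buf, .msg_controllen = sizeof(u.buf) };
    recvmsg(sock, &msg, 0);
    int fd;
    memcpy(&fd, CMSG_DATA(CMSG_FIRSTHDR(&msg)), sizeof(int));
    return fd;
}

int main(void) {
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {                         /* receiver */
        close(sv[0]);
        int fd = recv_fd(sv[1]);
        char *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        printf("receiver mapped the message: %s\n", p);
        _exit(0);
    }

    close(sv[1]);                              /* sender */
    int fd = memfd_create("msg", 0);           /* anonymous pages for the message */
    ftruncate(fd, 4096);
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(p, "hello, passed without copying the data");
    munmap(p, 4096);                           /* sender drops its own mapping   */
    send_fd(sv[0], fd);                        /* only the page "name" is copied */
    close(fd);
    wait(NULL);
    return 0;
}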

Yet another advanced memory management technique is distributed shared memory (Feeley et al., 1995; Li, 1986; Li and Hudak, 1989; Zekauskas et al., 1994). The idea here is to allow multiple processes over a network to share a set of pages, possibly, but not necessarily, as a single shared linear address space. When a process references a page that is not currently mapped in, it gets a page fault. The page fault handler, which may be in the kernel or in user space, then locates the machine holding the page and sends it a message asking it to unmap the page and send it over the network. When the page arrives, it is mapped in and the faulting instruction is restarted.
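A user-space fault handler of this kind can be sketched on a POSIX system by protecting the shared region with PROT_NONE, catching SIGSEGV, and mapping each page in on first touch; here the remote fetch is only simulated by filling the page locally. (Calling mprotect from a signal handler is not formally async-signal-safe, but it is the usual demonstration technique for synchronous faults; a production system would use a mechanism such as a dedicated fault-handling thread.)

#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

static long page_size;
static char *region;

/* User-space "page fault handler": make the touched page accessible and
 * fill it as if it had just arrived from the node that owned it.  In a real
 * DSM system this is where the page request would go out over the network. */
static void dsm_fault_handler(int sig, siginfo_t *si, void *ctx) {
    (void)sig; (void)ctx;
    char *page = (char *)((uintptr_t)si->si_addr & ~(uintptr_t)(page_size - 1));
    mprotect(page, page_size, PROT_READ | PROT_WRITE);
    memset(page, 'A' + (int)((page - region) / page_size), page_size);
}   /* returning from the handler restarts the faulting instruction */

int main(void) {
    page_size = sysconf(_SC_PAGESIZE);

    /* The "shared" region starts out with no pages mapped in. */
    region = mmap(NULL, 4 * page_size, PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = dsm_fault_handler;
    sigaction(SIGSEGV, &sa, NULL);

    /* Each first touch faults, is handled in user space, and is restarted. */
    printf("%c %c\n", region[0], region[3 * page_size]);
    return 0;
}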

