Managing Free Memory

When memory is allocated dynamically, the operating system must manage it. Generally, there are two methods to keep track of memory usage: bitmaps and free lists. In this section and the next one we will study these two methods.

Memory Management with Bitmaps

With a bitmap, memory is divided into allocation units as small as a few words and as large as several kilobytes. Corresponding to each allocation unit is a bit in the bitmap, which is 0 if the unit is free and 1 if it is occupied (or vice versa). Figure 1 illustrates part of memory and the corresponding bitmap.

Figure 1. A part of memory with five processes and three holes

The size of the allocation unit is an important design issue. The smaller the allocation unit, the larger the bitmap. Nevertheless, even with an allocation unit as small as 4 bytes, 32 bits of memory will require only 1 bit of the map. A memory of 32n bits will use n map bits, so the bitmap will take up only 1/33 of memory. If the allocation unit is chosen large, the bitmap will be smaller, but appreciable memory may be wasted in the last unit of the process if the process size is not an exact multiple of the allocation unit.

A bitmap provides a simple way to keep track of memory words in a fixed amount of memory because the size of the bitmap depends only on the size of memory and the size of the allocation unit. The main difficulty is that when it has been decided to bring a k-unit process into memory, the memory manager must search the bitmap to find a run of k consecutive 0 bits in the map. Searching a bitmap for a run of a given length is a slow operation (because the run may straddle word boundaries in the map); this is an argument against bitmaps.
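The search just described can be sketched in C. This is only an illustration, not code from the text: the bitmap is assumed to be stored as an array of unsigned words, with bit i of the map covering allocation unit i.

```c
#include <limits.h>

#define BITS_PER_WORD (sizeof(unsigned) * CHAR_BIT)

/* Return 1 if bit i of the map is set (unit i in use), else 0. */
static int test_bit(const unsigned *map, int i) {
    return (map[i / BITS_PER_WORD] >> (i % BITS_PER_WORD)) & 1u;
}

/* Scan an n-unit bitmap for a run of k consecutive 0 bits.
 * Returns the index of the first unit of the run, or -1 if none
 * exists.  The scan is bit by bit, so runs that straddle word
 * boundaries are handled correctly, but the cost is linear in the
 * size of memory -- the slowness the text describes. */
int find_free_run(const unsigned *map, int n, int k) {
    int run = 0;
    for (int i = 0; i < n; i++) {
        if (test_bit(map, i))
            run = 0;              /* unit in use: restart the run */
        else if (++run == k)
            return i - k + 1;     /* k free units ending at unit i */
    }
    return -1;
}
```

A real implementation would skip over all-ones words to speed up the common case, but the search remains linear in the size of memory.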

Memory Management with Linked Lists

A different way of keeping track of memory is to maintain a linked list of assigned and free memory segments, where a segment either includes a process or is an empty hole between two processes. The memory of Figure 1(a) is represented in Figure 1(c) as a linked list of segments. Each entry in the list specifies a hole (H) or process (P), the address at which it starts, the length, and a pointer to the next entry.

In this example, the segment list is kept sorted by address. Sorting this way has the benefit that when a process ends or is swapped out, updating the list is straightforward. A terminating process usually has two neighbors (except when it is at the very top or bottom of memory). These may be either processes or holes, leading to the four combinations of Figure 2. In Figure 2(a) updating the list requires replacing a P by an H. In Figure 2(b) and Figure 2(c), two entries are combined into one, and the list becomes one entry shorter. In Figure 2(d), three entries are combined and two items are removed from the list.

Since the process table slot for the terminating process will usually point to the list entry for the process itself, it may be more convenient to have the list as a doubly linked list, rather than the singly linked list of Figure 1(c). This structure makes it easier to find the previous entry and to see if a merge is possible.

Figure 2. Four neighbor combinations for the terminating process, X
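The four cases can be sketched with a doubly linked segment list, as suggested above. The structure and function names here are illustrative assumptions, not from the text; for compactness the helper `demo` builds a three-segment list, terminates the middle process, and reports the length of the resulting hole.

```c
#include <stdlib.h>

enum seg_type { HOLE, PROC };

/* One segment: a process or a hole, as in the list of Figure 1(c),
 * but doubly linked so the previous neighbor is easy to reach. */
struct seg {
    enum seg_type type;
    int start, len;
    struct seg *prev, *next;
};

/* Absorb s->next into s (both must be holes). */
static void merge_next(struct seg *s) {
    struct seg *n = s->next;
    s->len += n->len;
    s->next = n->next;
    if (n->next) n->next->prev = s;
    free(n);
}

/* Terminate process segment x: turn it into a hole and coalesce with
 * any hole neighbors, covering the four cases of Figure 2. */
struct seg *terminate(struct seg *x) {
    x->type = HOLE;
    if (x->next && x->next->type == HOLE)   /* hole after: 2(b), 2(d) */
        merge_next(x);
    if (x->prev && x->prev->type == HOLE) { /* hole before: 2(c), 2(d) */
        x = x->prev;
        merge_next(x);
    }
    return x;   /* the resulting hole */
}

/* Test scaffolding: build three adjacent unit-length segments with
 * the given outer types, terminate the middle process, and return
 * the length of the resulting hole (1, 2, or 3). */
int demo(enum seg_type a, enum seg_type c) {
    struct seg *s0 = calloc(1, sizeof *s0), *s1 = calloc(1, sizeof *s1),
               *s2 = calloc(1, sizeof *s2);
    s0->type = a; s1->type = PROC; s2->type = c;
    s0->len = s1->len = s2->len = 1;
    s0->start = 0; s1->start = 1; s2->start = 2;
    s0->next = s1; s1->prev = s0; s1->next = s2; s2->prev = s1;
    return terminate(s1)->len;
}
```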

When the processes and holes are kept on a list sorted by address, various algorithms can be used to assign memory for a created process (or an existing process being swapped in from disk). We suppose that the memory manager knows how much memory to assign. The simplest algorithm is first fit. The memory manager scans along the list of segments until it finds a hole that is big enough. The hole is then broken up into two pieces, one for the process and one for the unused memory, except in the statistically unlikely case of an exact fit. First fit is a fast algorithm because it searches as little as possible.
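First fit can be sketched over an address-sorted list of holes. This is a minimal illustration under assumed names (`struct hole`, `make_holes` is test scaffolding invented for the example); only holes are represented, since only they matter to the search.

```c
#include <stdlib.h>

struct hole {
    int start, len;
    struct hole *next;          /* next hole, in address order */
};

/* Test scaffolding: build a hole list from parallel arrays of start
 * addresses and lengths (addresses must be ascending). */
struct hole *make_holes(const int *start, const int *len, int n) {
    struct hole *head = NULL, **pp = &head;
    for (int i = 0; i < n; i++) {
        struct hole *h = malloc(sizeof *h);
        h->start = start[i];
        h->len = len[i];
        h->next = NULL;
        *pp = h;
        pp = &h->next;
    }
    return head;
}

/* First fit: take the first hole that is big enough.  An exact fit
 * removes the hole from the list; otherwise the allocation is carved
 * off the front and the remainder stays on the list as a smaller
 * hole.  Returns the allocation's start address, or -1 on failure. */
int first_fit(struct hole **list, int size) {
    for (struct hole **pp = list; *pp; pp = &(*pp)->next) {
        struct hole *h = *pp;
        if (h->len < size)
            continue;
        int addr = h->start;
        if (h->len == size) {   /* the statistically unlikely exact fit */
            *pp = h->next;
            free(h);
        } else {
            h->start += size;   /* split into allocation + smaller hole */
            h->len -= size;
        }
        return addr;
    }
    return -1;
}
```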

A slight variation of first fit is next fit. It works the same way as first fit, except that it keeps track of where it is whenever it finds a suitable hole. The next time it is called to find a hole, it starts searching the list from the place where it left off last time, instead of always at the beginning, as first fit does. Simulations by Bays (1977) show that next fit gives slightly worse performance than first fit.
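The roaming pointer can be sketched as follows. This simplification (an assumption for the example, not from the text) tracks only hole lengths in a fixed array; `cursor` remembers where the previous search stopped, and the scan resumes there, wrapping around the end of the list.

```c
/* Next fit over a fixed array of hole lengths.  'cursor' remembers
 * where the previous search stopped; the scan resumes there and
 * wraps around.  On success the allocation is carved off the hole
 * and the hole's index is returned; otherwise -1. */
int next_fit(int *len, int n, int size, int *cursor) {
    for (int tries = 0; tries < n; tries++) {
        int i = (*cursor + tries) % n;
        if (len[i] >= size) {
            len[i] -= size;     /* carve the allocation off the hole */
            *cursor = i;        /* resume here next time */
            return i;
        }
    }
    return -1;
}
```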

Another well-known and extensively used algorithm is best fit. Best fit searches the whole list, from beginning to end, and takes the smallest hole that is sufficient. Rather than breaking up a big hole that might be required later, best fit tries to find a hole that is close to the actual size required, to best match the request and the available holes.

As an example of first fit and best fit, consider Figure 1 again. If a block of size 2 is needed, first fit will allocate the hole at 5, but best fit will allocate the hole at 18.
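The comparison can be checked with a few lines of C. The hole lengths used below (3 units at address 5, 2 at address 18, 3 at address 26) are assumptions read off Figure 1; the text itself only gives the starting addresses.

```c
/* First fit and best fit over parallel arrays of hole start
 * addresses and lengths.  Both return the start address of the
 * chosen hole, or -1 if no hole is big enough. */
int first_fit_addr(const int *start, const int *len, int n, int size) {
    for (int i = 0; i < n; i++)
        if (len[i] >= size)
            return start[i];    /* first adequate hole wins */
    return -1;
}

int best_fit_addr(const int *start, const int *len, int n, int size) {
    int best = -1;              /* index of smallest adequate hole */
    for (int i = 0; i < n; i++)
        if (len[i] >= size && (best < 0 || len[i] < len[best]))
            best = i;
    return best < 0 ? -1 : start[best];
}
```

Note that best fit must examine all n holes before it can answer, while first fit may stop after the first one, which is the performance difference discussed below.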

Best fit is slower than first fit because it must search the whole list every time it is called. Somewhat surprisingly, it also results in more wasted memory than first fit or next fit because it tends to fill up memory with tiny, useless holes. First fit generates larger holes on the average.

To get around the problem of breaking up nearly exact matches into a process and a tiny hole, one could think about worst fit, that is, always take the largest available hole, so that the new hole will be big enough to be useful. Simulation has shown that worst fit is not a very good idea either.

All four algorithms can be sped up by maintaining separate lists for processes and holes. In this way, all of them devote their full energy to inspecting holes, not processes. The inevitable price paid for this speedup on allocation is additional complexity and a slowdown when deallocating memory, since a freed segment must be removed from the process list and inserted into the hole list.

If distinct lists are maintained for processes and holes, the hole list may be kept sorted on size, to make best fit faster. When best fit searches a list of holes from smallest to largest, as soon as it finds a hole that fits, it knows that the hole is the smallest one that will do the job, hence the best fit. No further searching is required, as it is with the single list scheme. With a hole list sorted by size, first fit and best fit are equally fast, and next fit is pointless.

When the holes are kept on separate lists from the processes, a small optimization is possible. Instead of having a separate set of data structures for maintaining the hole list, as is done in Figure 1(c), the information can be stored in the holes themselves. The first word of each hole could be the hole size, and the second word a pointer to the following entry. The nodes of the list of Figure 1(c), which require three words and one bit (P/H), are no longer needed.
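In other words, the first two words of a hole can serve as the list node directly. A minimal sketch, with the structure and function names being assumptions for illustration:

```c
#include <stddef.h>

/* Free-list node stored inside the hole itself: the first word holds
 * the hole's size, the second a pointer to the next hole.  No
 * separate list nodes are needed. */
struct hole {
    size_t size;         /* first word: size of this hole in bytes */
    struct hole *next;   /* second word: next hole on the list */
};

/* Turn a freed region of memory of 'size' bytes into a hole and push
 * it onto the front of the free list.  The region itself provides
 * the storage for the node. */
struct hole *free_region(void *mem, size_t size, struct hole *list) {
    struct hole *h = (struct hole *)mem;
    h->size = size;
    h->next = list;
    return h;            /* new head of the hole list */
}
```

This is essentially how many real allocators store their free-list metadata, at the cost of requiring each hole to be at least two words long and suitably aligned.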

Yet another allocation algorithm is quick fit, which maintains separate lists for some of the more common sizes requested. For instance, it might have a table with n entries, in which the first entry is a pointer to the head of a list of 4-KB holes, the second entry is a pointer to a list of 8-KB holes, the third entry a pointer to 12-KB holes, and so on. Holes of, say, 21 KB, could be put on either the 20-KB list or on a special list of odd-sized holes.
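Quick fit's table can be sketched as an array of heads of size-segregated lists. The parameters below (4-KB granularity, four exact-size lists plus one list of odd-sized holes) and all names are illustrative assumptions matching the example in the text.

```c
#include <stddef.h>

#define NSIZES 4                  /* exact-size lists: 4, 8, 12, 16 KB */
#define GRAIN  4                  /* common sizes are multiples of 4 KB */

struct qhole { int start; struct qhole *next; };

struct quickfit {
    struct qhole *bysize[NSIZES]; /* bysize[i]: holes of (i+1)*4 KB */
    struct qhole *odd;            /* odd-sized holes, e.g. 21 KB */
};

/* Pick the list a hole of 'kb' kilobytes belongs on. */
static struct qhole **bucket(struct quickfit *q, int kb) {
    if (kb % GRAIN == 0 && kb / GRAIN <= NSIZES)
        return &q->bysize[kb / GRAIN - 1];
    return &q->odd;
}

/* Allocate a hole of exactly 'kb' kilobytes: a constant-time pop from
 * the matching list.  Returns its start address, or -1 if that list
 * is empty (a real allocator would then fall back to splitting a
 * larger or odd-sized hole). */
int qf_alloc(struct quickfit *q, int kb) {
    struct qhole **pp = bucket(q, kb);
    if (*pp == NULL)
        return -1;
    int addr = (*pp)->start;
    *pp = (*pp)->next;
    return addr;
}

/* Return a hole to the appropriate list; 'node' supplies the storage
 * (in practice it would live inside the hole, as described above). */
void qf_free(struct quickfit *q, int start, int kb, struct qhole *node) {
    node->start = start;
    struct qhole **pp = bucket(q, kb);
    node->next = *pp;
    *pp = node;
}
```

Lookup touches only one list head, which is what makes allocation so fast; note that nothing here merges adjacent holes, which is exactly the weakness discussed next.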

With quick fit, finding a hole of the required size is extremely fast, but it has the same disadvantage as all schemes that sort by hole size, namely, when a process terminates or is swapped out, finding its neighbors to see if a merge is possible is expensive. If merging is not done, memory will quickly fragment into a large number of small holes into which no processes fit.
