The virtual memory system distributed with Berkeley UNIX has served its design goals admirably well over the ten years of its existence. However, the relentless advance of technology has begun to render it obsolete. This section of the paper describes the current design, points out the current technological trends, and attempts to define the design considerations that should be taken into account in a new virtual memory implementation.
All Berkeley Software Distributions through 4.3BSD have used the same virtual memory design. All processes, whether active or sleeping, have some amount of virtual address space associated with them. This virtual address space is the combination of the amount of address space with which they initially started plus any stack or heap expansions that they have made. All requests for address space are allocated from available swap space at the time that they are first made; if there is insufficient swap space left to honor the allocation, the system call requesting the address space fails synchronously. Thus, the limit to available virtual memory is established by the amount of swap space allocated to the system.
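To make this policy concrete, the following fragment sketches how such a synchronous reservation might look; the routine and variable names (vm_reserve, vm_release, swap_blocks_free) are purely illustrative and do not correspond to the actual kernel interfaces.

    #include <errno.h>
    #include <stddef.h>

    static size_t swap_blocks_free;    /* swap blocks not yet reserved */

    /*
     * Reserve backing store at the time address space is requested; the
     * request fails synchronously if insufficient swap space remains.
     */
    int
    vm_reserve(size_t nblocks)
    {
        if (nblocks > swap_blocks_free)
            return (ENOMEM);           /* e.g. sbrk() or fork() fails immediately */
        swap_blocks_free -= nblocks;
        return (0);
    }

    /* The reservation is returned when the process shrinks or exits. */
    void
    vm_release(size_t nblocks)
    {
        swap_blocks_free += nblocks;
    }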
Memory pages are used in a sort of shell game to contain the contents of recently accessed locations. As a process first references a location, a new page is allocated and filled either with initialized data or with zeros (for new stack and break pages). As the supply of free pages begins to run out, dirty pages are pushed to the previously allocated swap space so that they can be reused to contain newly faulted pages. If a previously accessed page that has been pushed to swap is once again used, a free page is reallocated and filled from the swap area [Babaoglu79], [Someren84].
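The fault path implied by this scheme can be sketched as follows; the structures and helper routines (page_alloc, choose_victim, swap_read, and so on) are hypothetical stand-ins rather than the actual kernel code.

    struct page { int frame; int dirty; long swapblk; };
    struct pte  { int frame; int on_swap; int from_file; long swapblk; void *object; long offset; };

    extern struct page *page_alloc(void);     /* take a page from the free list */
    extern struct page *choose_victim(void);  /* pick a page to reclaim */
    extern void page_free(struct page *);
    extern void swap_write(long, struct page *);
    extern void swap_read(long, struct page *);
    extern void file_read(void *, long, struct page *);
    extern void zero_fill(struct page *);

    struct page *
    fault_in(struct pte *pte)
    {
        struct page *pg;

        /* If the free list is empty, push a dirty victim to its swap block and reuse it. */
        while ((pg = page_alloc()) == NULL) {
            struct page *victim = choose_victim();

            if (victim->dirty)
                swap_write(victim->swapblk, victim);
            page_free(victim);
        }

        if (pte->on_swap)
            swap_read(pte->swapblk, pg);              /* reread a page pushed to swap */
        else if (pte->from_file)
            file_read(pte->object, pte->offset, pg);  /* initialized data */
        else
            zero_fill(pg);                            /* new stack or break page */

        pte->frame = pg->frame;
        return (pg);
    }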
The design criteria for the current virtual memory implementation were established in 1979. At that time the cost of memory was about a thousand times greater per byte than that of magnetic disks. Most machines were used as centralized time sharing systems. These machines had far more disk storage than memory; given the cost tradeoff between memory and disk storage, the goal was to make maximal use of the memory, even at the cost of wasting some of the disk space or generating extra disk I/O.
The primary motivation for virtual memory was to allow the system to run individual programs whose address space exceeded the memory capacity of the machine. Thus the virtual memory capability allowed programs to be run that could not have been run on a swap-based system. Equally important in the large central timesharing environment was the ability to allow the sum of the memory requirements of all active processes to exceed the amount of physical memory on the machine. The expected mode of operation for which the system was tuned was to have the sum of active virtual memory be one and a half to two times the physical memory on the machine.
At the time that the virtual memory system was designed, most machines ran with little or no networking. All the file systems were contained on disks that were directly connected to the machine. Similarly all the disk space devoted to swap space was also directly connected. Thus the speed and latency with which file systems could be accessed were roughly equivalent to the speed and latency with which swap space could be accessed. Given the high cost of memory, there was little incentive to have the kernel keep track of the contents of the swap area once a process exited, since the data could almost as easily and quickly be reread from the file system.
In the ten years since the current virtual memory system was designed, many technological advances have occurred. One effect of the technological revolution is that the microprocessor has become powerful enough to allow users to have their own personal workstations. Thus the computing environment is moving away from a purely centralized time sharing model toward an environment in which users have a computer on their desk. This workstation is linked through a network to a centralized pool of machines that provide filing, computing, and spooling services. The workstations tend to have a large quantity of memory, but little or no disk space. Because users do not want to be bothered with backing up their disks, and because of the difficulty of having a centralized administration back up hundreds of small disks, these local disks are typically used only for temporary storage and as swap space. Long term storage is managed by the central file server.
Another major technical advance has been in storage technology at all levels. In the last ten years we have experienced a factor of four decrease in the cost per byte of disk storage. In this same period of time the cost per byte of memory has dropped by a factor of a hundred! Thus the ratio of the cost per byte of memory to the cost per byte of disk is approaching only about a factor of ten. The effect of this change is that the way in which a machine is used is beginning to change dramatically. As the amount of physical memory on machines increases and the number of users per machine decreases, the expected mode of operation is changing from supporting more active virtual memory than physical memory to having a surplus of memory that can be used for other purposes.
Because many machines will have more physical memory than they do swap space (with diskless workstations as an extreme example!), it is no longer reasonable to limit the maximum virtual memory to the amount of swap space as is done in the current design. Consequently, the new design will allow the maximum virtual memory to be the sum of physical memory plus swap space. For machines with no swap space, the maximum virtual memory will be governed by the amount of physical memory.
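In outline the new limit is simply the sum of the two resources, as the following fragment illustrates; the counters phys_pages and swap_pages are invented names for pageable physical memory and configured swap space.

    extern long phys_pages;   /* pageable physical memory, in pages */
    extern long swap_pages;   /* configured swap space, in pages (zero if diskless) */

    long
    vm_limit(void)
    {
        /* Old design: swap_pages alone; new design: memory plus swap. */
        return (phys_pages + swap_pages);
    }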
Another effect of the current technology is that the latency and overhead associated with accessing the file system are considerably higher, since the access must be over the network rather than to a locally-attached disk. One use of the surplus memory would be to maintain a cache of recently used files; repeated uses of these files would require at most a verification from the file server that the data was up to date. Under the current design, file caching is done by the buffer pool, while free memory is maintained in a separate pool. The new design should have only a single memory pool so that any free memory can be used to cache recently accessed files.
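The following fragment sketches what a lookup in such a unified pool might look like; the names (cache_lookup, server_validate, server_read, and so on) are invented for illustration and do not describe any particular implementation.

    struct vobject;                        /* identifies a cached file */
    struct page;

    extern struct page *cache_lookup(struct vobject *, long);
    extern struct page *page_alloc(void);  /* any free page, from the single pool */
    extern void cache_enter(struct vobject *, long, struct page *);
    extern int  server_validate(struct vobject *);  /* is the cached copy still current? */
    extern void server_read(struct vobject *, long, struct page *);

    struct page *
    file_page(struct vobject *obj, long offset)
    {
        struct page *pg = cache_lookup(obj, offset);

        if (pg != NULL) {
            if (server_validate(obj))
                return (pg);          /* hit: at most a validation round trip */
        } else {
            pg = page_alloc();        /* miss: any free page may cache file data */
            cache_enter(obj, offset, pg);
        }
        server_read(obj, offset, pg); /* stale or absent: fetch from the server */
        return (pg);
    }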
Another portion of the memory will be used to keep track of the contents of the blocks on any locally-attached swap space, analogously to the way that memory pages are handled. Thus inactive swap blocks can also be used to cache less-recently-used file data. Since the swap disk is locally attached, it can be accessed much more quickly than a remotely located file system. This design allows the user to simply allocate the entire local disk to swap space, thus allowing the system to decide which files should be cached so as to maximize the disk's usefulness. This design has two major benefits. It relieves the user of deciding what files should be kept in a small local file system. It also ensures that all modified files are migrated back to the file server in a timely fashion, thus eliminating the need to dump the local disk or push the files manually.
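One plausible sketch of this mechanism, with invented names (swapblk_alloc, cache_note_swap, and so on), is to push a reclaimed file page to an idle swap block so that a later reference to the same data can be satisfied from the local disk rather than across the network.

    struct vobject;
    struct page;

    extern long swapblk_alloc(void);   /* an idle local swap block, or -1 if none */
    extern void swap_write(long, struct page *);
    extern void cache_note_swap(struct vobject *, long off, long swapblk);
    extern void page_free(struct page *);

    void
    cache_reclaim(struct vobject *obj, long off, struct page *pg)
    {
        long blk = swapblk_alloc();

        if (blk != -1) {
            swap_write(blk, pg);              /* keep a copy on the local swap disk */
            cache_note_swap(obj, off, blk);   /* remember where the block went */
        }
        page_free(pg);                        /* the memory page rejoins the free pool */
    }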