The Third Generation (1965-1980): ICs and Multiprogramming

By the early 1960s, most computer manufacturers had two distinct, incompatible product lines. On the one hand there were the word-oriented, large-scale scientific computers, such as the 7094, which were used for numerical calculations in science and engineering. On the other hand, there were the character-oriented commercial computers, such as the 1401, which were widely used by banks and insurance companies for tape sorting and printing.

Developing and maintaining two completely different product lines was a costly proposition for the manufacturers. Moreover, many new computer customers initially needed a small machine but later outgrew it and wanted a bigger machine that would run all their old programs, but faster.

IBM attempted to solve both of these problems at a single stroke by introducing the System/360. The 360 was a series of software-compatible machines ranging from 1401-sized to much more powerful than the 7094. The machines differed only in price and performance (maximum memory, processor speed, number of I/O devices allowed, and so forth). Since all the machines had the same architecture and instruction set, programs written for one machine could run on all the others, at least in theory. In addition, the 360 was designed to handle both scientific (i.e., numerical) and commercial computing. In this way a single family of machines could meet the needs of all customers. In the years that followed, IBM came out with compatible successors to the 360 line, using more modern technology, known as the 370, 4300, 3080, and 3090. The zSeries is the most recent descendant of this line, although it has diverged significantly from the original.

The IBM 360 was the first major computer line to use (small-scale) ICs (Integrated Circuits), thus providing a major price/performance advantage over the second-generation machines, which were built up from individual transistors. It was an instant success, and the idea of a family of compatible computers was soon adopted by all the other major manufacturers. The descendants of these machines are still in use at computer centers today. These days they are often used for managing large databases (e.g., for airline reservation systems) or as servers for World Wide Web sites that must process thousands of requests per second.

The greatest strength of the "one family" idea was simultaneously its greatest weakness. The intention was that all software, including the operating system, OS/360, had to work on all models. It had to run on small systems, which often just replaced 1401s for copying cards to tape, and on very large systems, which often replaced 7094s for doing weather forecasting and other heavy computing. It had to work well on systems with few peripherals and on systems with many peripherals. It had to work in commercial environments and in scientific environments. Above all, it had to be efficient for all of these different uses.

There was no way that IBM (or anybody else) could write a single piece of software that met all those conflicting requirements. The result was an enormous and extraordinarily complex operating system, probably two to three orders of magnitude larger than FMS. It consisted of millions of lines of assembly language written by thousands of programmers, and it contained thousands upon thousands of bugs, which necessitated a continuous stream of new releases in an attempt to correct them. Each new release fixed some bugs and introduced new ones, so the number of bugs probably remained roughly constant over time.

One of the designers of OS/360, Fred Brooks, subsequently wrote a witty and incisive book (Brooks, 1996) describing his experiences with OS/360. While it would be impossible to summarize the book here, suffice it to say that the cover shows a group of prehistoric beasts stuck in a tar pit. The cover of Silberschatz et al. (2005) makes a similar point about operating systems being dinosaurs.

In spite of its enormous size and problems, OS/360 and the similar third-generation operating systems produced by other computer manufacturers actually satisfied most of their customers reasonably well. They also popularized several key techniques absent in second-generation operating systems. Probably the most important of these was multiprogramming. On the 7094, when the current job paused to wait for a tape or other I/O operation to complete, the CPU simply sat idle until the I/O finished. With heavily CPU-bound scientific calculations, I/O is infrequent, so this wasted time is insignificant. With commercial data processing, however, the I/O wait time can often be 80 or 90 percent of the total time, so something had to be done to avoid having the (expensive) CPU sit idle so much.

The solution that evolved was to partition memory into several pieces, with a different job in each partition. While one job was waiting for I/O to complete, another job could be using the CPU. If enough jobs could be held in main memory at once, the CPU could be kept busy nearly 100 percent of the time. Having multiple jobs safely in memory at once requires special hardware to protect each job against snooping and interference by the others, but the 360 and other third-generation systems were equipped with this hardware.
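A common back-of-the-envelope model, not part of the original discussion but a standard way to quantify the benefit, treats each job as waiting for I/O a fraction p of the time, independently of the others. With n jobs in memory, the CPU is idle only when all n are waiting at once, so utilization is roughly 1 - p^n. The short C sketch below tabulates this for the 80 percent I/O wait figure mentioned above; the model and its numbers are illustrative, not measurements.

    /* Back-of-the-envelope model of CPU utilization under multiprogramming.
     * Assumes each job independently spends a fraction p of its time
     * waiting for I/O; the CPU is idle only when all n jobs wait at once,
     * so utilization is approximately 1 - p^n. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double p = 0.80;    /* 80% I/O wait, typical of commercial jobs */
        for (int n = 1; n <= 6; n++)
            printf("n = %d jobs -> CPU utilization ~ %.0f%%\n",
                   n, 100.0 * (1.0 - pow(p, n)));
        return 0;
    }

Under this simple model, even five jobs in memory keep an 80-percent-I/O-bound workload's CPU busy only about two-thirds of the time, which suggests why these systems tried to hold as many jobs in memory as would fit.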

Another major feature present in third-generation operating systems was the ability to read jobs from cards onto the disk as soon as they were brought to the computer room. Then, whenever a running job finished, the operating system could load a new job from the disk into the now-empty partition and run it. This technique is called spooling (from Simultaneous Peripheral Operation On Line) and was also used for output. With spooling, the 1401s were no longer needed, and much carrying of tapes disappeared.
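As a purely illustrative sketch (all names invented here, and real systems used dedicated hardware channels rather than function calls), the heart of input spooling can be pictured as two cooperating routines sharing a job queue on disk: one copies each arriving card deck into the queue, and the other starts the oldest queued job whenever a memory partition becomes free.

    /* Toy sketch of input spooling; identifiers are hypothetical. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_JOBS 64

    static char spool[MAX_JOBS][32];   /* job queue held on disk */
    static int head = 0, tail = 0;

    /* Card-reader side: copy an arriving deck straight to the disk queue. */
    static void spool_job(const char *name) {
        if (tail - head < MAX_JOBS)
            strncpy(spool[tail++ % MAX_JOBS], name, 31);
    }

    /* Scheduler side: when a partition frees up, start the oldest queued job. */
    static const char *next_job(void) {
        return (head < tail) ? spool[head++ % MAX_JOBS] : NULL;
    }

    int main(void) {
        spool_job("payroll-run");
        spool_job("weather-model");
        spool_job("tape-sort");

        const char *job;
        while ((job = next_job()) != NULL)
            printf("loading %s into a free partition\n", job);
        return 0;
    }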

Although third-generation operating systems were well suited for big scientific calculations and massive commercial data-processing runs, they were still basically batch systems. Many programmers pined for the first-generation days when they had the machine all to themselves for a few hours, so they could debug their programs quickly. With third-generation systems, the time between submitting a job and getting back the output was often several hours, so a single misplaced comma could cause a compilation to fail and the programmer to waste half a day.

This desire for quick response time paved the way for timesharing, a variant of multiprogramming in which each user has an online terminal. In a timesharing system, if 20 users are logged in and 17 of them are thinking or talking or drinking coffee, the CPU can be allocated in turn to the three jobs that want service. Since people debugging programs usually issue short commands (e.g., compile a five-page procedure) rather than long ones (e.g., sort a million-record file), the computer can provide fast, interactive service to a number of users and perhaps also work on big batch jobs in the background when the CPU is otherwise idle. The first general-purpose timesharing system, CTSS (Compatible Time Sharing System), was developed at M.I.T. on a specially modified 7094 (Corbató et al., 1962). However, timesharing did not really become popular until the necessary protection hardware became widespread during the third generation.
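The scheduling idea can be caricatured in a few lines of C. In this toy version, a hypothetical sketch rather than anything resembling CTSS code, each logged-in user either is idle or needs some number of short time slices, and the system simply cycles over the terminals, handing a slice to each user who currently wants service.

    /* Toy round-robin over terminal users; all data is made up. */
    #include <stdio.h>

    #define USERS 20

    int main(void) {
        /* Time slices still needed per user; 0 means the user is idle
         * (thinking, talking, or drinking coffee, as in the text). */
        int slices_left[USERS] = {0};
        slices_left[2] = 3;    /* e.g., a short compilation */
        slices_left[7] = 1;
        slices_left[13] = 2;

        int active = 3;        /* three of the 20 users want service */
        while (active > 0)     /* keep cycling over the terminals */
            for (int u = 0; u < USERS; u++)
                if (slices_left[u] > 0) {
                    printf("time slice -> user %d\n", u);
                    if (--slices_left[u] == 0)
                        active--;
                }
        return 0;
    }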

After the success of the CTSS system, M.I.T., Bell Labs, and General Electric (then a major computer manufacturer) decided to embark on the development of a "computer utility," a machine that would support hundreds of simultaneous timesharing users. Their model was the electricity system: when you need electric power, you just stick a plug in the wall, and, within reason, as much power as you need will be there. The designers of this system, known as MULTICS (MULTiplexed Information and Computing Service), envisioned one huge machine providing computing power for everyone in the Boston area. The idea that machines 10,000 times faster than their GE-645 mainframe would be sold (for well under $1000) by the millions only 40 years later was pure science fiction. Sort of like the idea of supersonic trans-Atlantic undersea trains now.

MULTICS was a mixed success. It was designed to support hundreds of users on a machine only slightly more powerful than an Intel 386-based PC, although it had much more I/O capacity. This is not quite as crazy as it sounds, since people knew how to write small, efficient programs in those days, a skill that has since been lost. There were many reasons that MULTICS did not take over the world, not the least of which is that it was written in PL/1, and the PL/1 compiler was years late and barely worked at all when it finally arrived. In addition, MULTICS was enormously ambitious for its time, much like Charles Babbage's analytical engine in the nineteenth century.

To make a long story short, MULTICS introduced many seminal ideas into the computer literature, but turning it into a serious product and a major commercial success was a lot harder than anyone had expected. Bell Labs dropped out of the project, and General Electric quit the computer business altogether. However, M.I.T. persisted and eventually got MULTICS working. It was ultimately sold as a commercial product by the company (Honeywell) that bought GE's computer business and was installed at about 80 major companies and universities worldwide. While their numbers were small, MULTICS users were fiercely loyal. General Motors, Ford, and the U.S. National Security Agency, for instance, shut down their MULTICS systems only in the late 1990s, 30 years after MULTICS was released, after years of trying to get Honeywell to update the hardware.

Nowadays, the concept of a computer utility has fizzled out, but it may well come back in the form of massive centralized Internet servers to which relatively dumb user machines are attached, with most of the work happening on the big servers. The motivation here is likely to be that most people do not want to administer an increasingly complex and finicky computer system and would prefer to have that work done by a team of professionals employed by the company running the server. E-commerce is already evolving in this direction, with various companies running e-malls on multiprocessor servers to which simple client machines connect, very much in the spirit of the MULTICS design.

Despite its lack of commercial success, MULTICS had a huge influence on subsequent operating systems. It is described in several papers and a book (Corbató et al., 1972; Corbató and Vyssotsky, 1965; Daley and Dennis, 1968; Organick, 1972; Saltzer, 1974). It also had (and still has) an active website, located at www.multicians.org, with a great deal of information about the system, its designers, and its users.

Another major development during the third generation was the phenomenal growth of minicomputers, starting with the DEC PDP-1 in 1961. The PDP-1 had only 4K of 18-bit words, but at $120,000 per machine (less than 5 percent of the price of a 7094), it sold like hotcakes. For certain kinds of nonnumerical work, it was almost as fast as the 7094, and it gave birth to a whole new industry. It was quickly followed by a series of other PDPs (unlike IBM's family, all incompatible) culminating in the PDP-11.

One of the computer scientists at Bell Labs who had worked on the MULTICS project, Ken Thompson, subsequently found a small PDP-7 minicomputer that no one was using and set out to write a stripped-down, one-user version of MULTICS. This work later developed into the UNIX operating system, which became popular in the academic world, with government agencies, and with many companies.

The history of UNIX has been told elsewhere (e.g., Salus, 1994). Part of that story will be given in CASE STUDY 1: LINUX. For now, suffice it to say that because the source code was widely available, various organizations developed their own (incompatible) versions, which led to chaos. Two major versions developed: System V, from AT&T, and BSD (Berkeley Software Distribution), from the University of California at Berkeley. These had minor variants as well. To make it possible to write programs that could run on any UNIX system, IEEE developed a standard for UNIX, called POSIX, that most versions of UNIX now support. POSIX defines a minimal system call interface that conformant UNIX systems must support. In fact, some other operating systems now also support the POSIX interface.
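To see what this portability means in practice, here is a small, self-contained illustration (not taken from the POSIX standard itself): a file-copy program written against the basic POSIX calls open, read, write, and close. Because it relies only on the POSIX system call interface, the same source should compile and run unchanged on System V, BSD, Linux, MINIX, or any other conformant system.

    /* Copy a file using only standard POSIX system calls. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        if (argc != 3)
            return 1;

        int in = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0)
            return 1;

        char buf[4096];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)
            write(out, buf, n);    /* assumes the full buffer is written */

        close(in);
        close(out);
        return 0;
    }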

As an aside, it is worth mentioning that in 1987, the author released a small clone of UNIX, called MINIX, for educational purposes. Functionally, MINIX is very similar to UNIX, including POSIX support. Since that time, the original version has evolved into MINIX 3, which is highly modular and focused on very high reliability. It can detect and replace faulty or even crashed modules (such as I/O device drivers) on the fly, without a reboot and without disturbing running programs. A book describing its internal operation and listing the source code in an appendix is also available (Tanenbaum and Woodhull, 2006). The MINIX 3 system is available for free (including all the source code) over the Internet at www.minix3.org.

The desire for a free production (as opposed to educational) version of MINIX led a Finnish student, Linus Torvalds, to write Linux. This system was directly inspired by and developed on MINIX, and it originally supported various MINIX features (e.g., the MINIX file system). It has since been extended in many ways but still retains some of the underlying structure common to MINIX and to UNIX. Readers interested in a detailed history of Linux and the open source movement might want to read Glyn Moody's (2001) book. Most of what will be said about UNIX in this blog thus applies to System V, MINIX, Linux, and other versions and clones of UNIX as well.


Tags

operating system, multiprogramming, unix