The introduction of the transistor in the mid-1950s changed the picture drastically. Computers became reliable enough that they could be manufactured and sold to paying customers with the hope that they would continue to function long enough to get some useful work done. For the first time, there was a clear separation between designers, builders, operators, programmers, and maintenance personnel.
These machines, now called mainframes, were locked away in specially air-conditioned computer rooms, with teams of professional operators to run them. Only large corporations, major government agencies, or universities could afford the multimillion-dollar price tag. To run a job (i.e., a program or set of programs), a programmer would first write the program on paper (in FORTRAN or assembler), then punch it on cards. He would then bring the card deck down to the input room, hand it to one of the operators, and go drink coffee until the output was ready.
When the computer completed whatever job it was currently running, an operator would go over to the printer and tear off the output and carry it over to the output room, so that the programmer could collect it later. Then he would take one of the card decks that had been brought from the input room and read it in. If the FORTRAN compiler was needed, the operator would have to get it from a file cabinet and read it in. Much computer time was wasted while operators were walking around the machine room.
Given the high cost of the equipment, it is not surprising that people quickly looked for ways to reduce the wasted time. The solution generally adopted was the batch system. The idea behind it was to collect a tray full of jobs in the input room and then read them onto a magnetic tape using a small, comparatively inexpensive computer, such as the IBM 1401, which was quite good at reading cards, copying tapes, and printing output, but not at all good at numerical calculations. Other, much more expensive machines, such as the IBM 7094, were used for the real computing. This arrangement is shown in the following figure.
After about an hour of collecting a batch of jobs, the cards were read onto a magnetic tape, which was carried into the machine room, where it was mounted on a tape drive. The operator then loaded a special program (the ancestor of today's operating system), which read the first job from tape and ran it. The output was written onto a second tape, instead of being printed. After each job completed, the operating system automatically read the next job from the tape and began running it. When the whole batch was done, the operator removed the input and output tapes, replaced the input tape with the next batch, and brought the output tape to a 1401 for printing off line (i.e., not connected to the main computer).
The structure of a typical input job is shown in the following figure. It started out with a $JOB card, specifying the maximum run time in minutes, the account number to be charged, and the programmer's name. Then came a $FORTRAN card, telling the operating system to load the FORTRAN compiler from the system tape. It was directly followed by the program to be compiled, and then a $LOAD card, directing the operating system to load the object program just compiled. (Compiled programs were often written on scratch tapes and had to be loaded explicitly.) Next came the $RUN card, telling the operating system to run the program with the data following it. Finally, the $END card marked the end of the job. These primitive control cards were the ancestors of modern shells and command-line interpreters.
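The control-card sequence described above can be pictured as a simple interpreter loop: the monitor reads cards one at a time and dispatches on those beginning with `$`. The following Python sketch is purely illustrative (FMS was of course not written in Python, and the handler actions here are hypothetical placeholders); it only shows the dispatch structure such a monitor would have had.

```python
# Illustrative sketch of a batch monitor's card-reading loop.
# Card names ($JOB, $FORTRAN, $LOAD, $RUN, $END) follow the text;
# the actions are simulated with log messages, not real tape I/O.

def run_batch(cards):
    """Process one job deck, card by card, as the ancestor of the
    operating system read jobs from the input tape."""
    log = []
    for card in cards:
        if card.startswith("$JOB"):
            # record run-time limit, account number, programmer's name
            log.append("accounting info: " + card[len("$JOB"):].strip())
        elif card == "$FORTRAN":
            # would load the FORTRAN compiler from the system tape
            log.append("compiling the source cards that follow")
        elif card == "$LOAD":
            # would load the just-compiled object program from scratch tape
            log.append("loading object program")
        elif card == "$RUN":
            # would transfer control to the user program
            log.append("running program with the data cards that follow")
        elif card == "$END":
            log.append("job finished")
            break
        # any other card is source or data, consumed by the
        # compiler or the running program, not by the monitor
    return log

# A deck laid out exactly as in the text:
deck = [
    "$JOB 10,429754,J PROGRAMMER",
    "$FORTRAN",
    "      PRINT *, 'HELLO'",
    "$LOAD",
    "$RUN",
    "DATA CARD 1",
    "$END",
]
for line in run_batch(deck):
    print(line)
```

The point of the sketch is that the monitor itself understood only the `$` cards; everything between them was passed through untouched, which is exactly the division of labor a modern shell has with the programs it launches.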
Large second-generation computers were used regularly for scientific and engineering calculations, such as solving the partial differential equations that often occur in physics and engineering. They were largely programmed in FORTRAN and assembly language. Typical operating systems were FMS (the Fortran Monitor System) and IBSYS, IBM's operating system for the 7094.