A Brief History of Small Computers

"History is made by people; it is made up of the events and inventions of people. The history of computing is especially affected by the visions of a few people. These people believe strongly in the significant technological changes that are possible. They strive to understand how computational and communications power can be made accessible to people in scientific laboratories, professional offices, hospitals, schools, and homes."

A History of Personal Workstations, Adele Goldberg (ed.), ACM Press, 1988.

There are many things which people have called "computers"; there is no one fixed definition of what constitutes a computer. Computers have long been used by man to assist in mathematical operations. Although history books will reveal a wide range of mechanical computing devices (based on pulleys, levers, cams, etc), most people now think of computers as electronic devices. We can define a computer as:

"A computer is a machine which can accept data, process the data and supply the results. The term is used for any computing device that operates according to a stored program."

The practical development of the electronic computer started in the UK in the 1940s with the development of a code-breaking computer (Colossus), which was designed to break the Lorenz teleprinter cipher used by the German high command. Interestingly, the cipher was an extremely complex code generated by the electro-mechanical operation of a typewriter-like machine, but a programmable electronic machine (using some 1,500 electronic vacuum valves) was required to search through the very large number of possible settings and crack the code using statistical techniques. The technology was so secret that, immediately following the close of the Second World War, the computer was quite literally broken up and the expertise scattered. A museum at Bletchley Park (a Victorian mansion in the south of England) is now attempting to reconstruct Colossus.

It was not until the early 1950s that the electronic computer appeared for commercial applications. A notable early "commercial" system was the Manchester Mk I (developed at Manchester University in the late 1940s). The first computers were huge, filling a whole floor of a large building, and fairly unreliable, prone to mechanical failures, electrical problems and programming errors. They could operate on one program at a time, and the way in which a program was designed was very much tied to the "hardware", or design of the circuitry. Programs (also known as "software") consisted of a series of "machine code" instructions stored in binary (base 2), sometimes written in octal (base 8) or hexadecimal (base 16). Programs were not usually very big, consisting of hundreds to thousands of separate instructions, and were very simple compared with modern programs.

To allow more rapid development of programs, and specifically to allow large programs to be written by groups of programmers, programs began to be written which helped people to write the machine code. The most useful of these was the "assembler", which allowed the programmer to write each machine code instruction as a line of alphanumeric characters (i.e. using a typewriter-style keyboard). Each line usually represented one machine code instruction, but instead of writing the numeric value of the instruction, a mnemonic (which was easier to remember) was used. Programmers could also use "labels" to identify the location of a "variable" piece of data in computer memory. Assemblers automated various mundane tasks, such as working out which memory locations should be used to store the various parts of the program, and how to organise the program so that commonly required tasks could be performed by "procedures" which were written once but executed many times. Gradually the task of programming moved away from the field of mathematics and the job of the "computer programmer" emerged.
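
To give a flavour of what an assembler does (a sketch only: the mnemonics, opcode values and 16-bit instruction format below are invented for this illustration, not taken from any real machine), the following C fragment translates a few mnemonic lines into numeric machine-code words:

    /* Toy assembler sketch: translate mnemonic lines such as "ADD 8" into
       16-bit machine-code words (4-bit opcode + 12-bit address). */
    #include <stdio.h>
    #include <string.h>

    struct op { const char *mnemonic; unsigned opcode; };

    static const struct op table[] = {
        { "LOAD", 0x1 }, { "ADD", 0x2 }, { "STORE", 0x3 }, { "JNZ", 0x4 }, { "HALT", 0x0 }
    };

    int main(void)
    {
        /* The "source program": one mnemonic and one operand address per line. */
        const char *source[] = { "LOAD 7", "ADD 8", "STORE 9", "HALT 0" };

        for (size_t i = 0; i < sizeof source / sizeof source[0]; i++) {
            char mnemonic[8];
            unsigned address;
            if (sscanf(source[i], "%7s %u", mnemonic, &address) != 2)
                continue;                               /* skip malformed lines */
            for (size_t j = 0; j < sizeof table / sizeof table[0]; j++) {
                if (strcmp(mnemonic, table[j].mnemonic) == 0) {
                    /* Pack the opcode and the operand address into one word. */
                    unsigned word = (table[j].opcode << 12) | (address & 0xFFFu);
                    printf("%-8s -> %04X\n", source[i], word);
                    break;
                }
            }
        }
        return 0;
    }

A real assembler would also build a symbol table, so that labels could stand in for numeric addresses and be filled in on a second pass.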

The core of a computer became known as the Central Processing Unit (CPU). It contained the circuitry to "fetch" each machine code instruction from memory and execute it (an operation that became known as "dispatch"). Parts of the CPU also handled mathematical operations on integers (such as shifting, addition, multiplication and division). Another part of the CPU allowed the program to "loop" (execute a series of instructions a set number of times) or to select alternative instructions (known as "branching"), so that the pattern of execution could change depending on the results of previous calculations. Later, floating-point instructions were added in the form of a floating-point unit, to speed up the execution of programs manipulating floating-point ("real") numbers.
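
The fetch-execute cycle itself can be sketched in a few lines of C. The imaginary accumulator machine below (its opcodes, instruction format and program are invented purely for illustration) repeatedly fetches an opcode and an operand address, "dispatches" on the opcode, and uses a conditional branch to form a loop:

    /* Sketch of a fetch-decode-execute loop for an imaginary accumulator machine. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_HALT, OP_LOAD, OP_ADD, OP_SUB, OP_STORE, OP_JNZ };   /* toy opcodes */

    int main(void)
    {
        /* Data memory: counter = 5, step = 1, sum = 0, constant 1. */
        uint8_t data[4] = { 5, 1, 0, 1 };

        /* Program memory: each instruction is an opcode followed by a data address.
           The program adds 'step' to 'sum' five times by counting 'counter' down to 0. */
        uint8_t program[] = {
            OP_LOAD, 2,  OP_ADD, 1,  OP_STORE, 2,    /* sum = sum + step               */
            OP_LOAD, 0,  OP_SUB, 3,  OP_STORE, 0,    /* counter = counter - 1          */
            OP_JNZ,  0,                              /* branch back while counter != 0 */
            OP_HALT, 0
        };

        uint8_t acc = 0;   /* accumulator register */
        size_t  pc  = 0;   /* program counter      */

        for (;;) {
            uint8_t op  = program[pc];        /* fetch the opcode ...            */
            uint8_t arg = program[pc + 1];    /* ... and its operand address     */
            pc += 2;
            switch (op) {                     /* decode and execute ("dispatch") */
            case OP_LOAD:  acc = data[arg];                    break;
            case OP_ADD:   acc = (uint8_t)(acc + data[arg]);   break;
            case OP_SUB:   acc = (uint8_t)(acc - data[arg]);   break;
            case OP_STORE: data[arg] = acc;                    break;
            case OP_JNZ:   if (acc != 0) pc = arg;             break;   /* conditional branch */
            case OP_HALT:  printf("sum = %u\n", data[2]);      return 0;
            }
        }
    }

Real CPUs refine this basic cycle with more registers, interrupts and (much later) pipelining, but the fetch-dispatch-execute pattern remains the same.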

The CPU contained enough memory (storage registers) to allow the computer to operate, but the program and data were generally too large to fit within the CPU at one time. They were therefore stored in an external memory unit. At first the amount of memory available was very small, and programs had to be written carefully to ensure they fitted in the available memory. As computers developed, the amount of memory increased from thousands of bytes (KB) to thousands of millions of bytes (GB).

When a large set of data was required (e.g. to store a company payroll list, or a catalogue of components), a secondary level of memory was needed - usually using magnetic media. Storing data on magnetic media has the advantage that the data is not (usually) lost when the computer is powered down. Magnetic disks and tapes continue to be used to store data in modern computer systems, but magnetic cards and magnetic drums have long since disappeared. Input to computers was normally performed "off-line" by preparing the data using a tele-printer (or "teletype") with a paper-tape punch (which looked like a typewriter), or punched cards (which were more common when the same data was to be used a number of times). Output was to a tele-printer, line-printer, magnetic tape drive, electro-mechanical card punch or 8-hole paper tape. All of these devices were called "peripherals" and were housed in separate rooms at the edge (or on the "periphery") of the main computer room. The term peripheral continues to be used in modern computers to describe any piece of equipment housed in a separate box from the main computer.

Oblong punched cards were common for entering data into computers until the late 1970s. A punched card was usually organised in 80 columns with 12 punch positions in each column. One column was used to represent each character (letter, number or symbol). Cards were punched using a machine with a typewriter keyboard. A range of electro-mechanical machines was designed to automate the processing of punched cards. The basic design of these pieces of equipment may be traced to the work of an American, Herman Hollerith, who introduced a series of machines from 1889 to sort, tabulate and print sets of punched cards. Such machines remained in regular use until the 1950s. Computer card readers used either an array of 12 electrical contacts or optical photo-cells to detect the pattern of holes.
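
As a rough illustration of how a card reader's output might be interpreted, the sketch below decodes one 12-bit column using the basic Hollerith zone/digit scheme for letters and digits only; the bit layout and function name are invented for this example, and real card codes defined many more punctuation combinations:

    /* Decode one punched-card column (12 punch positions) into a character.
       Bit 11 = row 12 (top zone), bit 10 = row 11, bit 9 = row 0, bits 8..0 = rows 1..9. */
    #include <stdio.h>

    static char decode_column(unsigned holes)
    {
        unsigned zone12 = (holes >> 11) & 1u;
        unsigned zone11 = (holes >> 10) & 1u;
        unsigned zone0  = (holes >> 9)  & 1u;

        int digit = 0;                           /* which of rows 1..9 is punched */
        for (int row = 1; row <= 9; row++)
            if ((holes >> (9 - row)) & 1u)
                digit = row;

        if (!zone12 && !zone11 && !zone0)        /* digit rows only               */
            return digit ? (char)('0' + digit) : ' ';
        if (zone0 && digit == 0)  return '0';    /* row 0 alone                   */
        if (zone12 && digit)      return (char)('A' + digit - 1);   /* rows 12 + 1..9 = A..I */
        if (zone11 && digit)      return (char)('J' + digit - 1);   /* rows 11 + 1..9 = J..R */
        if (zone0  && digit >= 2) return (char)('S' + digit - 2);   /* rows 0  + 2..9 = S..Z */
        return '?';                              /* combination not handled here  */
    }

    int main(void)
    {
        /* 'A' = rows 12 and 1 punched; '7' = row 7 alone. */
        printf("%c%c\n", decode_column((1u << 11) | (1u << 8)), decode_column(1u << 2));
        return 0;
    }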

Bus connecting CPU, Memory and I/O Controllers

All the components of a computer are linked by parallel sets of wires known as "buses" (a shortened form of "bus bar", a term borrowed from electrical engineering). A bus connects the CPU to the memory and also to the peripherals via "controllers". Originally one controller was associated with each peripheral (e.g. a disk controller controlled the hard disk and a terminal controller controlled each attached tele-printer). Sometimes a controller was used to control a peripheral bus to which several peripherals were simultaneously connected (as is common in modern computers). Modern controllers are very sophisticated and often contain microprocessors to optimise the performance of the peripherals to which they connect.

Simple CPU architecture

Over the years, partly as a result of growing expertise, but mainly due to the invention of the transistor, the reliability and sophistication of computers increased. Computers began to be classified as "mainframes", housed in large purpose-built computer rooms, and smaller "mini-computers" consisting of a few racks of equipment (e.g. the PDP-8 of 1965). Computers began to make use of small- and medium-scale integrated circuits, where several transistors were made on the same piece of silicon, creating a chip which performed a "building block" function such as a shift register, bus driver or adder. The main memory in the computer was still magnetic core memory, and remained so until large semiconductor memory chips could be reliably produced. Computers were made by many companies, but IBM (International Business Machines) dominated the world-wide mainframe computer market.

As semiconductor chips became more sophisticated and computing power increased, the reliability of computers also improved. Computers became more widely used, and "higher level" programming languages evolved. These languages were not tied to particular models of computer; instead the same program could be used on any computer which supported the "language". Languages were usually linked to specific problem domains: Fortran for scientific work, Cobol for business applications, Lisp for list processing, Snobol for processing text, Simula for simulation, etc. Some of these languages survive to this day, but for many applications they have been replaced by general-purpose languages such as Algol (now infrequently used), Basic and C, and later Pascal and C++. A wide variety of specialist, more abstract languages are also used for specific tasks.

Compilation of High-Level code to assembler or executable object code.

It was not until the 1960s that the first graphics displays emerged, so that computer users could see the "output" of their work without having to print it (or save it to magnetic or paper tape for later printing). At first graphics terminals were not widely available and were very expensive; most programmers continued to use punched cards until the 1970s. The modern personal desktop computer did not emerge, even in computer research laboratories, until the 1970s. Pioneering work at the Xerox Palo Alto Research Center (PARC), near San Francisco, USA, demonstrated for the first time a powerful combination of components: bit-mapped graphics displays, the mouse (rather than the keyboard alone), and the local area network (which linked computers to one another and to shared printers). For the first time, researchers had a complete computer on (or beside) their desk for their own use. Word-processors, hypertext (the forerunner of the World Wide Web) and graphical user interfaces all began to be experimented with. Similar work was also being done in other computer research labs, but it was all based on very expensive purpose-built computers.

Advances in electronics manufacture finally led to the mass production of cheap semiconductor memory and microprocessors (the Intel 4004 in 1971). The annual doubling of the complexity of silicon circuits was first predicted by Dr Gordon Moore (then research director of Fairchild) in 1965, and has become widely accepted in the semiconductor industry as "Moore's Law".

By 1975, microprocessors (e.g. the 8080 and 6502) had been developed, typically used with 4-8 KB of memory. This formed the basis of a new industry, leading to the emergence of home (personal) computers. The Altair of 1975 was probably the first home computer (available in kit form for people to build themselves). It was followed by the PET (Personal Electronic Transactor), mass-manufactured by Commodore. Apple introduced its first computer in 1976, followed a few years later by the IBM PC. There followed a wealth of alternative designs. Word-processing and spreadsheets emerged as important business applications. The power of such microprocessors was roughly equivalent to the minicomputers of not so many years previously, but mass production brought the price to a level affordable by individual people.

By the mid 1970s microprocessors had begun to be used in a wide variety of applications beyond those previously associated with computing. Many pieces of equipment used "microcontrollers", a single chip combining most of the features required to build a complete computer. Microcontrollers ranged from simple devices, such as the PIC, to very sophisticated products resembling personal computers but without support for computing peripherals. These microcontrollers were usually based on successful microprocessor designs (and generally followed the evolution of the microprocessor, lagging a few years behind in terms of sophistication). As costs fell dramatically and processing power increased in the 1990s, they eventually found their way into most laboratory, consumer and industrial electronic systems.

Date   Main-Frame   Super Computer   Mini-Computer   Workstation          Microprocessor   Personal Computer
1960   1400                          PDP-1
1965   IBM-360                       PDP-8
1970   IBM-370                       Nova / PDP-11
1975                Cray-1           PDP-11/70                            6800/Z80/6502
1980   IBM 40XX                      VAX                                  68000            PC / Apple II
1985                                                 Sun3, Apollo                          Mac / LaserPrinter
1990                                                 MIPS, Sun4 (SPARC)                    PowerPC
1995                                                 SPARC64, Alpha                        Lap-tops
2000                                                 Sparc Ultra                           Pentium II / PowerPC

The first workstations appeared in 1981, and filled the gap between the personal computer and the larger corporate computers, eventually replacing the mini-computer. Using a workstation, people had the computing power of a minicomputer available on their desktop. Workstations manufactured by Apollo, Sun and Xerox had large (19") bit-mapped displays, were networked using a Local Area Network (LAN) and typically had 4-8 MB of main memory. They typically used a UNIX-like operating system. UNIX had been designed in the 1970s as an "open", standard way of allowing a single mini-computer to do many tasks simultaneously by quickly switching from one task to another and back again. This let several people share the use of one (expensive) computer. The workstation turned this around, by letting one person do several tasks at one time using their own computer. A typical workstation in the early 1980s cost £10,000 - £100,000.

There was a key breakthrough in personal computing in 1984, when Apple introduced the Macintosh: the first personal computer to combine a solid workstation-style operating system with a bit-mapped graphics display, built-in networking and a user interface based on mouse, menus and windows. Apple recognised that the personal computer was becoming powerful enough to support many of the user-interface features previously only available on workstations. With 128 KB of memory and a 68000 microprocessor, the original Mac fitted in a 3/4 cubic foot box. Desktop publishing was possible using the "LaserWriter" - a mass-produced printer utilising photo-copier technology to produce high-quality graphics.

Although there were many personal computers in the 1980s, only two basic designs survived in appreciable numbers: the Macintosh and the IBM PC. Apple was predominantly the manufacturer of the "Mac", while the PC design became public and a large number of companies competed in the IBM-clone PC market. This competition survives to this day. Over the following decade, most personal computers adopted the design approach of the Macintosh, largely through the development of "Windows" by Microsoft. Although superficially similar, the two designs of computer operate in fundamentally different ways.

One further significant development occurred in the early 1990s. Up until this point, there had been a general increase in the number of instructions supported by a central processor. Each generation of CPU would add a few more instructions, usually to better support some part of a high-level programming language (e.g. to provide more efficient procedure calls). Two problems emerged: the design of the CPU was becoming extremely complex, and it was also becoming difficult to predict which instructions actually provided the most benefit. A revolution in design began to be suggested: the Reduced Instruction Set Computer (RISC). There was a return to the much smaller instruction sets employed by the early minicomputers, but execution of each instruction was of course much faster. The instruction sets were often not all that different from those of previous-generation computers.

At first there was little difference in processing power compared with the Complex Instruction Set Computer (CISC), but gradually RISC processors introduced sophisticated dispatch units which were able to process several instructions at the same time. This, coupled with memory management systems (originally developed for mainframes), allowed the RISC processor to dominate the workstation market. Apple followed the trend by introducing the PowerPC RISC processor in its personal computers. Most modern personal computers still use the CISC approach, but most microcontrollers use a RISC architecture.

By the 1990s, the computer world had become divided into five sectors: mainframes, supercomputers, workstations, personal computers, and embedded systems built around microcontrollers and digital signal processors.

The basic operation of computers has changed little over the past 50 or so years. The details of the design of modern computers have, however, changed significantly. Techniques originally developed for large mainframe computers (e.g. caching, virtual memory, pipelining, multi-processing) have now become common in both workstations and PC/Mac computers. The evolution of computers continues: as technology advances, applications pioneered on workstations and supercomputers become available to the majority of computer users with PCs/Macs.

The price of microcontrollers and digital signal processors has steadily fallen to the point where low-cost units are comparable in price to other logic ICs, and they form the basis of the mass market in consumer electronics.

Prototype DMX receiver based on an Atmel AVR microcontroller, developed at the University of Aberdeen.

