A journey through the history of video games and interactive digital entertainment.

Archive for the ‘Prehistory’ Category

1940: Nimatron

The Nimatron is an early relay-based computer game machine, although its impact on later developments in digital computers and computer games was, at best, negligible. It was devised by nuclear scientist Dr. Edward Uhler Condon in the winter of 1939/1940, and realised with assistance from Gerald L. Tawney and Willard A. Derr.

Condon had been appointed the Associate Director of Research at Westinghouse Electric in 1937. His plan to implement the game Nim in circuits came from the realisation that the same scaling circuits used by Geiger counters could be used to represent the numbers defining the game’s state (for more information about the game Nim, see the next article). The project was supposed to liven up Westinghouse’s public exhibition during the second year of the 1939–1940 New York World’s Fair. (The game was later on exhibit at the Buell Planetarium in Pittsburgh.)

The game was housed in a large cabinet with an elevated box, on all four sides of which a series of lightbanks displayed the current game state to the audience. Patrons were challenged to beat the machine, using buttons to choose how many lights to extinguish from which lightbank (equivalent to picking up matches in traditional Nim) before giving the machine its turn. Condon recollected that about 50,000 people might have played the game during the 6 months it was on display, although few ever beat it.

A very interesting side-note is that the machine had a built-in delay before it made its moves. It was thought that human players, who had to think before making their next move, would feel embarrassed if the machine in turn took only a fraction of a second for its own decisions. The machine was made to look like it had to think about its move for a few seconds, in order to not insult its human opponents. Condon figured that this might have been the first deliberate slowing-down of a computer.

Since simple strategies allow playing a perfect game of Nim, it was also decided to have the machine play only a certain number of predetermined games, so that there would be some chance of beating it. Players who were adamant that the machine could not be beaten were proven wrong by demonstrations from operators who had learned the games by heart.

Despite its success as a public attraction, Condon considered the machine the biggest failure of his career—because he did not realise the underlying potential. The patent filed included a description of the internal representation of numbers, a concept which proved to be universally important in the computer revolution that was just around the corner. Since the Nimatron was only ever built as an amusement device, neither Condon nor other Westinghouse managers realised what they had on their hands. The machine, now missing and probably long disassembled, was almost completely forgotten.

Next page: 1941 – Nim machine
« Return to Part I – Prehistory

References

1941: Nim machine

Only rediscovered quite recently, a device designed by Dr. Raymond M. Redheffer at M.I.T. in 1941 and improved upon in 1942 could very well be the first instance of interactive electronic entertainment ever devised. The machine, not publicly demonstrated until 1948, played the traditional game of Nim perfectly against a human player.

Nim is a very simple combinatorial game that was the focus of several early experiments related to computer and video games (see also the earlier Nimatron), because an automatic opponent can be so easily implemented. At the beginning of a round there is a number of objects—usually matches—from which players take turns removing some. Depending on the rules played, the object is either to be the player to remove the last object, or to force the other player to do so (i.e., leaving only one match on the table for the other player’s turn). Commonly played setups include the objects being divided amongst three piles, with players taking as many objects as they want from one single pile per turn; or all objects (usually 21) being on a single pile, with players taking between 1 and 3 per turn.

Once understood, the game’s predictability means that it is easy to play a perfect strategy by always striving for “safe positions”. Given a specific initial setup and both players knowing this strategy, the matter of which player will win is simply a question of which player is allowed to take the first turn. While this simplicity led to Nim being the first game that humans were able to play against an automatic device, it also means that it loses its charm quite quickly. A player either knows the perfect strategy, in which case the games are entirely predictable, or they don’t, in which case they will always lose until they learn it.
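The “safe positions” mentioned above have a well-known characterisation: in the multi-pile variant under normal play (the player taking the last object wins), a position is safe exactly when the bitwise XOR of the pile sizes—the so-called nim-sum—is zero. A minimal Python sketch of the perfect strategy (the function names are our own, purely illustrative):

```python
from functools import reduce

def nim_sum(piles):
    """Bitwise XOR of all pile sizes; zero marks a 'safe position'."""
    return reduce(lambda a, b: a ^ b, piles, 0)

def perfect_move(piles):
    """Return (pile_index, new_pile_size) reaching a safe position,
    or None if the position is already safe (the player to move is
    lost against perfect play)."""
    s = nim_sum(piles)
    if s == 0:
        return None
    for i, p in enumerate(piles):
        if p ^ s < p:  # shrinking this pile to p XOR s zeroes the nim-sum
            return (i, p ^ s)

print(perfect_move([3, 4, 5]))  # -> (0, 1), since 1 ^ 4 ^ 5 == 0
```

This is why operators who knew the strategy (or, like the Nimatron’s, had memorised the games) could win on demand: from any unsafe position there is always a move back to a safe one.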

It is the simple mathematical description of the game that led Dr. Redheffer, as he states in his 1948 article in The American Mathematical Monthly (Vol. 55 No. 6, Jun/Jul 1948), to seek to implement it in an electronic circuit. The resulting device, using only electrical switches, was presented at various science fairs and open house exhibitions.

All but forgotten after the early 1950s, the actual device was apparently rediscovered by American toy designer and collector Mike Mozart in 2010, who recognised its historical significance. While not identified beyond any doubt, the exact correspondence between the device and Dr. Redheffer’s 1948 article, as well as the lack of almost any information about it before Mozart’s discovery, suggests that it is in fact the original “machine for playing the game Nim” built at M.I.T. in 1941–1942.

« Return to Part I – Prehistory

References

1936/7: Turing and Shannon

The 1930s saw the publication of two complementary landmark papers in the early history of computer science. One lays the theoretical groundwork for computers as general-purpose machines capable of executing any algorithm. The other proposes the application of electrical (relay) circuits to problems of symbolic logic.

In his 1936 paper On Computable Numbers, with an Application to the Entscheidungsproblem, English mathematician Alan Mathison Turing introduced the concept of a simple, hypothetical “automatic machine”, later named a (universal) Turing machine. He proved such a machine to be capable of computing anything that is computable, using only a read/write head performing elementary operations (read, write, move) over an infinite memory, governed by strictly following a set of atomic rules. Since Turing showed that all computable algorithms can be reduced to such a set of rules for his conceptual machines, they serve as the basic theoretical model for a digital general-purpose computer—the ultimate hypothetical computing machine. Such machines are said to be Turing-complete, a label that applies to all modern computers.
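The rule-following machine Turing described is simple enough to sketch in a few lines. The following toy simulator and rule table are our own illustration (not Turing’s notation): a machine that inverts a binary string, symbol by symbol, then halts at the first blank.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine. `rules` maps (state, symbol)
    to (next_state, symbol_to_write, move), with move 'L' or 'R'.
    A dict from position to symbol stands in for the infinite tape."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = rules[(state, cells.get(pos, blank))]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A toy rule table of our own: flip every bit, halt at the first blank.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(invert, "10110"))  # -> 01001
```

Every step reads one symbol, writes one symbol, and moves one cell—exactly the elementary operations Turing showed to be sufficient for all computation.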

Turing applied these ideas to show that there can be no general solution to the Entscheidungsproblem, which asks for an algorithm that can determine the truth or falsity of any mathematical statement. He did so by showing that his conceptual machines cannot solve the equivalent halting problem: determining whether any other automatic machine, given its program (i.e. set of rules), will ever terminate. (The gist of it: in order to examine the machine and program to be tested, the testing machine would essentially need to emulate, or re-enact, all the steps it performs—so, if the process being examined never reaches a conclusion, neither will the process that examines it.)

These important theoretical results, and their very practical applications to the design of digital computers and programming languages, have earned Alan Turing the honourable title of “father of computer science”. Interestingly, few people recognised the importance of the work at the time of its publication. The Turing Award, the so-called “Nobel Prize of computing”, is named in his honour, and awarded annually to the most influential computer scientists.

In 1937, at the Massachusetts Institute of Technology (MIT), Claude Elwood Shannon completed his master’s thesis, of which a portion was published as a paper entitled A Symbolic Analysis of Relay and Switching Circuits in 1938. In it, Shannon proposed to consider relay switching networks as Boolean expressions, where open and closed relays represented Boolean values of 0 and 1, respectively. Boolean algebra could then be used to simplify existing arrangements, or find the most efficient implementation of new requirements. However, this is overshadowed by the implications of the reverse operation: modelling Boolean expressions as networks of relays, which could then be used to solve logic problems. This is the foundation of digital circuit design, and by extension all of electronic digital computing.
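Shannon’s correspondence can be illustrated directly: contacts wired in series conduct only when all are closed (Boolean AND), while contacts in parallel conduct when any one is closed (OR). A small Python sketch—the specific networks are chosen purely as an example—shows two differently wired networks that Boolean algebra proves interchangeable:

```python
from itertools import product

def series(*contacts):
    """Contacts in series conduct only if every one is closed (AND)."""
    return all(contacts)

def parallel(*contacts):
    """Contacts in parallel conduct if any one is closed (OR)."""
    return any(contacts)

# The network "a in series with (b parallel c)" realises a AND (b OR c).
# The distributive law predicts it behaves identically to the network
# "(a series b) parallel (a series c)" -- so the cheaper wiring (three
# contacts instead of four) can replace the more expensive one.
for a, b, c in product((False, True), repeat=3):
    assert series(a, parallel(b, c)) == parallel(series(a, b), series(a, c))
print("both wirings agree on all eight input combinations")
```

Simplifying circuits this way—and, conversely, wiring up arbitrary Boolean expressions—is exactly the two-way mapping Shannon’s thesis established.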

The same principle had already been described two years prior, in a 1935 thesis by Russian electrical engineer Victor Shestakov. However, Shestakov’s work was not published until 1941, so the credit for creating a sound theoretical basis for digital circuit design went to Shannon instead. Experimental designs in digital circuitry had been realised earlier, but in a rather limited and ad hoc fashion. Shannon’s way of thinking paved the way for more complex logic circuits, translated to vacuum tubes and transistors, eventually leading to today’s incredibly complex integrated circuits (ICs).

Shannon continued to expand on his ideas, earning the title “father of information theory” by almost single-handedly creating the eponymous new field of science with the publication of another paper in 1948 (a separate article on the topic will follow).

Alan Turing and Claude Shannon knew each other, having met and exchanged ideas on numerous occasions.

Next page: 1940 – Nimatron
« Return to Part I — Prehistory

References

1934: Arcade machines

Arcade amusement machines had been highly popular ever since the late 19th century. Visitors to malls, drug stores, hotel lobbies, boardwalks, amusement parks, and arcade halls could get a variety of services and diversions in exchange for their pennies, nickels, and dimes. Shopkeepers considered them “silent salesmen” that not only lured patrons into their stores, but also earned them a nice additional income, all for the price of some light machine maintenance.

Early, pre-electronic arcade machines came in many different forms. People could use a penny scale to learn their weight, or receive a printed truism from a fortune teller machine. Strength testers encouraged friendly physical competition, while automatic musical instruments and early jukeboxes let patrons set the mood for a price. Some machines sold postcards and stamps, ball gum, hot peanuts, or cigarettes. Others combined several popular functions, such as scales that would also tell your fortune, or earn you your penny back should you have guessed your weight correctly. And let’s not forget slot machines or the early electro-mechanical pinball games either.

The imagination of manufacturers seemed limitless, and hundreds of new machines were released every year in North America and Western Europe. Some manufacturers, such as the Mills Novelty Co. out of Chicago or the Caille Bros. in Detroit, enjoyed decades of great financial success, while countless others had to close shop after selling only a handful of their first design. So why mention 1934 in particular? Because two machines documented to have been released that year reminded me, more than those before them, of the arcade videogames that would start to appear almost 40 years later: a racing game and a flight simulator.

Based in Paris, France, manufacturer Autoreflex made their debut with a machine called Le parcours automobile, which quite possibly is the first racing game ever created. The machine’s cabinet housed a roll of paper, painted with a road and obstacles. A small model car was mounted on a rod that ran across the top of it. The car could be moved left and right by turning a big steering wheel mounted at the front. After a coin was inserted, the paper roll would start turning, and the player had to move the car to keep it on the road and avoid any obstacles. The same simple principle was implemented by the early racing videogames.

It is not currently documented if or how the machine judged how well the player was driving, or whether there were any feedback or consequences for leaving the road or hitting obstacles. A few machines are said to still exist in the possession of collectors, though. In any case, the groundwork for one of the earliest and most enduring videogame genres was laid. Autoreflex even released a sequel called Autoroute sportive three years later, which operated on the same principle. It has been documented that this sequel recorded how many accidents a driver got themselves into.

You can see a later game based on the same principle, International Mutoscope Corp.’s Drive-Mobile from 1941, in action in the following video:

Another one of the early, enduring genres is that of the flight simulator (although these games have, due to their complexity, always been more popular on home systems and home computers than in the arcades). Not much is known about Learn to Fly, a rather complex arcade machine released by an unknown manufacturer, also in 1934. Instead of a toy car, a small airplane was suspended in a glass box. A joystick, a throttle lever, and rudder pedals provided the realistic controls. When a coin was dropped, the glass box actually turned into a miniature wind tunnel, which remained active for a certain amount of time. The player controlled the plane according to instructions on the machine, and the manufacturer claimed realistic reactions.

Apart from almost certainly being the first flight simulator game designed for the public, Learn to Fly was quite possibly also the very first joystick-controlled game in general.

Next page: 1936/7 – Turing and Shannon
« Return to Part I – Prehistory

References

1898: Magnetic storage

Valdemar Poulsen, a Danish engineer, presents his Telegraphone, which can record sound magnetically onto a thin steel wire. Although wire recording was popular only for a few years in the late 1940s to early 1950s, before magnetic tape took over, it was the first manner of magnetic data storage and paved the way for the drums and hard disks of later computers.

Recording wires are made out of steel or stainless steel, and are only about as thick as a human hair. They are pulled past the recording head at a very high speed compared to tape recorders, with the electrical signal of the sound supplied to the head, magnetising the wire as it passes. When the wire is played back without a signal applied to the head, the magnetic fields on the wire induce the signal back into the head, and the recorded sound can be reproduced. Despite the high speeds of recording and playback, the thinness of the wires allows for very high capacities per spool. Wire recorders were thus sometimes able to record several hours of audio non-stop, in contrast to similarly-sized tape recorders.

The initially complex, heavy, and expensive magnetic tape recorders soon caught up to become a viable alternative for home use, and almost completely replaced wire recording during the 1950s. Magnetic tapes, first patented by Fritz Pfleumer in 1928, went on to become one of the main methods of data storage for home computers in the 1980s. But the principles first employed by Poulsen also led to drum memories, floppy disks, and hard disk drives. Even in his 1898 patent, Poulsen described that instead of a wire, one might use “a disk of magnetisable material over which the electromagnet may be conducted spirally,” or “a strip of some insulating material such as paper [covered] with a magnetisable metallic dust,” essentially predicting hard disks and magnetic strips as used on tickets or credit cards. Today, magnetic media continue to be the predominant forms of permanent computer data storage, rivalled only in certain applications by optical or semiconductor technologies.

Note: The Electrical World magazine issue of September 8, 1888, printed a description of magnetic recording by Oberlin Smith, who himself claimed to know about the principle from a visit to the Edison laboratories 10 years prior. However, it is not known whether any working model was ever built before Poulsen’s patented Telegraphone in 1898. In addition, Smith believed a continuous steel wire would not allow for precise magnetisation, and suggested regular wires furnished with steel dust.

Next page: 1934 – Arcade machines
« Return to Part I – Prehistory

References

1897: Cathode-ray tube

Karl Ferdinand Braun invents the cathode-ray tube (CRT), also known as a Braun tube, intended for use as an oscilloscope. The technology will see varied, sustained, and widespread use both as a memory storage device and as an output display device.

While there are two fundamentally different methods of operation for a CRT display, the physical principle remains the same. A cathode emits a constant stream of electrons, forming a ray, which is deflected and directed towards a screen coated with a fluorescent material. Wherever the electrons hit, the screen lights up.

In the first method of operation, rays are deflected to directly trace out the shapes that are to be displayed on screen. This is the way of CRT oscilloscopes, as well as vector displays, which have been used for a number of arcade videogames and less successful home game systems.

The other method has rays illuminating the whole screen surface by repeatedly tracing out a raster pattern, line by line. These screens are known as raster displays. The patterns to be drawn are controlled by modifying the intensity of the electron beam rather than its deflection—more electrons means a brighter dot on the screen. Raster displays also give meaning to the notions of screen resolution and refresh rate, or frame rate: the latter, commonly given in frames per second, measures how often this raster pattern is traced out each second. Higher frame rates reduce update delays, jerkiness of motion, and screen flicker. Colour displays have three beams, separately amplified and targeting separate raster locations for each of red, green, and blue.

Up until the advent of flat-screen display technology about a hundred years later, CRT monitors were ubiquitous in television receivers and computer monitors, meaning that if computer and video games were played, they were most likely played on a CRT.

(Article updated Feb 1st, 2016)

Next page: 1898 – Magnetic storage
« Return to Part I – Prehistory

References

1886: Punch card data

Punched cards were in use as sequence control programs for automatic machines for almost a century. Charles Babbage wanted to use them to describe programs for his early computer model, the Analytical Engine, as well as for the input of data, rather than having operators enter the numbers directly. This idea, using punched cards to store data, was proposed even earlier, in 1832, by Ukrainian Semen Nikolaevich Korsakov, who envisioned a system to store, search for, and retrieve arbitrary information. Needles in a certain configuration would be physically run over the card to determine whether a dataset adhered to a specific property, and the card retrieved if it did.
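Korsakov’s needle search amounts to a subset test: a card is retrieved when it has a hole at every position where a needle is lowered. A brief sketch in Python, with the card encoding and example data entirely hypothetical:

```python
def matches(card_holes, needles):
    """A card is retrieved when every lowered needle finds a hole,
    i.e. the needle positions are a subset of the card's holes."""
    return needles <= card_holes

# Hypothetical catalogue: each card's punched positions as a set.
cards = {
    "record A": {1, 3, 5, 8},
    "record B": {2, 3, 5},
    "record C": {3, 5, 9},
}
needles = {2, 3}  # run needles over positions 2 and 3
hits = [name for name, holes in cards.items() if matches(holes, needles)]
print(hits)  # -> ['record B']
```

Run mechanically over a whole stack of cards, this is already a linear scan of a database against a query—nearly a century before Hollerith’s tabulator.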

What both of these projects had in common was that they were not realised, primarily due to a lack of funding. The first person to successfully create a punch-card-based information storage and retrieval system was Herman Hollerith, who presented his Electric Tabulating System in 1884 and finished the project in 1886.

The motivation to commercialise his system was a government contract. Analysing data and preparing results for the decennial United States Census took, for its last run on data for 1880, seven years to complete, because all the work was still being done manually. Extrapolations suggested that, due to the growth in population, the 1890 census would not be completed in time before processing of the 1900 census would have to begin. Thus the U.S. government held out the prospect of a contract to the person or company who would be able to provide them with the fastest means to automate the process.

According to his own account, Herman Hollerith was inspired by observing railroad conductors, who used to encode a crude description of a passenger in the way they punched holes into their tickets, in order to prevent several people from sharing a ticket. The main element of Hollerith’s system was a tabulating machine with a card-feed mechanism which, fed with stacks of punched cards, could accumulate data and print out results automatically and accurately. Hollerith won the contract, and the performance of his machine exceeded all expectations. The 1890 census was tabulated within two and a half years, with a total population count being released to the press after only six weeks.

Unfortunately, the data and results of this historically momentous census—not only for the novel recording method used, but also for the social and political circumstances of its time—are largely lost, due to negligence and mismanagement all too familiar in government and bureaucracy. A 1921 fire destroyed around a quarter of the poorly archived, only copies of the records, and severely damaged much of the rest, which the Census Bureau consequently ordered to be destroyed against public protest in 1934, rather than keeping them around for possible restoration. A positive side effect was that the government realised the necessity for a national archive, construction of which ironically started the day before the destruction of the 1890 records was authorised by U.S. Congress.

The success of his tabulation process allowed Hollerith to found his own business, the Tabulating Machine Company, in 1896. Merging with three other corporations, it would become the Computing-Tabulating-Recording Corporation (CTR) in 1911, before being renamed to International Business Machines (IBM) in 1924. Piggybacking on their punched card system, IBM rose to be the predominant manufacturer of large-scale business computers for most of the 20th century. The concept of storing data on punched cards was the first application of computers to information processing instead of just mathematical computations, a step whose importance cannot be overemphasised.

For an interesting and touching read on computing with punched cards in the 1970s, I recommend to you Dale Fisk’s Programming With Punched Cards, written in 2005 and set predominantly in 1973.

Next page: 1897 – Cathode-ray tube
« Return to Part I – Prehistory

References

1842: The Analytical Engine

In 1822, Charles Babbage, a British inventor and engineer, started working on the design of his Difference Engine, an automatic calculation machine that included printing of the resulting tables, in order to eliminate as many sources of human error as possible. Twelve years later, he extended this concept to the more general Analytical Engine. This device was in fact the world’s first design of what we would now call a general-purpose, fully programmable digital computer. The only problem with it? It arrived around 100 years too early. Various problems related to the financing and manufacturing of the machine prevented Babbage’s vision from being realised. Along with his ideas, the understanding of how far-reaching and important they were went into a century-long slumber.

How closely does it resemble the architecture of a modern computer? The purely mechanical and steam-powered device would have included modules for data input and output, a memory unit, as well as an algorithmic processing unit. Programs, data to work on, and store/load instructions would have been fed to the machine on punched cards through three separate readers. Card reading motion was reversible, and the machine supported libraries of subroutines. Apart from a printer, output could be sent to a curve plotter or a card puncher, and the machine could trigger a signalling bell.

While the algorithmic unit, called the mill, would have operated on registers storing 40-digit numbers, the memory, also called the store, would have provided storage space for 1,000 50-digit numbers, which would translate to about 20.3 kB. Apart from basic arithmetic operations, the assembly-like microcode language for the mill supported loops, as well as conditional jumps based on the outcome of number comparisons—Babbage’s engine would in fact have been Turing-complete.
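The 20.3 kB figure can be checked with a quick back-of-the-envelope calculation: a 50-digit decimal number carries about 50 · log₂ 10 ≈ 166 bits of information, and we assume the binary kilobyte (1,024 bytes) convention:

```python
import math

digits_per_number = 50
numbers_in_store = 1000

# Information content of one 50-digit decimal number, in bits.
bits_per_number = digits_per_number * math.log2(10)   # ~166.1 bits
total_bytes = numbers_in_store * bits_per_number / 8  # ~20,762 bytes
print(round(total_bytes / 1024, 1))  # -> 20.3
```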

While the general public at the time was not able to comprehend Babbage’s ideas, his scientist contemporaries mostly thought that he was a dreamer who had lost touch with the realm of the possible. One notable and famous exception was Augusta Ada Byron King, Countess of Lovelace, a young mathematician who described herself as “fascinated by the universality of his ideas.” The Italian Luigi Federico, Conte Menabrea, wrote a report about Babbage presenting the concept of his machine in 1842. Augusta Ada King subsequently started translating this report from French to English. As King had been a personal friend of Charles Babbage for many years, and had followed the development of the Analytical Engine for the past decade, Babbage asked her to add her own thoughts as notes to the translation—the notes ended up being three times as long as the original report.

These notes, which have long since become computer science history, show how deep King’s understanding of Babbage’s ideas actually went. Apart from explaining some of the mathematical background left out of the report for conciseness, her more interesting contribution was her thoughts about the operation and application of such a machine, should it ever be realised. She realised that while previous, single-purpose calculation machines required more manual interaction from the user, an automatic general-purpose computer in turn placed an increased responsibility on its programmer to put care into their input. Computer scientists today know this dilemma as computers allowing us to “make the same mistake many times over, really fast.”

And maybe even more astonishing is the fact that Augusta Ada King was probably the first person to imagine such a device being used not only for scientific calculations, but also for art and entertainment. While Charles Babbage himself apparently saw his machine exclusively as a tool for mathematicians (although he seemingly did have plans to build a pay-for-play Tic Tac Toe machine to finance his more serious projects), King noted that the machine doesn’t operate on numbers per se, but on abstract symbols. Hence, the Analytical Engine could be used to work on any kind of data, as long as it was translated into symbols the machine could manipulate. She adds that if, for example, the “fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.” This extension of Babbage’s ideas to the arts and, in fact, general data and information processing, seems almost prophetic from today’s point of view.

A point of debate is whether the program included in her notes, an algorithm to compute the Bernoulli numbers using the Analytical Engine, was actually written or only debugged by her. Belief that the former was true has earned Augusta Ada King recognition as the first real computer programmer, and led the U.S. Department of Defense to name a programming language in her honour—Ada. In an autobiographical work, however, Babbage states that he wrote the program himself, and King merely pointed out a mistake that he had made before adding it to her notes. On the other hand, Babbage is said to have been very miserly in giving credit to others. Whatever the case may be, the fact remains that Augusta Ada King was an exceptionally intelligent and insightful mind, an unknowing and largely unheeded herald of the computer age.

And so was Charles Babbage himself. Researching his original plans, historians and engineers came to the conclusion that Babbage would probably have been able to fully realise his Analytical Engine even with the manufacturing processes of his time, had he been able to overcome his problems in financing the project. Charles Babbage never gave up on the idea that one day, his vision would be realised: “As soon as an Analytical Engine exists, it will necessarily guide the future course of the science.” And: “Another age must be the judge […]”

Next page: 1886 – Punch card data
« Return to Part I – Prehistory

References

18th century: Cards and wires

The 18th century saw, long before anything that could be considered an automatic computer, the emergence of several concepts and technologies that would play an important role in their development.

Among those are punched cards as a method for storing programs for sequence-controlled machines. At the time, sequence control was commonly used for textile looms, where weaving patterns were defined using perforated paper rolls. Basile Bouchon and Jean-Baptiste Falcon improved upon this concept in 1728 by building a loom that was programmed using wooden punched cards, tied together with strings. This more robust and essentially modular way of storing programs caught on, was adapted to other devices, and was improved upon by other inventors, such as Joseph Marie Jacquard with his own loom head design in 1801. It was an especially big step forward compared to other established methods of storing control sequences, such as tedious knotted ropes or expensive-to-produce cog cylinders.

Punched cards proved to be a very successful concept. A few decades after Jacquard’s improvements, in the 1830s, Charles Babbage was planning to use them to program his unrealised Analytical Engine. Semen Korsakov and Herman Hollerith applied punched cards to the problem of storing data rather than programs. The cards were a primary way of data storage for computers far into the 20th century, to be replaced only by magnetic storage methods.

Another technology that was experimented with heavily, and fruitfully, over the course of the 18th century was the electrical telegraph. The tempting vision of sending messages over wires at the speed of light, a revolution in long-distance communication, saw many parallel strands of development. Early proposals relied on one wire for each letter of the alphabet. It would be some decades into the 19th century before successful systems were implemented on a usable scale, most notably the Morse telegraph by Samuel Morse and Alfred Vail in the U.S., along with the Morse code, which can be reduced to a binary code transmitted over a single wire.

This first form of electrical communication allowed instant transmission of messages over long distances and even between continents. The electrical telegraph is a direct predecessor to telephones and computer networks, the importance of which for the history of computers and computer games does not need to be spelled out.

As an amusing side note, 1769 saw the first presentation of a widely popular attraction built by a Hungarian inventor, Johann Wolfgang Ritter von Kempelen: The Turk, the first chess-playing automaton. Paying patrons were given the chance to play a game of chess against a humanoid machine that actually used its arms to move the pieces. Of course, the secret behind this was something less anachronistic than the first computer chess algorithm.

The ingeniously designed playing table actually housed a compartment for a human chess player, who was able to observe the contestant’s moves through magnets fitted under the chessboard. The operator in the compartment also controlled the automaton’s response by using an elaborate mechanical arm on an internal representation of the chessboard, which would translate into the figure moving its arm to carry out the moves on the outside. Details such as visible, non-functional machinery parts, a clockwork ticking sound while The Turk was “thinking”, and the operator winding up the machine with a key supported the illusion. It was proof that long before computers, people liked the idea of playing games against non-human, automatic opponents.

Next page: 1842 – The Analytical Engine
« Return to Part I – Prehistory

References

1st century: Automata

Heron of Alexandria, a Greek mathematician and engineer who lived approximately between 10 and 70 A.D., is considered to be the first inventor and constructor of sophisticated sequence-controlled machines. Such a machine would, once set in motion by a human operator, autonomously follow a sequence of instructions to perform specific actions. As with a program-controlled computer, this sequence could be modified by giving the machine a different program of instructions, for example encoded as knots in a rope or cogs on a cylinder. Many of Heron’s machines can be seen as predecessors of modern computers and robotics.

One of the most impressive applications of these concepts was probably Heron’s automatic theatre: a 10-minute stage play automatically controlled and performed using such a programmable machine, including the triggering of special effects such as the sound of thunder.

Sequence control is a central principle to the concept of electronic games. It is what allows a machine to present an opposition to a player, whether it be changing obstacles to overcome, or seemingly intelligent opponents. It provides the challenge that leads to fun and a sense of accomplishment. It also allows automatic enforcement of rules in puzzle-style games, or in games for several players. On some level, sequence control is what separates electronic games from traditional games.

Incidentally, Heron is also believed to be the creator of the first automatic, coin-operated vending machine, to be placed in temples. An inserted coin fell onto a pan, which was lowered by its weight. The tilting pan opened a valve, causing holy water to be dispensed to the customer. The pan continued to tilt until the coin fell off, closing the valve and stopping the flow of holy water after a certain time. The combination of such vending machines with automated games would lead to coin-operated amusement devices and, almost two millennia later, arcade video games, which offered patrons a few minutes of entertainment in exchange for their coins.

Next page: 18th Century – Cards and wires
« Return to Part I – Prehistory

References