In 1936, Alan Turing's machine existed only on paper: a thought experiment to answer a question about the limits of logic. It was not designed to be built. The mathematics worked regardless.
But between 1939 and 1945, the Second World War created problems that no human mind could solve fast enough. Encrypted messages needed cracking now, not tomorrow. Artillery firing tables required millions of calculations. Theory had a deadline.
Across three countries, engineers and mathematicians raced to build machines that could think faster than people. They used different technologies, different designs, different philosophies. But they all converged on the same basic principle Turing had described: a machine that manipulates symbols according to rules.
The first person to build a programmable computer wasn't British or American. He was a German engineer, working alone, in his parents' apartment. And almost nobody noticed.
The Forgotten First
Konrad Zuse was born on June 22, 1910, in Berlin. He studied civil engineering, not mathematics. He had no idea Turing's paper existed. He just hated doing calculations by hand.
Starting in 1936, the same year Turing wrote his paper, Zuse began building computing machines in his parents' apartment. His first attempt, the Z1, was purely mechanical and jammed constantly. But the concept was sound.
In 1941, Zuse completed the Z3. It was the world's first working, fully automatic, programmable digital computer. It read its programs from punched film: strips of discarded 35mm movie stock.
Konrad Zuse (1910–1995)
Background: Civil engineer (not a mathematician or physicist)
Known for: Built the first working programmable computer, in his parents' apartment
Fun fact: He used discarded 35mm movie film for punched tape because it was cheap. He also created what may be the first high-level programming language, Plankalkül, in 1945.
THE Z3 (Berlin, 1941)
├── Technology: ~2,600 electromechanical relays
├── Speed: Addition in 0.8s, multiplication in ~3s
├── Memory: 64 words of 22 bits each
├── Input: Punched 35mm film tape
├── Weight: ~1,000 kg (about 2,200 lbs)
├── Fate: Destroyed by Allied bombing, Dec 1943
└── Turing-complete: Proven retroactively in 1998
Think About It: Zuse, Turing, and others were working on similar problems at the same time, on different continents, without knowing about each other. Why do you think the same kinds of ideas often emerge independently? What does that tell us about the nature of invention?
While Zuse tinkered in Berlin, a very different kind of computing was happening in the English countryside, not to solve engineering problems but to win a war.
Cracking the Uncrackable
By 1943, the Allied code-breakers at Bletchley Park faced an impossible bottleneck. The German military's Lorenz cipher was fiendishly complex. Even after the mathematical method for breaking it was worked out, applying it by hand was agonizingly slow.
Tommy Flowers, a 38-year-old Post Office engineer, proposed a radical solution: build an electronic machine using vacuum tubes, glass bulbs that could switch electrical signals thousands of times per second. His bosses were skeptical. Flowers persisted, partly funding the work out of his own pocket.
In December 1943, Colossus Mark 1 became operational. It used approximately 1,500 vacuum tubes and could process 5,000 characters per second, reading encrypted messages from a paper tape racing through at 30 miles per hour.
COLOSSUS (Bletchley Park, 1943–1945)
├── Designer: Tommy Flowers (Post Office engineer)
├── Purpose: Breaking the Lorenz cipher
├── Technology: 1,500 vacuum tubes (Mk 1), 2,400 (Mk 2)
├── Speed: 5,000 characters/second
├── Limitation: Special-purpose (Lorenz work only)
├── Fate: Classified. Dismantled. Secret until 1970s.
└── Total built: 10 Colossus machines by end of war
Colossus proved that electronic computation could work at scale. Thousands of vacuum tubes, running together, doing in hours what humans needed weeks to accomplish. The machine was secret. The principle was not: electronics could replace human calculation.
After the war, Colossus was classified. Tommy Flowers was sworn to secrecy and could not claim credit for decades. He received a modest award of £1,000, not even enough to cover what he had spent from his own savings.
Colossus was special-purpose: it could do one thing. Across the Atlantic, American engineers were building something more ambitious: a machine that could be reprogrammed for any calculation. It would weigh 30 tons.
Seventeen Thousand Tubes
J. Presper Eckert (just 24 when the project began) and John Mauchly (36) built ENIAC at the University of Pennsylvania. Completed in late 1945, it was the first large-scale, general-purpose electronic computer.
ENIAC could perform 5,000 additions per second. A trajectory calculation that took a human 20 hours could be done in 30 seconds.
But there was a catch. To "program" ENIAC, you physically rewired the machine. Programming meant rearranging cables and setting switches, a process that could take days or weeks. The computation itself took seconds. Setting it up took forever.
ENIAC (University of Pennsylvania, 1945)
├── Designers: J. Presper Eckert & John Mauchly
├── Technology: 17,468 vacuum tubes
├── Size: 80 ft long × 8 ft tall × 3 ft deep
├── Weight: ~30 tons (27,000 kg)
├── Power: 150 kilowatts
├── Speed: 5,000 additions/second
├── Cost: ~$500,000 (~$7M today)
├── Programming: Physical rewiring (cables + switches)
│   ├── Changing the program: DAYS
│   └── Running the program: SECONDS
└── Reliability: A tube burned out ~every 2 days
ENIAC was built by Eckert and Mauchly. But who PROGRAMMED it? Not the men. The job of making ENIAC actually work was given to six women, and then history forgot their names.
The Women Who Were Computers
In the 1940s, the word "computer" referred to a person, usually a woman, who performed mathematical calculations by hand. When ENIAC was built, the engineering leadership considered hardware the "real" work. Programming was seen as clerical. So the job was given to six women: Kathleen McNulty, Jean Jennings, Betty Snyder, Marlyn Wescoff, Frances Bilas, and Ruth Lichterman.
These six women received no manual, because none existed. They studied ENIAC's wiring blueprints, understood its architecture from the ground up, and invented programming techniques as they went.
When ENIAC was publicly demonstrated on February 14, 1946, the women were not introduced. For decades, they were uncredited or identified only as "refrigerator ladies," assumed to be models posing next to the machine. It was not until the 1980s and 1990s that their contributions were recognized.
Programming was invented before it had a name. The ENIAC women didn't just follow instructions; they created the discipline of programming from scratch. They broke problems into steps, managed data flow, and debugged a machine with 17,468 vacuum tubes and no error messages. Every programmer today stands on their shoulders.
Think About It: The ENIAC women were left out of history for decades because their work was classified as "clerical" rather than "engineering." Why do you think certain kinds of work get labeled as less important? How does that shape whose stories get told?
The Blueprint Inside the Machine
John von Neumann was possibly the most brilliant mathematician alive by the mid-1940s. Born in Budapest in 1903, he made foundational contributions to quantum mechanics, game theory, and set theory, all before age 40.
In 1944, von Neumann visited the ENIAC project. He immediately grasped both its power and its critical limitation: the machine was fast, but changing what it did was agonizingly slow.
Von Neumann had read Turing's 1936 paper. He understood the Universal Turing Machine. The insight was radical in its simplicity: store the program in the same memory as the data.
The stored-program concept is the reason you can install new apps on your phone. The hardware doesn't change β the instructions in memory do. Von Neumann (building on Turing's insight) turned computers from expensive, single-purpose calculators into general-purpose machines that could do anything, just by loading different software.
A critical note: the stored-program concept is often called the "von Neumann architecture," but the ideas were developed collaboratively, building on Turing's theory and on engineering work by Eckert, Mauchly, and others.
The first machine to actually run a stored program was the Manchester Baby (SSEM), built by Frederic Williams and Tom Kilburn, on June 21, 1948. In Cambridge, Maurice Wilkes built EDSAC, which became the first practical stored-program computer designed for regular use in May 1949.
Store the program in memory. Brilliant. But HOW do you organize the machine to make this work? Von Neumann drew a diagram, five boxes and some arrows, that still describes every computer you have ever used.
Five Boxes That Run the World
The elegance of this design is that it separates what the machine does (defined by the program in memory) from how the machine is built (the hardware). Build the hardware once. Change the software forever.
The von Neumann architecture is the most successful engineering blueprint in history. Every general-purpose computer since 1945 follows this basic design: memory holds both programs and data; the CPU fetches, decodes, and executes instructions; input/output connects the machine to the world. Five boxes. Some arrows. The foundation of the digital age.
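The fetch-decode-execute cycle described above can be sketched as a toy interpreter. This is a deliberately simplified illustration with a made-up instruction set (LOAD, ADD, STORE, READ, PRINT, HALT are hypothetical names, not any historical machine's), but the five boxes are all here: memory holds program and data together, a control loop fetches and decodes, an accumulator does the arithmetic, and input/output connect to the world.

```python
# Toy von Neumann machine: one memory holds instructions AND data.
def run(memory, inputs):
    pc = 0                              # program counter (Control Unit)
    outputs = []                        # the Output box
    while True:
        op, arg = memory[pc]            # FETCH an instruction from Memory
        pc += 1                         # DECODE is trivial here: (op, arg)
        if op == "LOAD":                # EXECUTE (accumulator = the ALU's register)
            acc = memory[arg]
        elif op == "ADD":
            acc = acc + memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "READ":
            memory[arg] = inputs.pop(0)  # the Input box
        elif op == "PRINT":
            outputs.append(memory[arg])
        elif op == "HALT":
            return outputs

# Instructions in cells 0-6, data in cells 8-10, side by side:
program = [
    ("READ", 8),     # mem[8] <- first input
    ("READ", 9),     # mem[9] <- second input
    ("LOAD", 8),
    ("ADD", 9),
    ("STORE", 10),
    ("PRINT", 10),
    ("HALT", 0),
    ("PAD", 0),      # unused cell so the data addresses line up
    0, 0, 0,         # cells 8, 9, 10: data
]

print(run(program, [20, 22]))  # [42]
```

Swap in a different list of instructions and the same `run` loop does a different job: that is the whole stored-program idea in miniature.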
Think About It: Your phone follows the von Neumann architecture. Its memory holds both apps (programs) and your photos (data). Its CPU executes instructions billions of times per second. Can you trace the five components in a device you use every day?
Smaller, Faster, Cooler
On December 23, 1947, at Bell Labs in New Jersey, John Bardeen, Walter Brattain, and William Shockley demonstrated the first working transistor. A small electrical signal at one terminal could control a much larger current flowing between the other two. ON/OFF. 1/0. A switch with no vacuum, no filament, no glass bulb, and almost no heat.
They received the Nobel Prize in Physics in 1956. By the late 1950s, transistors had begun replacing vacuum tubes in computers.
The transistor didn't change what computers could compute; Turing had already defined that in 1936. It changed what was physically possible to build. Smaller switches meant smaller machines. Less heat meant more switches packed together. The transistor turned computing from a room-sized activity into something that could eventually fit in your pocket.
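Since a transistor is, logically, just an electrically controlled ON/OFF switch, a few lines of code can show why switches are enough to compute with. This is an idealized sketch, not real electronics; the function names are illustrative. The key fact it demonstrates is real: every logic gate, and therefore all binary arithmetic, can be built from NAND alone.

```python
# Idealized switch: "conducts" (True) only while the control signal is on.
def transistor(control: bool) -> bool:
    return control

# Two switches in series: the output is pulled low only when BOTH conduct.
def nand(a: bool, b: bool) -> bool:
    return not (transistor(a) and transistor(b))

# Every other gate can be wired up from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def xor_(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# A half adder: one-bit addition (sum bit, carry bit), switches all the way down.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)

print(half_adder(True, True))   # (False, True): 1 + 1 = binary 10
```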
A Thousand Transistors on a Fingertip
In 1958, Jack Kilby at Texas Instruments built the first integrated circuit (IC): all components fabricated from a single piece of semiconductor material. Independently, Robert Noyce at Fairchild Semiconductor had the same idea, with a better manufacturing process. Both are credited as co-inventors.
The integrated circuit solved the "tyranny of numbers," the bottleneck of hand-wiring thousands of individual components together, by putting everything on one chip. The transistor made computers smaller. The IC made them scalable. Together, they set computing on an exponential growth curve that continued for over sixty years.
A Nightmare in Binary
By the late 1950s, talking to a computer meant speaking its language: binary. Every instruction, every piece of data had to be expressed as patterns of 1s and 0s:
PROGRAMMING IN BINARY (machine code):
To add two numbers at addresses 12 and 13,
and store the result at address 14:
00010001 00001100  ← Load address 12
00010010 00001101  ← Add address 13
00010011 00001110  ← Store at address 14
One mistyped bit = a broken program.
Finding the error? Good luck.
─────────────────────────────────────────
THE SAME PROGRAM, in a human-readable language
(preview of Issue 3):
RESULT = A + B
One line. Crystal clear. A "compiler" translates
it into binary for you.
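To make the compiler idea concrete, here is a toy translator. It is a hedged sketch that assumes the made-up opcodes from the binary example above (00010001 = load, 00010010 = add, 00010011 = store); real compilers are enormously more sophisticated, but the essential move is the same: turn one readable line into machine code.

```python
# Hypothetical opcodes matching the binary example above:
OPCODES = {"load": "00010001", "add": "00010010", "store": "00010011"}

def compile_add(line, addresses):
    """Translate a line like 'RESULT = A + B' into binary machine words."""
    dest, expr = [s.strip() for s in line.split("=")]
    a, b = [s.strip() for s in expr.split("+")]

    def addr(name):
        # Look up a variable's memory address and render it as 8 bits.
        return format(addresses[name], "08b")

    return [
        OPCODES["load"] + " " + addr(a),     # load A
        OPCODES["add"] + " " + addr(b),      # add B
        OPCODES["store"] + " " + addr(dest), # store RESULT
    ]

# Variables A, B, RESULT live at addresses 12, 13, 14:
for word in compile_add("RESULT = A + B", {"A": 12, "B": 13, "RESULT": 14}):
    print(word)
# 00010001 00001100
# 00010010 00001101
# 00010011 00001110
```

The programmer writes the intent once; the translator handles the bit patterns, which is exactly the bargain that high-level languages would offer.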
The machines were ready. The human interface was not. What computing needed was not faster hardware; it needed a way for humans and machines to communicate. A bridge between human thought and machine code.
In twenty years (1936–1958), computing went from theory on paper to millions of components on a chip. The hardware pioneers solved three fundamental problems: they made electronic computation work (vacuum tubes), designed a universal architecture (von Neumann), and made components small and reliable (transistors and ICs). But the hardest problem, making computers usable by ordinary humans, was still ahead.
Think About It: Every layer of computing technology solved one problem and revealed the next. Vacuum tubes proved electronics could compute but were unreliable. Transistors fixed reliability but created wiring complexity. ICs fixed wiring but computing was still locked in binary. Why do you think progress often works this way, solving one problem only to uncover a deeper one?