Here's the wild thing: every time you ask an AI to write code, you're standing on top of 90 years of ideas — each one building on the last like LEGO bricks.
We'll start with a simple question, posed formally in 1928: is there a mechanical procedure that can decide, for any mathematical statement, whether it is provable?
A German mathematician named David Hilbert believed the answer was yes. He dreamed of a complete, consistent, decidable mathematics — a system where every true statement could be proved, no contradictions existed, and an algorithm could settle any question.
He was wrong. And the person who proved him wrong was a shy, brilliant 23-year-old named Alan Turing.
Alan Turing was born in London in 1912. By 1936, he was a 23-year-old Fellow of King's College, Cambridge — already recognized as brilliant, but not yet famous.
He was an unusual character. A serious long-distance runner (his marathon time of about 2 hours 46 minutes was near Olympic-qualifying level). He chained his coffee mug to the radiator to stop colleagues from borrowing it. He wore his gas mask while cycling — not for drills, but to prevent hay fever.
In 1935, Turing attended lectures by Max Newman about the Entscheidungsproblem — Hilbert's challenge. Newman explained the open question: was there a definite method, a mechanical procedure, that could decide the provability of any mathematical statement?
Something clicked in Turing's mind. According to his friend Robin Gandy, Turing later recalled lying in Grantchester Meadows that summer, staring at the sky, when the key idea came to him.
In 1928, David Hilbert and his student Wilhelm Ackermann formally posed the Entscheidungsproblem (German for "decision problem").
The question was straightforward: does there exist an algorithm that can take any mathematical statement and determine whether it's provable?
Hilbert was 66 years old and one of the most influential mathematicians alive. He believed profoundly that mathematics had no limits — that every problem was solvable. His famous rallying cry: "Wir müssen wissen. Wir werden wissen." — "We must know. We shall know."
But the irony was devastating. The day before Hilbert delivered those words in a 1930 radio address, a 24-year-old Austrian logician named Kurt Gödel had announced results at a nearby conference that doomed two of Hilbert's three goals: no sufficiently powerful system could be both complete and provably consistent.
Here is Turing's beautiful move. He started by asking: what does a human "computer" actually do?
In 1936, the word "computer" meant a person — someone who follows rules to calculate with pencil and paper. Turing analyzed the simplest possible version of this process:
The Three Parts:
1. A tape — an infinitely long strip divided into cells. Each cell holds a symbol (0, 1, or blank). Think of it as a really long roll of graph paper.
2. A head — it can read the current cell, write a new symbol, and move one step left or right. One cell at a time. No skipping.
3. A rule table — a list of instructions like: "If you're in State A and you see a 1, write a 0, move right, and switch to State B." These rules are the program.
We'll represent numbers in unary: the number 2 is 11, the number 3 is 111. A 0 separates the two numbers.
Our tape starts as: _ 1 0 1 _ — that's 1 + 1. We want to produce 11 (which is 2).
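The three parts above fit in a few lines of code. Here is a minimal sketch of a Turing machine simulator in Python; the state names (A, B, C) and the erase-one-mark trick in the rule table are my own illustration of unary addition, not Turing's original construction:

```python
# A minimal Turing machine simulator (a sketch, not Turing's notation).
def run(tape, rules, state="A", head=0, halt="HALT", max_steps=1000):
    cells = dict(enumerate(tape))           # sparse tape: index -> symbol
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")       # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1    # one cell at a time, no skipping
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rule table for unary addition, read exactly like the text: "If you're in
# state A and you see a 0, write a 1, move right, and switch to state B."
ADD = {
    ("A", "1"): ("1", "R", "A"),    # skip the first block of 1s
    ("A", "0"): ("1", "R", "B"),    # turn the separator into a 1
    ("B", "1"): ("1", "R", "B"),    # skip the second block
    ("B", "_"): ("_", "L", "C"),    # hit the blank at the end; step back
    ("C", "1"): ("_", "L", "HALT"), # erase one 1 to correct the count
}

print(run("101", ADD))  # 1 + 1 -> "11"
```

The trick: joining the two blocks by overwriting the separator leaves one mark too many, so the machine walks to the end and erases a single 1.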
Every Turing Machine we've seen is a specialist. One adds. Another multiplies. You'd need a separate machine for every task.
But Turing asked a beautiful question: What if the rule table itself was just data?
What if you could write a description of any Turing Machine on the tape, and build a single machine that reads that description and behaves accordingly?
This is the Universal Turing Machine. It takes two inputs:
1. A description of another machine M (its rules, states, everything)
2. The input that M is supposed to process
Then the Universal Machine simulates M step by step. One machine. Any program. Any input.
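In code, the shift is small but profound: the rule table becomes an argument. A sketch of the idea, where one interpreter runs any machine handed to it (the dict encoding and the two toy machines are illustrative, not Turing's actual encoding scheme):

```python
# One interpreter, many "machines": the program is ordinary data.
def universal(rules, tape, state="A", halt="HALT"):
    cells, head = dict(enumerate(tape)), 0
    while state != halt:
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Machine 1: flip every bit, halt at the first blank.
FLIP = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("0", "R", "A"),
    ("A", "_"): ("_", "R", "HALT"),
}

# Machine 2: erase everything.
ERASE = {
    ("A", "0"): ("_", "R", "A"),
    ("A", "1"): ("_", "R", "A"),
    ("A", "_"): ("_", "R", "HALT"),
}

print(universal(FLIP, "1001"))   # -> "0110"
print(universal(ERASE, "1001"))  # -> ""
```

Every modern computer is this pattern made physical: a fixed piece of hardware that reads a stored program and behaves accordingly.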
The Halting Problem: given a program P and an input I, will P eventually stop, or will it run forever?
Turing proved this is unsolvable. Here's the essence:
1. Suppose a magic machine H exists that can answer this for any program.
2. Build a devious program D that asks H: "Does this program halt when given itself as input?" — then does the opposite.
3. Run D on itself. If H says "halts" → D loops forever. If H says "loops" → D halts. Either way, H is wrong.
4. Therefore H cannot exist.
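The four steps above can be sketched directly in Python. Note that `halts` stands in for the magic machine H and deliberately has no working body; the whole point of the proof is that no working body can exist:

```python
# The hypothetical oracle H. Turing proved no such function can be written,
# so this body is a placeholder, not a missing implementation.
def halts(program, argument):
    """Hypothetically returns True iff program(argument) terminates."""
    raise NotImplementedError("no such decider can exist")

# The devious program D: ask H about a program run on itself,
# then do the opposite of whatever H predicts.
def D(program):
    if halts(program, program):
        while True:        # H said "halts", so loop forever
            pass
    return                 # H said "loops", so halt immediately

# Running D on itself forces the contradiction:
#   if halts(D, D) is True,  D(D) loops forever -- H was wrong;
#   if halts(D, D) is False, D(D) halts at once -- H was wrong again.
```

This is the same self-reference engine that powers Gödel's theorems: feed a system a statement about itself, and something has to give.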
In 1931, Kurt Gödel — 25, Austrian, painfully shy — published two devastating theorems:
First Incompleteness Theorem: In any consistent mathematical system powerful enough to describe basic arithmetic, there will always be true statements the system cannot prove.
Second Incompleteness Theorem: Such a system cannot prove its own consistency.
Think of it this way: imagine a library that tries to catalog every true book. Gödel proved that any such library will always have missing shelves — true books it cannot catalog.
While Turing worked at Cambridge, Alonzo Church at Princeton independently answered the same question using a completely different system — the lambda calculus, a mathematical notation for expressing computation using pure functions.
When Turing learned of Church's work, he was devastated — scooped! But Max Newman pointed out that Turing's approach was different enough to publish. Turing then traveled to Princeton and completed his PhD under Church.
The remarkable fact: Turing's machines and Church's lambda calculus compute exactly the same class of functions. This convergence led to the Church-Turing Thesis: anything that can be computed by an effective, step-by-step procedure can be computed by a Turing machine, or equivalently, expressed in the lambda calculus.
Turing returned to Cambridge from Princeton in 1938. Within a year, Britain was at war. He was recruited to Bletchley Park — the top-secret code-breaking center.
Meanwhile, in Berlin, a young engineer named Konrad Zuse was building a programmable computer in his parents' living room. In Philadelphia, engineers would soon design ENIAC — a 30-ton, 17,468-vacuum-tube behemoth. And John von Neumann would read Turing's paper and realize: the Universal Turing Machine wasn't just an abstraction. It was a blueprint.