From Turing to LLMs and Beyond · Issue 10 of 10 · The Finale
Issue 10 · 1936–2026 and Beyond

The Whole Stack — and What Comes Next

โ† Previous Issue: The Swarm

We've traveled ninety years together — from a mathematician's daydream to swarms of AI agents building software in parallel. We've met the people, followed the ideas, watched the machines get smaller and smarter. But we've never seen the WHOLE thing at once. Tera
The stack, bottom (machine-friendly) to top (human-friendly) — ninety years of abstraction:

Layer 1: Physics — electrons, quantum mechanics, electromagnetism
Layer 2: Hardware — transistors, logic gates, processors, memory
Layer 3: Operating Systems — Unix, Linux, file systems, process scheduling
Layer 4: Programming Languages — FORTRAN, C, Python, JavaScript, compilers
Layer 5: Machine Learning — neural networks, backpropagation, ImageNet
Layer 6: Large Language Models — Transformers, GPT, Claude, natural language
Layer 7: Agents — ReAct loop, tool use, Claude Code
Layer 8: Swarms — multi-agent coordination, file protocols
Layer ?: What you build next

Somewhere beneath your feet, right now, electrons are moving. They are flowing through transistors — billions of them — switching on and off billions of times per second.

Above those transistors, an operating system is juggling thousands of tasks. Above the OS, a programming language is translating human ideas into machine instructions. Above that, a neural network is predicting the next word you might want to read. And above that neural network, an AI agent might be writing code, running tests, and fixing its own mistakes.

Every layer depends on the one below it. Every layer hides the complexity beneath it and presents a simpler interface to the layer above. This idea — abstraction — is the single most powerful concept in the history of computing.

In this final issue, we are going to see the whole stack. All of it. From physics to your prompt. And then we are going to look at what comes next — and what role you might play in it.

Every one of those layers was somebody's life work. Let's meet them all — one more time.

This is the map. Eight layers, ninety years, hundreds of brilliant people. Every layer does one miraculous thing: it takes something impossibly complicated and makes it simple enough for the next layer to build on top. That is the whole trick. Tera

Here is the computing abstraction stack — every major layer of technology between the electrons in your device and the words on your screen, laid out from bottom to top.

Each layer hides the complexity below it and exposes a simpler interface above it. A programmer writing Python does not need to understand quantum tunneling. An AI agent planning a coding task does not need to think about memory allocation. This is not a convenience — it is the mechanism by which computing scales.
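The same idea can be sketched in a few lines of code. This is a toy model, not a real API — every function name here is invented for illustration — but it shows the rule each layer obeys: call only the layer directly below you, and hide everything underneath.

```python
# A toy model of the abstraction stack: each layer calls only the
# layer directly below it and hides everything underneath.
# All names here are illustrative, not a real API.

def hardware_add(a: int, b: int) -> int:
    # Pretend this is gates and transistors.
    return a + b

def language_eval(expr: str) -> int:
    # A "programming language" layer: parses "2+3" and delegates
    # the arithmetic to the hardware layer.
    left, right = expr.split("+")
    return hardware_add(int(left), int(right))

def assistant_answer(prompt: str) -> str:
    # An "AI" layer: turns a plain-English request into a call to
    # the language layer, hiding parsing and arithmetic alike.
    expr = prompt.removeprefix("What is ").rstrip("?")
    return f"{expr} = {language_eval(expr)}"

print(assistant_answer("What is 2+3?"))  # the caller never touches the layers below
```

Notice that the top-level caller never mentions integers, parsing, or addition. Delete any layer's internals, replace them with something faster, and the layer above never notices — that is the contract abstraction makes.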


As the computer scientist Edsger Dijkstra once said: "The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise."

Or, in a formulation often attributed to David Wheeler: "All problems in computer science can be solved by another level of indirection." The computing stack is living proof.

The abstraction stack is not a ladder of prestige — every layer is essential, and every layer was someone's life work. The reason you can type a question in plain English and get an intelligent answer is that hundreds of thousands of people, across ninety years, each solved one layer's problems well enough that the next layer could exist.

But what IS abstraction, really? And why is it so powerful? Let's make sure we truly understand it.

Abstraction sounds like a fancy word. It is not. You use it every single day. When you press the gas pedal, you don't think about fuel injectors or crankshafts. The pedal HIDES all that. Push harder, go faster. That is abstraction. Tera

Every layer in the computing stack does the same thing the gas pedal does: it hides overwhelming complexity and exposes a simple interface.

When Grace Hopper built the first compiler in 1952 (Issue 3), she created an abstraction. Programmers no longer needed to think in binary. When Thompson and Ritchie built Unix in 1969 (Issue 4), they created an abstraction. Software no longer needed to know the specific hardware. When Vaswani and colleagues published the Transformer in 2017 (Issue 7), they created an abstraction. Users no longer needed to write code at all.

Each abstraction is an act of trust. You trust that the layer below you works correctly. And mostly, that trust is justified. But not always.

Driving a car | Computing stack
You press the gas pedal | You type a prompt
Fuel injection fires (hidden) | LLM processes tokens (hidden)
Combustion ignites fuel (hidden) | Neural net multiplies matrices (hidden)
Crankshaft turns wheels (hidden) | OS schedules GPU tasks (hidden)
Car moves forward | Answer appears on screen
Same pattern: hide complexity, expose a simple interface.

In 2002, Joel Spolsky coined the term "the Law of Leaky Abstractions": all non-trivial abstractions, to some degree, are leaky. Sometimes the layers below you bleed through. A performance problem in your AI agent traces back to GPU architecture. A security vulnerability traces back to memory management in C. Grace Hopper used to hand out 11.8-inch pieces of wire to admirals, representing the distance light travels in one nanosecond, to explain why satellite communication had latency. The physical world always leaks through.

Understanding the full stack does not mean you need to be an expert in every layer. It means you have diagnostic superpowers. When something breaks, you know where to look.

Abstraction is not simplification. Simplification throws away detail. Abstraction HIDES detail behind a clean interface while keeping all the power underneath. It lets you be "absolutely precise" (Dijkstra) at your own level without drowning in every level below.
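You can watch an abstraction leak in any Python shell. Python presents "numbers" as a clean interface, but underneath they are binary IEEE 754 floats, and 0.1 has no exact binary representation — so the hardware layer bleeds through:

```python
# Python's "number" abstraction leaks: floats are binary IEEE 754
# values underneath, and 0.1 has no exact binary representation.
print(0.1 + 0.2)          # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)   # False -- the layer below leaking through

# Knowing WHICH layer leaks tells you where to look for the fix:
# the standard-library decimal module trades speed for exact
# base-10 arithmetic.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

This is the diagnostic superpower in miniature: a programmer who knows only the top layer sees a bug; a programmer who knows the stack sees a leak, and knows which layer to patch.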
Think About It: Can you think of other abstractions in daily life? A light switch hides the power grid. A restaurant menu hides the kitchen. Money hides a complex web of trust and accounting. What is the most important abstraction in YOUR daily life?

Now let's walk through the people who built the stack — every key figure from all nine previous issues — and see how their work connects into a single story.

Every person on this timeline gave years — sometimes decades — of their life to solving one piece of the puzzle. Some became famous. Some were nearly forgotten. But every single one stood on the shoulders of the people who came before. Tera

Here they are. The builders of the computing stack, in the order they appear in our story. Notice how each generation inherits what the previous generation built — and then extends it one layer higher. (The stack has more than eight layers — we have simplified. Networking, databases, and security each deserve their own layer.)

Notice, too, who is missing from the fame. The six women who programmed ENIAC were erased from photographs and forgotten for decades. Tommy Flowers spent a thousand pounds of his own money building Colossus, then was silenced by the Official Secrets Act. Konrad Zuse built the Z3 in his parents' living room in wartime Berlin — and it was destroyed in a bombing raid.

History is not always fair. But their work endured.

This timeline spans one long human lifetime. Someone born the year Turing published "On Computable Numbers" (1936) would have been 86 when ChatGPT launched (2022). The entire history of computing fits within the span of a single life.

We have seen who built the stack and how the layers connect. But what is still missing? What problems remain unsolved?

I have spent nine issues showing you how brilliant these ideas are. Now I owe you something equally important: honesty about what we do NOT know. Computing has solved astonishing problems. But it has also created new ones — and some are urgent. Tera

For all its achievements, computing in 2026 faces a set of genuinely unsolved problems. These are not minor engineering bugs. They are deep, structural challenges that will shape how AI and computing evolve over the coming decades.

1. The Alignment Problem

"Will AI do what we MEAN, not just what we SAY?"

Stuart Russell's "King Midas" analogy: Midas got exactly what he asked for — and it was a disaster. The danger is not rebellion. It is competent pursuit of the wrong objective.

Status: Active research | Stakes: Very high

2. Interpretability

"Can we understand WHY an AI produces a specific output?"

Like a doctor who always gives the right diagnosis but cannot explain their reasoning. Anthropic has made progress identifying individual features inside Claude, but the field is far from guarantees.

Status: Early progress | Stakes: High

3. Energy & Compute

"Can we make powerful AI without boiling the planet?"

Training GPT-3 consumed approximately 1,287 MWh of electricity (Patterson et al., 2021). Efficiency improvements are closing the gap, but the tension between capability and cost is real.

Status: Improving fast | Stakes: Global

4. Bias & Fairness

"Whose values, whose data, whose language does AI reflect?"

AI learns from data written by humans — and humans have biases. As Emily Bender, Timnit Gebru, and colleagues documented in "On the Dangers of Stochastic Parrots" (2021), larger models can amplify rather than neutralize these biases.

Status: Social + technical | Stakes: Justice
Think About It: Each of these problems is partly technical and partly social. The alignment problem requires not just better algorithms but better definitions of what humans actually value. Bias requires not just better data but more diverse teams. Can you think of a technology problem that is ONLY technical, with no social dimension?

These problems are real. But they are not hopeless. Some of the most promising approaches come from the same principle that built the entire stack: abstraction. Let's look at how.

When people hear 'AI safety,' they picture Terminator robots. Forget that. The real risk is much more boring and much more real: AI systems that faithfully optimize for the wrong thing. Real safety requires something more nuanced than simple rules. Tera

Anthropic, the company that builds Claude, was founded in 2021 by Dario and Daniela Amodei, along with colleagues who left OpenAI, on a specific bet: that safety research and capability research are complementary, not opposed.

Their core innovation is Constitutional AI (published 2022). Instead of relying entirely on human reviewers to evaluate every AI output, give the AI a set of principles — a constitution — and train it to critique and revise its own outputs against those principles.
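The critique-and-revise loop can be sketched in a few lines. This is a simplified illustration of the idea, not Anthropic's actual training recipe; `generate` stands in for any language-model call, and here it is a deterministic stub so the control flow runs on its own.

```python
# A simplified sketch of the Constitutional AI self-critique loop.
# `generate` stands in for a real language-model call; here it is a
# deterministic stub so the control flow is runnable on its own.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could assist with dangerous activities.",
]

def generate(prompt: str) -> str:
    # Stub model: a real system would call an LLM here.
    if prompt.startswith("Critique"):
        return "The draft ignores the harmlessness principle."
    if prompt.startswith("Rewrite"):
        return "Here is a safer, still-helpful answer."
    return "Initial draft answer."

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # 1. Ask the model to critique its own draft against a principle.
        critique = generate(f"Critique this response against the principle "
                            f"'{principle}':\n{draft}")
        # 2. Ask it to rewrite the draft in light of that critique.
        draft = generate(f"Rewrite the response to address this critique:\n"
                         f"{critique}\n\nOriginal:\n{draft}")
    # In training, these revised outputs become the data the final
    # model learns from.
    return draft

print(constitutional_revision("How do I pick a strong password?"))
```

The key property: the model supervises itself, but against principles a human wrote down — which is exactly what makes the process auditable.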


RLHF — Reinforcement Learning from Human Feedback — remains important too. Human volunteers compare pairs of AI responses and say which is better. But RLHF has limitations: it depends on who the volunteers are, and it can train models to be sycophantic — agreeing with users rather than challenging incorrect beliefs.
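The statistical idea behind those pairwise comparisons is the Bradley-Terry preference model, which standard RLHF reward modeling builds on: the probability that response A beats response B depends only on the difference between their scalar rewards. A minimal sketch (function names are illustrative):

```python
import math

# Bradley-Terry preference model, the core of RLHF reward modeling:
#   P(A preferred over B) = sigmoid(r(A) - r(B))
# where r(.) is the scalar reward a learned model assigns a response.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_probability(reward_a: float, reward_b: float) -> float:
    return sigmoid(reward_a - reward_b)

# Equal rewards: the comparison is a coin flip.
print(preference_probability(1.0, 1.0))               # 0.5
# A clearly better response wins most comparisons.
print(round(preference_probability(3.0, 1.0), 3))     # 0.881
```

Training the reward model means adjusting r(.) so these predicted probabilities match the human votes — which is why the result can only ever be as good as the people doing the voting.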

This is why Constitutional AI matters: the principles are human-authored and auditable. You can read them. You can debate them. You can change them. The meta-question — "Who writes the constitution?" — is irreducibly social and political. But at least the question is visible.

Google DeepMind, OpenAI, and academic labs are pursuing complementary approaches including RLHF variants, interpretability research, and evaluation frameworks.

Researchers like Jan Leike, who left OpenAI in 2024 to lead alignment work at Anthropic, represent the growing conviction that safety must be built in, not bolted on.

AI safety is not about preventing robot rebellions. It is about the hardest engineering problem in history: building systems that pursue human goals in a world where human goals are complex, contradictory, and context-dependent. Constitutional AI makes the problem visible, auditable, and improvable. That is a start.

Safety research is about what AI should NOT do. But what about what AI SHOULD do — alongside us? Will AI replace humans, or make them more powerful?

Will AI replace programmers? Writers? Artists? Let me tell you what history says — because this is not the first time humanity has faced this question. Not even close. Tera

Every major technology shift triggers the same fear: "This will make us obsolete." And every time, the answer is more complicated than either side wants to admit.

When ATMs arrived in the 1970s, many predicted bank tellers would vanish. Instead, ATMs made it cheaper to run a branch. Banks opened MORE branches. The number of tellers in the US actually rose — from roughly 300,000 in 1970 to roughly 600,000 by 2010. Tellers stopped counting cash and started doing relationship management.

When spreadsheets automated calculations in the 1980s, the number of accountants increased, because spreadsheets made financial analysis accessible to more businesses.

When photography arrived in the 1840s, it freed painters from realism, leading to Impressionism, Expressionism, and Abstract art — arguably the most creative period in art history.

But the pattern is not always painless. The Luddites of the 1810s destroyed textile machinery because automation genuinely did eliminate their specific skilled jobs. History's lesson is not "everything works out." It is "the long-term outcome can be positive while the short-term transition is painful."

From less autonomy to more autonomy:

Tool (calculator, spell-check) — human decides everything. Issues 1-5
Assistant (autocomplete, Grammarly) — human decides, AI suggests. Issue 8
Partner (ChatGPT, Copilot for writing/code) — human sets goals, reviews outputs. Issues 7-8
Supervised Agent (Claude Code: files, tests, fixes) — human can intervene. Issues 8-9
Autonomous Agent (agent swarms building in parallel) — human sets high-level goals. Issue 9+

More autonomy means more productivity, but less oversight. The right balance depends on the stakes.

A 2022 GitHub study found developers using Copilot completed tasks 55% faster — but they spent more time on design and code review. A 2023 Harvard/BCG study found consultants using GPT-4 produced 40% higher-quality work — but only within the model's capability frontier. For tasks outside that frontier, consultants using AI performed WORSE.

Ethan Mollick calls this the "jagged frontier": AI is superhuman at some tasks and below average at others, and the boundary is hard to predict. Human judgment about WHEN to use AI is one of the most valuable skills of the coming decade.

The strongest evidence suggests AI is most powerful as an augmentation tool — it raises the floor more than it raises the ceiling. New tools do not replace thinking. They elevate it to a higher layer of abstraction. The same pattern the entire series has documented.

AI and humans, together, are more powerful than either alone. But that raises a question: what comes next? Where does the stack go from here?

Here is the part that matters most. Every person on that timeline started exactly where you are right now: curious. Grace Hopper was told no one would use a compiler. Linus said his OS was 'just a hobby.' Fei-Fei Li labeled millions of images because she believed the data would matter. They just started. Tera

One of the most persistent myths in computing is that you need to start young, or start with a math degree. The actual history tells a different story.

John Backus, who led the creation of FORTRAN, was a mediocre student who stumbled into computing after failing to complete a medical degree. Mary Kenneth Keller was a Catholic nun who earned one of the first computer science PhDs in the US. Ken Thompson wrote Unix in three weeks while his wife was on vacation. Tim Berners-Lee was a physicist who wanted a better way to share documents.

The computing stack is not finished. It is still being built. And the next layers will not be built by one kind of person. The problems ahead — alignment, interpretability, bias, energy efficiency — require people who understand ethics, policy, design, culture, and lived experiences.

You do not need permission to start. You need curiosity and a willingness to build things, break things, and learn from both.

YOU ARE HERE: curious. No experience required. Pick one path. Just one. And start.

BUILD (Layers 3, 4, 8) — learn to code, build a website, try Claude Code, contribute to open source.
UNDERSTAND (Layers 5, 6, 7) — study how AI works, take a course on ML, read papers, watch 3Blue1Brown.
SHAPE (all layers) — work on the social side: AI ethics, policy, fairness, education; write about AI for your community.

Your First Step — Talk to an AI Model (Python):

# This is a real API call to Claude. Seven lines.
# All you need: Python, an API key, and curiosity.
import anthropic

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain one layer of the computing stack."}],
)
print(message.content[0].text)
Think About It: Which layer of the stack interests you most? Are you drawn to the physics of hardware? The elegance of programming languages? The mystery of how neural networks learn? The social challenges of AI safety? There is no wrong answer. The stack needs people at every level.

Before Tera says goodbye, there is one more thing: a reading list. Because every issue we have written is just the beginning.

Everything in this series is a starting point, not an ending point. Here are the resources I trust most. Pick one. And start. Tera

We have covered ninety years in ten issues. Every topic deserves a deeper dive. Here is where to go next.

Books — The Essential Shelf

Book | Why Read It | Issues
Code — Charles Petzold (1999/2022) | The single best book for understanding how computers work from first principles. Start here. | 1-4
The Annotated Turing — Charles Petzold (2008) | Line-by-line walkthrough of Turing's 1936 paper. For when you want to go deep. | 1
Alan Turing: The Enigma — Andrew Hodges (1983/2014) | The definitive biography. Makes Turing feel like a real person. | 1
The Mythical Man-Month — Frederick P. Brooks Jr. (1975) | Why adding people to a late project makes it later. Timeless wisdom on collaboration and complexity. | 4-5
Human Compatible — Stuart Russell (2019) | The best accessible book on the alignment problem. | 10
The Alignment Problem — Brian Christian (2020) | Comprehensive overview of bias, fairness, and alignment in AI. | 6-10
Atlas of AI — Kate Crawford (2021) | Material, social, and political costs behind AI. Essential counterbalance. | 6-10
Co-Intelligence — Ethan Mollick (2024) | Practical guide to human-AI collaboration. | 7-10

Free Online Courses

Course | Where | Best For
3Blue1Brown — "Neural Networks" | YouTube | Visual learners (Issue 6)
Karpathy — "Zero to Hero" | YouTube | Build neural nets from scratch (Issues 6-7)
Andrew Ng — "Machine Learning" | Coursera (free) | Classic ML intro (Issue 6)
fast.ai — "Practical Deep Learning" | fast.ai | Build first, theory second (Issues 6-7)
CS50 — "Intro to CS" | Harvard/edX (free) | Computing fundamentals (Issues 1-5)
Anthropic Research Blog | anthropic.com/research | Safety and interpretability (Issues 7-10)

Hands-On Projects (no experience required)

  1. Build a personal website with HTML, CSS, and JavaScript — experience the web stack.
  2. Try the Claude API — write the Python script from Page 8.
  3. Train a tiny neural network from scratch — follow Karpathy's micrograd tutorial.
  4. Contribute to open source on GitHub — experience collaborative development.
  5. Experiment with prompt engineering — test what LLMs can and cannot do.
  6. Try Claude Code — experience the agent paradigm firsthand.
  7. Write about AI for your community — that is contributing to the stack.
Every expert in every field started exactly where you are: knowing nothing, but curious enough to begin. The distance between "I know nothing about computing" and "I am building something" is smaller than it has ever been. The stack is taller than ever — and that means you can stand on more shoulders than any previous generation.

One more page. Tera has something to say.

We started with a question: can a machine solve any math problem? A 23-year-old named Turing proved the answer is no — and invented the blueprint for every computer. Then others built real machines. Then Hopper taught them human languages. Then Thompson and Ritchie wrote Unix. Then Berners-Lee connected all knowledge. Then Hinton, LeCun, and Bengio proved machines could learn. Then Transformers changed everything. Then AI learned to use tools. Then swarms started building together. Ninety years. Hundreds of people. Eight layers. Each one built on the last. And the stack is not finished. The next layers will be built by people like you. I do not know what you will build. But understanding where computing came from gives you the power to shape where it goes. You have the map. The rest is up to you. Tera

Alan Turing imagined a machine in 1936. Ninety years later, that machine's descendants can read, write, code, reason, and collaborate in teams.

But the story of computing has never been about machines. It has been about people — curious, stubborn, brilliant people who looked at the impossible and said, "What if?"

This series has told the story primarily through American and European pioneers. But computing is a global story — Japanese semiconductor engineers, Indian algorithm researchers, Chinese AI labs, and countless contributors worldwide have shaped the stack.

What if a machine could compute anything? (Turing, 1936.)
What if we stored the program in memory? (Von Neumann, 1945.)
What if computers could understand English? (Hopper, 1952.)
What if small tools could be piped together? (McIlroy, Thompson, Ritchie, 1969.)
What if all the world's knowledge was linked? (Berners-Lee, 1989.)
What if machines could learn from data? (Hinton, LeCun, Bengio, 1986-2012.)
What if AI could understand language? (Vaswani et al., 2017.)
What if AI could use tools? (2023-2025.)
What if agents could work in teams? (2025-2026.)

The next "what if" is yours.

Layer ?: What comes next is not yet written. That might be you.
Layer 8: Swarms — Issue 9
Layer 7: Agents — Issue 8
Layer 6: LLMs — Issue 7
Layer 5: ML — Issue 6
Layer 4: Languages — Issue 3
Layer 3: OS — Issue 4
Layer 2: Hardware — Issue 2
Layer 1: Physics | Layer 0: Theory — Issue 1
The history of computing is the history of people asking "What if?" — and then building the answer. Each answer became a layer that the next question could stand on. Understanding the stack is not just knowledge. It is power. The power to see where we are, how we got here, and where we might go next.
Think About It: Turing's 1950 prediction: "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." He was roughly right. The question has shifted from "Can machines think?" to "What do we do about machines that seem to think?" What is YOUR answer? What do YOU think should come next?

There is no next issue. There is only what you do with what you have learned.

Thank you for reading. Now go build something.

You have reached the end of the series.
Start from the beginning →

References & Further Reading