Somewhere beneath your feet, right now, electrons are moving. They are flowing through transistors, billions of them, switching on and off billions of times per second.
Above those transistors, an operating system is juggling thousands of tasks. Above the OS, a programming language is translating human ideas into machine instructions. Above that, a neural network is predicting the next word you might want to read. And above that neural network, an AI agent might be writing code, running tests, and fixing its own mistakes.
Every layer depends on the one below it. Every layer hides the complexity beneath it and presents a simpler interface to the layer above. This idea, abstraction, is the single most powerful concept in the history of computing.
Every one of those layers was somebody's life work. Let's meet them all, one more time.
Here is the computing abstraction stack: every major layer of technology between the electrons in your device and the words on your screen, laid out from bottom to top.
Each layer hides the complexity below it and exposes a simpler interface above it. A programmer writing Python does not need to understand quantum tunneling. An AI agent planning a coding task does not need to think about memory allocation. This is not a convenience; it is the mechanism by which computing scales.
As the computer scientist Edsger Dijkstra once said: "The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise."
Or, in a formulation often attributed to David Wheeler: "All problems in computer science can be solved by another level of indirection." The computing stack is living proof.
But what IS abstraction, really? And why is it so powerful? Let's make sure we truly understand it.
Every layer in the computing stack does the same thing the gas pedal does: it hides overwhelming complexity and exposes a simple interface.
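The same move can be shown in a few lines of code. This is a toy sketch, with invented function names and a pretend "disk," in which each layer hides the one below it:

```python
# A toy illustration of abstraction layers: each function exposes a
# simple interface and hides the messier work of the layer below it.

def hardware_read(sector: int) -> bytes:
    """Lowest layer: pretend to read raw bytes from a disk sector."""
    return b"hello, world"          # stand-in for real device I/O

def fs_read(path: str) -> bytes:
    """Filesystem layer: maps a human-friendly path to sectors."""
    sector = hash(path) % 1024      # toy 'allocation table' lookup
    return hardware_read(sector)

def read_text(path: str) -> str:
    """Top layer: what the programmer actually calls."""
    return fs_read(path).decode("utf-8")

print(read_text("/notes/hello.txt"))  # the caller never sees sectors or bytes
```

The caller of `read_text` thinks in paths and strings; sectors and bytes exist, but two layers down, out of sight.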
When Grace Hopper built the first compiler in 1952 (Issue 3), she created an abstraction. Programmers no longer needed to think in binary. When Thompson and Ritchie built Unix in 1969 (Issue 4), they created an abstraction. Software no longer needed to know the specific hardware. When Vaswani and colleagues published the Transformer in 2017 (Issue 7), they created an abstraction. Users no longer needed to write code at all.
Each abstraction is an act of trust. You trust that the layer below you works correctly. And mostly, that trust is justified. But not always.
In 2002, Joel Spolsky coined the term "the Law of Leaky Abstractions": all non-trivial abstractions, to some degree, are leaky. Sometimes the layers below you bleed through. A performance problem in your AI agent traces back to GPU architecture. A security vulnerability traces back to memory management in C. Grace Hopper used to hand out 11.8-inch pieces of wire to admirals, representing the distance light travels in one nanosecond, to explain why satellite communication had latency. The physical world always leaks through.
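Hopper's wire is a number you can check yourself, using only the defined speed of light and the defined length of an inch:

```python
# Hopper's 'nanosecond': how far light travels in one billionth of a second.
SPEED_OF_LIGHT = 299_792_458        # metres per second (exact, by definition)
INCH = 0.0254                       # metres (exact, by definition)

distance_m = SPEED_OF_LIGHT * 1e-9  # metres travelled in one nanosecond
distance_in = distance_m / INCH

print(f"{distance_in:.1f} inches")  # prints 11.8 inches: Hopper's wire
```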
Understanding the full stack does not mean you need to be an expert in every layer. It means you have diagnostic superpowers. When something breaks, you know where to look.
Now let's walk through the people who built the stack, every key figure from all nine previous issues, and see how their work connects into a single story.
Here they are. The builders of the computing stack, in the order they appear in our story. Notice how each generation inherits what the previous generation built and then extends it one layer higher. (The stack has more than eight layers; we have simplified. Networking, databases, and security each deserve their own layer.)
Notice, too, who is missing from the fame. The six women who programmed ENIAC were erased from photographs and forgotten for decades. Tommy Flowers spent a thousand pounds of his own money building Colossus, then was silenced by the Official Secrets Act. Konrad Zuse built the Z3 in his parents' living room in wartime Berlin, and it was destroyed in a bombing raid.
History is not always fair. But their work endured.
We have seen who built the stack and how the layers connect. But what is still missing? What problems remain unsolved?
For all its achievements, computing in 2026 faces a set of genuinely unsolved problems. These are not minor engineering bugs. They are deep, structural challenges that will shape how AI and computing evolve over the coming decades.
"Will AI do what we MEAN, not just what we SAY?"
Stuart Russell's "King Midas" analogy: Midas got exactly what he asked for, and it was a disaster. The danger is not rebellion. It is the competent pursuit of the wrong objective.
"Can we understand WHY an AI produces a specific output?"
Like a doctor who always gives the right diagnosis but cannot explain their reasoning. Anthropic has made progress identifying individual features inside Claude, but the field is far from guarantees.
"Can we make powerful AI without boiling the planet?"
Training GPT-3 consumed approximately 1,287 MWh of electricity (Patterson et al., 2021). Efficiency improvements are closing the gap, but the tension between capability and cost is real.
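To make 1,287 MWh concrete, here is a back-of-the-envelope comparison. The household figure is an assumption of roughly 10,700 kWh per year for an average US home; it is not a number from the Patterson paper, and it varies widely by region:

```python
# Put 1,287 MWh of training energy in household terms.
TRAINING_MWH = 1_287
HOUSEHOLD_KWH_PER_YEAR = 10_700     # assumed US average; varies widely

homes = TRAINING_MWH * 1_000 / HOUSEHOLD_KWH_PER_YEAR
print(f"~{homes:.0f} US homes powered for a year")  # prints ~120
```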
"Whose values, whose data, whose language does AI reflect?"
AI learns from data written by humans, and humans have biases. As Emily Bender, Timnit Gebru, and colleagues documented in "On the Dangers of Stochastic Parrots" (2021), larger models can amplify rather than neutralize these biases.
These problems are real. But they are not hopeless. Some of the most promising approaches come from the same principle that built the entire stack: abstraction. Let's look at how.
Anthropic, the company that builds Claude, was founded in 2021 by Dario and Daniela Amodei, along with colleagues who left OpenAI, on a specific bet: that safety research and capability research are complementary, not opposed.
Their core innovation is Constitutional AI (published 2022). Instead of relying entirely on human reviewers to evaluate every AI output, give the AI a set of principles (a constitution) and train it to critique and revise its own outputs against those principles.
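The shape of that loop can be sketched in a few lines. This is a conceptual illustration only, not Anthropic's training code: `model` stands in for any text-generation function, and the two principles are invented examples:

```python
# Conceptual sketch of a Constitutional AI critique-and-revise loop.
# `model` is any function mapping a prompt string to a response string.

CONSTITUTION = [
    "Choose the response that is most helpful and honest.",
    "Avoid responses that could encourage harm.",
]

def critique_and_revise(model, prompt: str) -> str:
    draft = model(prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = model(
            f"Critique this response against the principle "
            f"'{principle}':\n{draft}"
        )
        # ...then to revise the draft in light of that critique.
        draft = model(
            f"Revise the response to address this critique:\n"
            f"{critique}\nOriginal response:\n{draft}"
        )
    return draft  # revised drafts become training data for the next model
```

With a real model plugged in, each pass nudges the draft toward the constitution; the human work shifts from rating individual outputs to writing and debating the principles themselves.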
RLHF (Reinforcement Learning from Human Feedback) remains important too. Human raters compare pairs of AI responses and say which is better. But RLHF has limitations: it depends on who the raters are, and it can train models to be sycophantic, agreeing with users rather than challenging incorrect beliefs.
This is why Constitutional AI matters: the principles are human-authored and auditable. You can read them. You can debate them. You can change them. The meta-question of who writes the constitution is irreducibly social and political. But at least the question is visible.
Google DeepMind, OpenAI, and academic labs are pursuing complementary approaches including RLHF variants, interpretability research, and evaluation frameworks.
Researchers like Jan Leike, who left OpenAI in 2024 to lead alignment work at Anthropic, represent the growing conviction that safety must be built in, not bolted on.
Safety research is about what AI should NOT do. But what about what AI SHOULD do, alongside us? Will AI replace humans, or make them more powerful?
Every major technology shift triggers the same fear: "This will make us obsolete." And every time, the answer is more complicated than either side wants to admit.
When ATMs arrived in the 1970s, many predicted bank tellers would vanish. Instead, ATMs made it cheaper to run a branch. Banks opened MORE branches. The number of tellers in the US actually rose, from roughly 300,000 in 1970 to roughly 600,000 by 2010. Tellers stopped counting cash and started doing relationship management.
When spreadsheets automated calculations in the 1980s, the number of accountants increased, because spreadsheets made financial analysis accessible to more businesses.
When photography arrived in the 1840s, it freed painters from realism, leading to Impressionism, Expressionism, and Abstract art โ arguably the most creative period in art history.
But the pattern is not always painless. The Luddites of the 1810s destroyed textile machinery because automation genuinely did eliminate their specific skilled jobs. History's lesson is not "everything works out." It is "the long-term outcome can be positive while the short-term transition is painful."
A 2022 GitHub study found developers using Copilot completed tasks 55% faster, though they spent more time on design and code review. A 2023 Harvard/BCG study found consultants using GPT-4 produced 40% higher-quality work, but only within the model's capability frontier. For tasks outside that frontier, consultants using AI performed WORSE.
Ethan Mollick calls this the "jagged frontier": AI is superhuman at some tasks and below average at others, and the boundary is hard to predict. Human judgment about WHEN to use AI is one of the most valuable skills of the coming decade.
AI and humans, together, are more powerful than either alone. But that raises a question: what comes next? Where does the stack go from here?
One of the most persistent myths in computing is that you need to start young, or start with a math degree. The actual history tells a different story.
John Backus, who led the creation of FORTRAN, was a mediocre student who stumbled into computing after failing to complete a medical degree. Mary Kenneth Keller was a Catholic nun who earned one of the first computer science PhDs in the US. Ken Thompson wrote Unix in three weeks while his wife was on vacation. Tim Berners-Lee was a physicist who wanted a better way to share documents.
The computing stack is not finished. It is still being built. And the next layers will not be built by one kind of person. The problems ahead, from alignment and interpretability to bias and energy efficiency, require people who understand ethics, policy, design, culture, and lived experience.
You do not need permission to start. You need curiosity and a willingness to build things, break things, and learn from both.
Your First Step: Talk to an AI Model (Python)
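Here is one way that first step can look, assuming the `anthropic` Python SDK (`pip install anthropic`) and an `ANTHROPIC_API_KEY` environment variable; the model name is a placeholder to check against the current documentation:

```python
# A first conversation with an AI model from Python.
# Assumes: `pip install anthropic` and ANTHROPIC_API_KEY set in your shell.
import os

def build_messages(prompt: str) -> list[dict]:
    """Shape a single-turn conversation the way chat APIs expect it."""
    return [{"role": "user", "content": prompt}]

def ask(prompt: str, model: str = "claude-sonnet-4-20250514") -> str:
    """Send one question and return the model's text reply."""
    import anthropic  # imported lazily so the file loads without the SDK
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    response = client.messages.create(
        model=model,                # placeholder; check current model names
        max_tokens=300,
        messages=build_messages(prompt),
    )
    return response.content[0].text

if __name__ == "__main__" and os.environ.get("ANTHROPIC_API_KEY"):
    print(ask("In one sentence, what is abstraction in computing?"))
```

Swap the prompt for anything you are curious about. The whole stack described in this issue, transistors to transformers, runs beneath that one function call.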
Before Tera says goodbye, there is one more thing: a reading list. Because every issue we have written is just the beginning.
We have covered ninety years in ten issues. Every topic deserves a deeper dive. Here is where to go next.
| Book | Why Read It | Issues |
|---|---|---|
| Code (Charles Petzold, 1999/2022) | The single best book for understanding how computers work from first principles. Start here. | 1-4 |
| The Annotated Turing (Charles Petzold, 2008) | Line-by-line walkthrough of Turing's 1936 paper. For when you want to go deep. | 1 |
| Alan Turing: The Enigma (Andrew Hodges, 1983/2014) | The definitive biography. Makes Turing feel like a real person. | 1 |
| The Mythical Man-Month (Frederick P. Brooks Jr., 1975) | Why adding people to a late project makes it later. Timeless wisdom on collaboration and complexity. | 4-5 |
| Human Compatible (Stuart Russell, 2019) | The best accessible book on the alignment problem. | 10 |
| The Alignment Problem (Brian Christian, 2020) | Comprehensive overview of bias, fairness, and alignment in AI. | 6-10 |
| Atlas of AI (Kate Crawford, 2021) | Material, social, and political costs behind AI. Essential counterbalance. | 6-10 |
| Co-Intelligence (Ethan Mollick, 2024) | Practical guide to human-AI collaboration. | 7-10 |
| Course | Where | Best For |
|---|---|---|
| 3Blue1Brown, "Neural Networks" | YouTube | Visual learners (Issue 6) |
| Karpathy, "Zero to Hero" | YouTube | Build neural nets from scratch (Issues 6-7) |
| Andrew Ng, "Machine Learning" | Coursera (free) | Classic ML intro (Issue 6) |
| fast.ai, "Practical Deep Learning" | fast.ai | Build first, theory second (Issues 6-7) |
| CS50, "Intro to CS" | Harvard/edX (free) | Computing fundamentals (Issues 1-5) |
| Anthropic Research Blog | anthropic.com/research | Safety and interpretability (Issues 7-10) |
One more page. Tera has something to say.
Alan Turing imagined a machine in 1936. Ninety years later, that machine's descendants can read, write, code, reason, and collaborate in teams.
But the story of computing has never been about machines. It has been about people: curious, stubborn, brilliant people who looked at the impossible and said, "What if?"
This series has told the story primarily through American and European pioneers. But computing is a global story โ Japanese semiconductor engineers, Indian algorithm researchers, Chinese AI labs, and countless contributors worldwide have shaped the stack.
What if a machine could compute anything? (Turing, 1936.)
What if we stored the program in memory? (Von Neumann, 1945.)
What if computers could understand English? (Hopper, 1952.)
What if small tools could be piped together? (McIlroy, Thompson, Ritchie, 1969.)
What if all the world's knowledge was linked? (Berners-Lee, 1989.)
What if machines could learn from data? (Hinton, LeCun, Bengio, 1986-2012.)
What if AI could understand language? (Vaswani et al., 2017.)
What if AI could use tools? (2023-2025.)
What if agents could work in teams? (2025-2026.)
The next "what if" is yours.
There is no next issue. There is only what you do with what you have learned.
Thank you for reading. Now go build something.