The Thinking Behind the Work

Why We Build

What this project believes, why those beliefs shape every page, and where our own argument breaks down.

Abstraction is computing's superpower. Each layer lets the next generation build higher without understanding what's below. But that convenience has a cost: every layer becomes a black box to the people standing on it. This series tries to open those boxes. Not to make everyone a programmer, but to help people feel literate in the systems that shape their lives.

Understanding where computing came from — the struggles, the false starts, the breakthroughs that changed everything — is how you develop the judgment to steer where it goes next.

But understanding the past isn't enough. We also try to imagine the future. To draw it. To make specific futures vivid enough that someone looks at one and thinks: I'd like to help build that. We know that choosing which futures to make vivid is itself an act of persuasion, and we try to take that seriously.

The problem we see

There is a generation that has every reason to be skeptical of technology. They see tech companies that put profit above wellbeing. They see politics that feels broken. They see a world that promises innovation and delivers surveillance, addiction, and inequality. The loudest voices tend toward extremes — existential doom or breathless hype. The people doing careful, honest work — researchers, educators, engineers, ethicists — are harder to hear over the noise.

This creates a cycle: doom leads to apathy, apathy leads to disengagement, and disengagement becomes a self-fulfilling prophecy. If the people who care most about the future give up on shaping it, only the people who care least about consequences will be left building it.

We think the antidote to doom isn't optimism. It's agency. Not the loud, disruptive kind. The quiet kind — the feeling that you could make something that helps, even if it's small, even if it's imperfect. Every breakthrough in this series came from someone who simply… started.

"In many ways, the work of a critic is easy. We risk very little yet enjoy a position over those who offer up their work and their selves to our judgment. We thrive on negative criticism, which is fun to write and to read. But the bitter truth we critics must face is that, in the grand scheme of things, the average piece of junk is probably more meaningful than our criticism designating it so. But there are times when a critic truly risks something, and that is in the discovery and defense of the new. The world is often unkind to new talent, new creations. The new needs friends." (Anton Ego, Ratatouille)

We think about this quote a lot. It's easy to be skeptical of AI — and much of that skepticism is earned. But skepticism alone doesn't build anything. It doesn't teach anyone. It doesn't imagine a future worth wanting. Somewhere between uncritical hype and reflexive dismissal, there's a quieter position: this is new, it's imperfect, and it needs people who care enough to shape it well.

What we believe

"They were four guys who kept each other's negative tendencies in check; they balanced each other. And the total was greater than the sum of the parts." (Steve Jobs, on the Beatles)

Jobs was describing four humans. We're borrowing the idea for something different: human + AI collaboration. The analogy is imperfect — an AI isn't a creative peer the way Lennon was to McCartney. But the core idea resonates: collaboration that produces something neither party could alone, where each compensates for the other's weaknesses. This project is an experiment in that idea. One person working with AI agents, trying to make something that would have been impossible alone. The result is imperfect — but it exists.

Jobs also said Apple stood at the intersection of technology and the liberal arts. Computing isn't just engineering — it's philosophy, linguistics, art, ethics, storytelling. The best builders combine technical skill with human wisdom. The series embodies that intersection, or at least aspires to.

And we're drawn to what Ezra Klein and Derek Thompson describe in Abundance: that the future we want requires building more of what we need. Not instead of critique — alongside it. We hope to encourage people who look at a problem and think: I could build something that helps. Criticism, observation, and reflection are part of building well. The goal isn't mindless making — it's thoughtful making.

On jobs, fear, and who benefits

The most emotionally charged question about AI is about jobs. And the fear is real — for specific people, in specific industries, right now. Dismissing that fear sounds exactly like what people resent: comfortable elites telling uncomfortable workers that progress is good for them.

We should be honest: we're writing this from a position of access. We have the tools, the skills, and the time to explore AI creatively. That's a privilege, and it means our perspective on job displacement is necessarily incomplete. We don't experience the fear the way a factory worker or a junior copywriter does.

What we can offer is a historical pattern. The series tells the story of displacement and transformation five times. Human computers — many of them women, especially during and after the Second World War — literally held the job title "computer." The machine eliminated that job, but many of those computers became the first programmers. Typists gave way to PCs, but the secretary became the office manager with a spreadsheet. Travel agents lost out to the web, but every small business gained global reach.

The pattern isn't "jobs disappear and new ones appear." The pattern is: the tool goes from specialists to everyone, and when everyone has it, they do things nobody predicted. The PC didn't stay in IBM's mainframe room. The internet didn't stay in DARPA's lab. AI is following the same arc — and faster. A kid in a small town has the same tools as a PhD at Stanford.

We're not naive about this. The pattern also includes real pain during transitions — people who couldn't retrain, industries that collapsed faster than replacements emerged, communities that were left behind. Saying "it worked out eventually" is cold comfort to someone living through the disruption. The series tries to be honest about both sides: the cost and the transformation.

The question we hope readers carry away isn't "will AI take my job?" It's "what could I do if I had an AI?" It's a small shift in framing, but it changes what feels possible.

On gratitude

For all the real problems in the world — and there are many — it's worth pausing on what we have. Someone in 1950 would have been staggered by what we casually ignore: a library in your pocket, medical imaging that catches cancers earlier, the ability to talk face to face with someone on the other side of the planet for free.

We don't think gratitude comes from being told "be grateful." It comes from walking through the struggle — researchers spending careers in card catalogs, room-sized computers with less power than a watch, decades of work to connect two machines across a room. By the time readers arrive at today, the appreciation isn't instructed. It's felt. The contrast does the work.

And computing is not where building started. It's the latest chapter in humanity's oldest story — the same impulse that led someone to scratch a bison on a cave wall forty thousand years ago. The tradition of making tools that extend what we can do. We hope readers feel part of that lineage, not separated from it.

On building's shadow side

We should say plainly what the rest of this page might gloss over: building is not inherently virtuous. The surveillance systems that erode privacy were built. The algorithmic feeds that optimize for addiction were built. The predatory lending platforms that targeted vulnerable communities were built. Every one of them was someone's startup, someone's "disruption," someone's proof of concept.

So when we talk about builders, we mean something specific: people who ask what they're building and for whom. Not just "can I build this?" but "should I?" The series celebrates people who answered both questions well — Berners-Lee, who gave the web away free; Grace Hopper, who spent decades making computing accessible to non-specialists; Turing, who laid the theoretical foundation that made everything possible. They weren't saints. But they built for others, not just for themselves.

The series also covers computing's darker chapters — and we know our telling has blind spots. Ninety years of history told in ten issues inevitably compresses, simplifies, and leaves people out. The contributions of Black mathematicians, South Asian and East Asian engineers, and perspectives from the Global South deserve more space than we give them. We're trying to do better, and we know we haven't arrived.

Where this breaks down

This page argues for showing over telling — and then it tells you things for several thousand words. The comic is supposed to make its case through stories, not essays. This page is the essay behind the stories, and we're aware of the tension.

We write about the dangers of comfortable elites dismissing job fears, and then we — people with access to AI tools and the skills to use them — offer historical patterns as reassurance. We know that's exactly the move we criticized. We don't have a clean resolution for this, except to keep asking whether our telling is honest enough.

We frame building as the answer, but not everyone's circumstances allow them to build. People dealing with burnout, caregiving, disability, poverty, or simply different priorities aren't failing by choosing not to make things. The invitation to build is meant to expand possibility, not to judge anyone who doesn't take it up.

We chose to make these tensions visible rather than pretend they don't exist. The project is a work in progress. So is the thinking behind it.

Our principles

  1. Show, don't argue. The case for building is made by showing a specific person solving a specific problem. Trust the reader to draw their own conclusion.
  2. Acknowledge real costs, earn the right to hope. Every computing breakthrough displaced someone. Being honest about the transition is what earns credibility to show what came next.
  3. The tool always goes to everyone. Every wave of computing started as an elite tool and became everyone's tool. Mainframes to PCs to smartphones to AI. This is the pattern of the series.
  4. Ground futures in specific, relatable people. Not elites. A mechanic diagnosing a car. A baker optimizing recipes. A kid in a rural town building an app. The more specific and non-elite, the more powerful.
  5. Ask "what could you build?" not "what will AI do?" Frame AI as a tool that unlocks capability. The question is always about the reader, never about the technology in isolation.
  6. Liberal arts + technology. The best builders combine technical skill with human wisdom. Computing is philosophy, linguistics, art, ethics, and storytelling.
  7. Build for the greater good — and ask what "good" means. Celebrate those who built for others. But also ask: who benefits? Who's left out? Building without that question is how you get surveillance capitalism.
  8. Never partisan, always embody values. Nobody argues about whether the transistor was good. Embody curiosity, generosity, and truth-seeking through characters and their choices. When a topic is politically charged, show what people did and let the reader judge.