It was 1969. Humans had just walked on the moon. But back on Earth, the world of computing was fractured.
Every operating system was hand-stitched for one machine, like a suit that only fits one person. Write a program on an IBM mainframe, and it could never run on a DEC minicomputer. Write it on a PDP-7, and it was trapped there forever.
Software was powerful — but it was chained to the hardware it was born on.
That was about to change. In a quiet lab in New Jersey, two programmers were about to build something small, elegant, and radical. An idea — a philosophy that would reshape all of computing. They called it Unix.
Where did this revolution begin? Not in a Silicon Valley garage. In a telephone company’s research lab — the most productive square footage in the history of science.
Bell Telephone Laboratories was the research arm of AT&T, the American telephone monopoly. It operated from 1925 onward, and its output was staggering.
The transistor? Bell Labs, 1947. Information theory — the mathematical foundation of all digital communication? Claude Shannon at Bell Labs, 1948. The laser? Bell Labs. Satellite communications? Bell Labs.
AT&T was a regulated monopoly. It made so much money from telephone service that it could afford to fund pure research with no expectation of immediate profit. Bell Labs management had a radical philosophy: hire brilliant people, give them freedom, and trust that useful things will emerge.
Among those brilliant people was Lorinda Cherry, a programmer in the mathematics group, who would later build critical text-processing tools (including deroff and parts of eqn) that helped Unix spread through Bell Labs’ patent department, one of the earliest examples of Unix proving its value in real work.
In the late 1960s, a corner of this extraordinary environment — Department 1127, the Computing Techniques Research group led by Doug McIlroy — became the birthplace of Unix.
Two of those brilliant people were about to do something nobody asked them to do. One of them had just gotten three weeks of uninterrupted time — because his wife took the baby to visit her parents.
Ken Thompson (born 1943, New Orleans) joined Bell Labs in 1966 with degrees from UC Berkeley. He was a quiet, unassuming hacker — the kind of programmer who would think about a problem for days, then write the solution in a single explosive burst of coding. He was also a passionate chess player who later built a computer that achieved Master-level rating.
Dennis Ritchie (1941–2011, Bronxville, New York) joined Bell Labs in 1967 with a degree in physics and applied mathematics from Harvard. Where Thompson was the rapid-fire builder, Ritchie was the deliberate designer — modest, self-effacing, and someone who expressed himself more clearly in writing than in speech.
Both had worked on Multics, an ambitious time-sharing operating system built jointly by Bell Labs, MIT, and General Electric. But Multics was over-engineered and behind schedule. Bell Labs pulled out in early 1969.
Then fate intervened. In the summer of 1969, Thompson’s wife Bonnie took their infant son to visit her parents in California. Suddenly, Thompson had uninterrupted time and a barely-used PDP-7 minicomputer sitting in the lab.
In roughly three weeks, he wrote an operating system kernel, a shell, an editor, and an assembler. He later described the allocation as “one week, one week, one week.” The system — originally called Unics (a pun on Multics, widely credited to Brian Kernighan) and later respelled Unix — would go on to underpin essentially all of modern computing.
Thompson had built a working system. But it was written in assembly language — locked to one machine. To truly set software free, they needed a new kind of language. And they needed a philosophy.
Doug McIlroy, the head of the department where Unix was born, had been thinking about an idea since the early 1960s. In a 1964 memo, he wrote: “We should have some ways of coupling programs like garden hoses — screw in another segment when it becomes necessary to massage data in another way.”
For nearly a decade, the idea sat waiting. Then in 1973, Thompson implemented it. The pipe — represented by the | symbol — allowed the output of one program to flow directly into the input of another.
The Unix philosophy crystallized around this mechanism. McIlroy articulated it most famously: write programs that do one thing and do it well; write programs to work together; and write programs to handle text streams, because that is a universal interface.
Unix had a philosophy. It had pipes. But it was still written in assembly — still chained to one machine. To break that chain, Dennis Ritchie was building something: a new programming language named after a letter of the alphabet.
Thompson’s B language (1969–70), derived from Martin Richards’s BCPL (Cambridge, 1966), worked on the word-addressed PDP-7. But when Unix moved to the byte-addressed PDP-11, B’s lack of a type system became a problem.
Between 1971 and 1973, Dennis Ritchie transformed B into something new. He added a type system, structures, and direct compilation to machine code. The result was C — named simply because it came after B.
C occupied a unique position. It was high-level enough that humans could read it. But low-level enough to replace assembly language. It gave you pointers — direct addresses into memory, like GPS coordinates to specific locations in the machine. Powerful, yes. But also dangerous.
Then came the breakthrough. In 1973, Thompson and Ritchie rewrote Unix in C. For the first time in history, a production operating system was written in a high-level language. Portability: write once, compile anywhere.
Unix and C had set software free from specific hardware. But computing itself was still locked away in universities and corporations. To reach ordinary people, computers needed to escape the lab — and fit on a desk.
The January 1975 cover of Popular Electronics featured the MITS Altair 8800 — a kit computer based on the Intel 8080 processor, sold for $439. It had no keyboard, no screen, and was programmed by flipping toggle switches. It was nearly useless.
It changed everything.
The Altair ignited a hobbyist firestorm. At the Homebrew Computer Club in Menlo Park, California, engineers and enthusiasts gathered to share designs and swap ideas.
Bill Gates and Paul Allen wrote a BASIC interpreter for the Altair on a Harvard PDP-10, using a homemade emulator — they had never actually touched an Altair. When Allen flew to Albuquerque to demonstrate it, the software ran on real hardware for the first time. It worked. They founded Microsoft in April 1975.
Steve Wozniak designed the Apple I single-handedly and gave away the schematics. His friend Steve Jobs convinced him the designs could be sold. They founded Apple Computer on April 1, 1976.
The Apple II (1977) — with the VisiCalc spreadsheet (1979), the first “killer app” — made personal computing real for businesses.
In February 1976, Gates wrote his famous “Open Letter to Hobbyists,” complaining that hobbyists were copying Altair BASIC without paying. He argued that software was intellectual property deserving compensation. The debate between free sharing and paid software had begun — and it is still going today.
Then IBM arrived. On August 12, 1981, the IBM PC launched with an open architecture and off-the-shelf components. Microsoft supplied MS-DOS — which it had not written itself, but purchased from a small company called Seattle Computer Products and adapted. Crucially, Microsoft retained the right to sell DOS to other manufacturers. This single licensing decision would make Microsoft one of the most valuable companies on Earth.
Computers were personal now. But they were islands — each one isolated, unable to talk to any other. Quietly, in the background, a network had been growing since 1969. And its first message was an accident.
On October 29, 1969, a UCLA graduate student named Charley Kline sat at a terminal and tried to send the word “LOGIN” to a computer at Stanford Research Institute, 350 miles away. He typed “L.” Confirmed. “O.” Confirmed. “G.” The system crashed. The first message sent over ARPANET was “LO.”
A common myth says ARPANET was built to survive nuclear war. It was not. It was built to let researchers share expensive computing resources remotely. But its underlying technology — packet switching, which breaks messages into small pieces that travel independently and reassemble at their destination — did give it inherent resilience.
Vint Cerf (hearing-impaired since childhood, which fueled his early interest in text-based electronic communication) and Bob Kahn solved the problem of incompatible networks. In May 1974, they published the design for TCP, a universal set of rules for breaking messages into packets, addressing them, routing them through any network, and reassembling them; the protocol was later split into the TCP/IP suite. Meanwhile, Elizabeth “Jake” Feinler ran the Network Information Center at SRI, managing ARPANET’s first directory and host naming system, essential infrastructure that made the growing network navigable.
On January 1, 1983 — “flag day” — every computer on ARPANET switched to TCP/IP simultaneously. Many historians call this the true birth of the Internet.
Software was portable. Computers were personal. Networks were growing. But a new threat was emerging: software was becoming a product — locked behind licenses, hidden from the people who used it. One programmer at MIT decided this was a moral crisis.
Richard Stallman (born 1953, New York) had been a programmer at MIT’s Artificial Intelligence Laboratory since 1971. In the AI Lab of the 1970s, software was freely shared. If a program had a bug, anyone could fix it.
Then, around 1980, that culture began to erode. Stallman experienced this through a specific, infuriating incident: a Xerox printer that jammed constantly. He had modified the old printer’s software to notify users. When a newer printer arrived, Xerox refused to share the source code. A professor at Carnegie Mellon who had the code also refused — he had signed a non-disclosure agreement. “It was my first direct encounter with a nondisclosure agreement,” Stallman later said, “and it taught me that nondisclosure agreements have victims.” It was not just a corporation being proprietary. It was the collapse of a community.
On September 27, 1983, Stallman announced the GNU Project (GNU’s Not Unix). His goal: build a complete, free operating system compatible with Unix.
In 1985, he published the GNU Manifesto and founded the Free Software Foundation. He defined software freedom through four freedoms: the freedom to run a program for any purpose; to study how it works and change it; to redistribute copies; and to distribute your modified versions to others.
In 1989, the GNU General Public License (GPL) introduced copyleft — a legal mechanism that requires any modified version of free software to remain free. It used copyright law to achieve the opposite of copyright’s typical purpose.
By the early 1990s, the GNU project had produced essential tools — the GCC compiler, the Emacs editor, the bash shell — but lacked a kernel. That missing piece would arrive in 1991, from a Finnish college student named Linus Torvalds. But that is a story for Issue 5.
Software was being set free. But there was another kind of freedom still missing. For most people, using a computer still meant memorizing obscure commands. What if, instead, you could just point at what you wanted?
Xerox PARC (Palo Alto Research Center), established in 1970, was building the future and barely knew it. By the mid-1970s, PARC researchers had invented the graphical user interface, the desktop metaphor, Ethernet, laser printing, and WYSIWYG text editing. Their Alto computer (1973) integrated all of these innovations.
Xerox tried to commercialize its own research with the Xerox Star (1981), priced at $16,595 — roughly $55,000 in today’s money. It was a commercial failure.
In December 1979, Steve Jobs visited Xerox PARC. Adele Goldberg, a key developer of the Smalltalk programming environment at PARC, was among those present; she reportedly opposed showing Apple the technology. PARC researcher Larry Tesler demonstrated the Alto’s graphical interface. Jobs was electrified. Tesler later recalled that Jobs kept asking why Xerox was not doing anything with such revolutionary technology.
The result was the Apple Macintosh, launched on January 24, 1984, accompanied by the legendary “1984” television commercial directed by Ridley Scott. But Apple did not merely copy PARC — the Mac team added the menu bar, drag-and-drop, and a single-button mouse designed for first-time users.
Microsoft followed with Windows 1.0 in November 1985 — slow and limited. It was not until Windows 3.0 (1990) that Microsoft’s GUI became commercially successful.
The lineage is clear: Douglas Engelbart (whose 1968 “Mother of All Demos” introduced the mouse, windows, and hypertext) → Xerox PARC → Apple → Microsoft. No single entity “invented” the GUI. Each built on what came before.
Step back and look at where we are. In just two decades, software became portable. Computers became personal. Networks began connecting them. Interfaces became visual. And a movement declared that software should be free. But there is one enormous problem still unsolved...
By the end of the 1980s, the computing landscape had been utterly transformed.
Unix and C had proven that software did not have to be trapped on one machine. The Unix philosophy — small tools, piped together, doing one thing well — had shown that the most powerful systems are built from simple, composable parts.
Personal computers had moved computing from the institution to the individual.
The GUI had opened the door for everyone, not just programmers and engineers.
Richard Stallman and the free software movement had declared that the code running on those machines should belong to everyone.
And in the background, ARPANET had quietly evolved into something with the potential to connect all of it.
But in 1989, the vast majority of personal computers were still isolated. Sharing anything meant copying it to a floppy disk and physically carrying it across the room.
It would take a physicist at a European research lab, trying to solve a very mundane problem — sharing documents with his colleagues — to connect the final wire.