In 1989, the internet already existed. It had been around since the 1970s — a network of networks, built on TCP/IP, connecting universities and research labs. You could send email. You could transfer files. But finding information on it was like searching a library where none of the books had covers, the shelves had no labels, and the catalog was in a language you didn’t speak.
The internet was a highway system with no signs, no maps, and no addresses.
Then a British physicist at a Swiss research lab wrote a memo. His boss scribbled “Vague but exciting...” on the cover page. And within five years, the world would be unrecognizable.
This is the story of the 1990s and 2000s — the era when computers stopped being islands and became a single, connected world. An era built not by corporations or governments, but by people who believed the most powerful thing you can do with an idea is give it away.
Who was this physicist? And what exactly did he build on that black cube?
Three Inventions That Changed Everything
Tim Berners-Lee was not trying to build a publishing empire or a social network. He had a practical problem: CERN employed over 10,000 researchers, using dozens of different computer systems, and they could not easily share documents. Information was trapped in silos.
On March 12, 1989, Berners-Lee submitted a proposal titled “Information Management: A Proposal.” By late 1990, working on a NeXT computer, a sleek black cube of a workstation, he had created three inventions that became the foundation of the web: HTML, the language for writing pages; HTTP, the protocol for delivering them; and the URL, the address that locates every page.
The World Wide Web is not the internet. The internet is the highway system — the physical network of cables and protocols (TCP/IP) built in the 1970s and 1980s. The web is what was built on top of it: a system of linked pages, accessible through browsers, using HTML, HTTP, and URLs. The internet carries the data. The web makes it readable.
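The division of labor is easy to see in code. Here is a minimal sketch in Python, using only the standard library; example.com is a generic placeholder address, not a site from this story:

from urllib.request import urlopen

# The three inventions in one round trip:
url = "https://example.com/"                    # URL: the address that names a page
with urlopen(url) as response:                  # HTTP: the protocol that fetches it
    html = response.read().decode("utf-8")      # HTML: the document that comes back

print(html[:60])  # "<!doctype html>..."

One request, three inventions: the URL locates, HTTP transports, HTML describes.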
Berners-Lee built the first website and the first browser. But the web was still a tool for physicists. What turned it into something everyone could use?
The Browser That Opened the Floodgates
The first web browsers showed text and hyperlinks. Images opened in separate windows. Then in January 1993, Marc Andreessen and Eric Bina at the National Center for Supercomputing Applications (NCSA) released Mosaic, the first popular browser to display images seamlessly alongside text.
On April 30, 1993, CERN released the World Wide Web software into the public domain — free, forever. Berners-Lee later reflected: “Had the technology been proprietary, and in my total control, it would probably not have taken off.” In 1994, Berners-Lee founded the World Wide Web Consortium (W3C) at MIT to protect the web’s openness through shared standards.
Andreessen co-founded Netscape in 1994. On August 9, 1995, Netscape went public. Shares priced at $28 opened at $71. A 16-month-old company was worth $2.9 billion. The dot-com boom had begun.
Think About It: Tim Berners-Lee gave away the web for free. He could have patented it, licensed it, controlled it. He chose not to. Do you think the web would have become what it is today if CERN had charged licensing fees?
The web was free. But the software underneath was still controlled by corporations. Then a 21-year-old Finnish student posted: “just a hobby, won’t be big...”
Just a Hobby
Linus Torvalds was born December 28, 1969, in Helsinki, Finland. In January 1991, he bought a 386 PC and started tinkering with MINIX. Frustrated by its limitations, he started writing his own operating system kernel.
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Date: 25 Aug 91 20:57:08 GMT
Hello everybody out there using minix -
I'm doing a (free) operating system (just a hobby, won't
be big and professional like gnu) for 386(486) AT clones.
This has been brewing since april, and is starting to get
ready.
— Linus Torvalds, August 25, 1991
“Just a hobby.” “Won't be big and professional.”
Linux version 0.01 was released September 17, 1991. Developers around the world started contributing. Torvalds had originally released Linux under a license of his own that prohibited commercial use. In February 1992, he switched to the GNU General Public License (GPL), created by Richard Stallman’s Free Software movement. Since 1983, that movement had been building the tools (compiler, shell, utilities) that made Linux usable as a complete system (Issue 4). The GPL guaranteed Linux would remain free and open forever.
The name “Linux” was not Torvalds's idea. He called it “Freax.” Ari Lemmke, who maintained the FTP server, unilaterally named the directory “linux.”
Linux proved that a global community of volunteers, coordinating over the internet, could build software that rivaled — and eventually surpassed — anything produced by the wealthiest corporations on Earth. It was a new way of working.
One student wrote a kernel. Thousands of strangers improved it. But why would people work for free? A hacker and essayist was about to explain.
The Cathedral and the Bazaar
In May 1997, Eric S. Raymond presented “The Cathedral and the Bazaar” at the Linux Kongress — the foundational text of the open-source movement.
His most famous line: “Given enough eyeballs, all bugs are shallow.” He called this “Linus's Law.” If thousands can see the code, every bug is obvious to someone.
In January 1998, Netscape announced it would release its browser’s source code, a decision directly inspired by Raymond’s essay. Mitchell Baker led the Mozilla project that grew from that code, eventually producing Firefox. In February 1998, Raymond and Bruce Perens founded the Open Source Initiative (OSI).
Not everyone was pleased. Richard Stallman, whose Free Software Foundation had championed “free as in freedom” since the 1980s (Issue 4), saw “open source” as a dilution of the freedom principles. Stallman insisted the point was user rights, not just better engineering. That philosophical split persists to this day.
Open source is more than a license — it is a philosophy of development. The radical claim: transparency and collaboration produce better software than secrecy and control. A community garden, tended by thousands, feeds more people than a private farm.
Think About It: Open source means the “recipe” is published, not just the “meal.” What if medicine, architecture, or education worked the same way? Where would openness help, and where might it cause problems?
The web was free. The code was open. People were connecting. And then capitalism noticed.
Irrational Exuberance
Netscape’s explosive IPO sent a signal to Wall Street: the internet was where the money was. On December 5, 1996, Federal Reserve Chairman Alan Greenspan warned of “irrational exuberance” in the markets. Nobody listened. Venture capital flooded into any startup with “.com” in its name. Business plans were optional. Revenue was irrelevant.
Pets.com spent $11.8 million on advertising while generating only $619,000 in revenue. It lasted 268 days from IPO to liquidation. Webvan raised $375 million and went bankrupt. Kozmo.com burned through $280 million delivering snacks by bicycle.
When the bubble burst in March 2000, the NASDAQ began a slide that erased 78% of its value by late 2002. Even Amazon’s stock dropped 93%, from $107 to $7, but the company survived and eventually became one of the most valuable in the world.
The dot-com crash taught a brutal lesson: technology that changes the world and companies that make money are two different things. The infrastructure survived. The speculation did not. And the companies that emerged from the wreckage would reshape civilization.
Two Stanford PhD students had been watching the chaos from a garage in Menlo Park. They had no business plan and a server made of Lego bricks. But they had something nobody else did: a better way to find things.
Organizing the World's Information
Larry Page and Sergey Brin met as Stanford PhD students in 1995. Brin was born in Moscow — his family emigrated from the Soviet Union when he was six, an experience that shaped his views on information freedom. Their research project, originally called “BackRub,” produced PageRank — a ranking algorithm that treated every link as a vote of confidence.
In August 1998, Andy Bechtolsheim, co-founder of Sun Microsystems, wrote a check for $100,000 to “Google Inc.,” a company that did not yet legally exist. The name was a play on “googol,” the number 1 followed by 100 zeros.
Google was incorporated September 4, 1998, in a garage in Menlo Park. The first server was housed in a case made of Lego bricks. Its mission: “to organize the world's information and make it universally accessible and useful.”
Google's breakthrough was not building a search engine — many existed. The breakthrough was realizing that the structure of the web itself contains information about what matters. Links are votes. The web is not just content; it is a vast network of human judgment.
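The idea is simple enough to sketch in a few lines of Python. What follows is a toy power-iteration version, not Google's production algorithm: the four-page link graph is invented for illustration, and 0.85 is the conventional damping factor:

def pagerank(links, damping=0.85, iterations=50):
    # links: dict mapping each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                # start everyone equal
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                          # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:               # every link is a vote
                    new_rank[target] += share
        rank = new_rank
    return rank

# A hypothetical four-page web: everyone links to C, and C links back to A.
toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))

Run it and C outranks everything, not because of anything in its content, but because of who points at it.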
Google organized existing information. But what if you could create an entirely new encyclopedia — one written not by experts behind closed doors, but by everyone?
The Encyclopedia Anyone Can Edit
Jimmy Wales founded Nupedia in March 2000 — a free online encyclopedia with a rigorous seven-step peer review process. In its first year: 21 articles. Then Larry Sanger proposed adding a wiki — a type of website anyone can edit, invented by Ward Cunningham in 1995. On January 15, 2001, Wikipedia launched.
Wikipedia reached 20,000 articles in its first year. By 2003, it had 100,000 English articles. A 2005 Nature study found its accuracy was comparable to Encyclopedia Britannica for scientific topics. Wikipedia remains a nonprofit to this day. Wales is one of the least wealthy people to have created something used by billions.
Wikipedia proved that a crowd of non-experts, given the right tools, could produce a knowledge resource rivaling centuries-old institutions. The economist Friedrich Hayek had argued that useful knowledge is dispersed among individuals and can never be centrally planned — Wikipedia proved him right. It was Raymond’s bazaar model applied to knowledge itself.
The web connected documents. Linux connected developers. Wikipedia connected knowledge. But how do you let thousands of people edit the same code at the same time without chaos?
The Tool That Lets Millions Write Code Together
Git was born in April 2005, when a licensing dispute cut Linux kernel developers off from BitKeeper, the proprietary tool they had relied on; Torvalds wrote the first working version of Git himself in a matter of weeks. Git’s design was radical. Unlike older systems (CVS, SVN), where one central server held the official code, Git is distributed: every developer has a complete copy of the entire repository. You can work offline, branch instantly, and merge at will. Junio Hamano took over as Git’s maintainer in July 2005 and has guided its development ever since, turning Torvalds’s prototype into the production tool the world relies on.
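The design is easy to caricature in Python. This is a toy model, not Git's real object format: commits are addressed by a hash of their contents, a branch is nothing more than a named pointer, and cloning is copying the whole store:

import hashlib

def commit(objects, parent, message, content):
    # A commit's ID is a hash of everything it contains, parent included.
    data = f"parent={parent} msg={message} content={content}".encode()
    commit_id = hashlib.sha1(data).hexdigest()
    objects[commit_id] = {"parent": parent, "message": message, "content": content}
    return commit_id

objects = {}                                  # the object store: the full history
c1 = commit(objects, None, "first draft", "hello")
c2 = commit(objects, c1, "fix typo", "hello, world")

branches = {"main": c2, "experiment": c2}     # branching: two names, one pointer
clone = dict(objects)                         # cloning: copy everything, work offline

# Any copy can walk its own complete history, no server required:
node = branches["main"]
while node is not None:
    print(node[:8], objects[node]["message"])
    node = objects[node]["parent"]

Because the IDs are content hashes, every copy can verify its own history independently; that is what makes “no central server” safe.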
In April 2008, GitHub launched, wrapping Git in a web interface with social features: pull requests, issue tracking, code review, and developer profiles. In 2018, Microsoft acquired GitHub for $7.5 billion; by then it hosted 28 million developers and 85 million repositories.
Git and GitHub solved the hardest problem in collaborative creation: how to let thousands of people edit the same thing at the same time without chaos. They created the world's largest library of code — open, searchable, forkable. That library would soon become something no one anticipated: a training dataset for artificial intelligence.
Think About It: GitHub turned code contribution into a social activity, complete with profiles and contribution histories. How does making collaboration visible and public change the way people work?
Billions of pages of text. Hundreds of millions of code repositories. The largest collection of human thought ever assembled. What if a machine could learn from it all?
The Ocean of Data
But “connected world” was an aspiration, not a reality for everyone. In 2000, fewer than 7% of the world’s population had internet access. The digital divide — shaped by wealth, geography, and infrastructure — meant that the benefits of openness were unevenly shared. They still are.
The connected world was built by people who chose sharing over hoarding, openness over control. In doing so, they unknowingly created the training data for artificial intelligence. The web, open source, Wikipedia, and GitHub did not just connect humanity. They created the raw material for machines that learn.
Think About It: Every piece of text on the web, every line of code on GitHub, every Wikipedia article was created by a human being. When an AI trains on this data, is it “learning” in the same way a student learns from a textbook? Who “owns” the knowledge that billions of people contributed freely?
Next: for sixty years, humans wrote rules for machines. Then three stubborn researchers proved that machines could discover their own rules — if you gave them enough examples and enough compute. Issue 6: “Machines That Learn”