Posted on behalf of George Wigmore.
Tomorrow marks the 20th anniversary of the first website going live at CERN. Born out of the dreams of libertarian hippies in 1970s California and the anti-authoritarian aspirations of internet pioneers such as Ted Nelson, who coined the term hypertext, the World Wide Web has changed phenomenally since its early incarnation at CERN, as has the way we send data.
The websites we have today, with their complicated scripts and banners, bear little resemblance to the first humble website, launched by Tim Berners-Lee and colleagues at CERN on 6 August 1991.
The web grew out of Berners-Lee’s original 1989 proposal for an information management system that would allow people at different sites to share their ideas and all aspects of a project. In 1990 the idea was accepted, and Berners-Lee’s vision of taking the idea of hypertext and turning it into a working markup language, HTML, became a reality.
Running on a NeXT computer at CERN, the first web page isn’t much to look at, simply listing information about the WWW project, from the names of collaborators, to information about hypertext, and how to build websites. Looking at it now, it pretty much resembles the half-finished site of a student learning HTML. But this only serves to show how far we’ve come.
While there are sadly no screenshots of the original page, a copy of a later version, from 1992, can be found on the World Wide Web Consortium’s site. The original domain (info.cern.ch) does still exist, serving as a historical reminder of the significance of this address in the web’s development.
The story of Tim Berners-Lee and the creation of the web is well-known. Yet as the sheer quantity of data produced increases, the way we treat and process it has had to change with the times. One place that has had to address this issue is conveniently the place where it all began: CERN.
With the launch of the Large Hadron Collider, CERN had a problem. Unless it could manage the huge swathes of data it was about to generate, the machine would be essentially useless. The quantities of data are phenomenal: by the end of 2012, ATLAS and the other detectors at the LHC aim to produce a total of 50 petabytes (50 x 10^15 bytes). The solution to distributing this incredible amount of data is the Worldwide LHC Computing Grid (read Nature’s feature on the grid here), an ingenious piece of open-source software without which the LHC would never have got off the ground. By splitting up the data and distributing the computing effort between 35 countries, the Grid makes it possible to sift through these vast amounts of data.
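The divide-and-distribute principle behind the Grid can be illustrated with a toy sketch. This is not the Grid’s actual middleware: the site names and the trivial “analysis” step below are invented purely for illustration.

```python
# Toy illustration of the Grid's divide-and-distribute idea.
# Site names and the "analysis" are invented stand-ins; the real
# Worldwide LHC Computing Grid uses dedicated middleware.

def distribute(events, sites):
    """Deal data out across sites round-robin."""
    shares = {site: [] for site in sites}
    for i, event in enumerate(events):
        shares[sites[i % len(sites)]].append(event)
    return shares

def analyse(events):
    """Stand-in for local processing at one site."""
    return sum(events)

sites = ["CERN", "RAL", "Fermilab"]   # hypothetical participants
events = list(range(12))              # stand-in for detector data
shares = distribute(events, sites)

# Each site works only on its own share; results are then combined.
total = sum(analyse(share) for share in shares.values())
print(total)  # same answer as processing everything in one place: 66
```

The point of the sketch is that no single site ever sees the whole dataset, yet the combined result is identical to processing everything centrally.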
In many ways, the scientific spirit and enterprise embedded in the web, along with its open-source, patent-free nature, are still alive and well. And with the Grid they have been taken further than we could ever have imagined, hopefully helping to solve some of the most fundamental questions in physics.
Image: The first web server / Wikimedia Commons