Exactly a week ago I was on my way to O’Reilly’s Foo Camp in Sebastopol, so it’s about time I posted some notes about a few of the amazing people I met during and since that event:
- Beth Noveck told me that Peer to Patent, a wonderful project that she and others have been working on for many months, is going really well in its early days post-launch. The aim is to open up patent applications to public scrutiny, which sounds like an excellent idea to me. Brady has a write-up about it on Radar.
- Manolis Kelaidis, fresh from his triumph at the TOC conference, demoed bLink. I first met Manolis last September, when he came to present his invention at Macmillan (Nature’s parent company). He’s a very creative and thoughtful (but self-effacing) guy, and I’m delighted that bLink is getting the attention it deserves.
- Chris DiBona, who among many other things runs the team at Google that puts together SciFoo, was proudly showing off his iLiad, especially the Linux shell function. (Hey, this was Foo Camp.)
- Some of the science-oriented attendees — Drew Endy and Saul Griffith spring to mind — were kind enough to rave about Nature Precedings. Even at Camp, Drew was busy working away on his latest amazing project, the BioBricks Foundation. He also had a test tube of E. coli that smelled of bananas. (Hey, he is a synthetic biologist.)
- Speaking of science, I also met Marti Hearst from UC Berkeley, who’s working on BioText, figure-based searching for the scientific literature — great!
- There were plenty of other amazing people, of course, not least the O’Reilly crowd themselves, especially Tim O’R and Sara Winge, the people who make Foo Camp happen. (And SciFoo too.)
I also bumped into Ray Ozzie and told him about the great discussions we’ve been having with Tony Hey’s group at Microsoft (known as ‘Technical Computing’, they run MS’s collaborations and interactions with scientists and engineers). It was an extra delight, then, to meet Savas Parastatidis in London on my return. I can’t remember the last time I came across someone outside Nature with whom I shared so many interests and opinions. We’ll definitely be doing some cool stuff together, and we’ll keep you posted.
Last and least, I have an article in this month’s STM News (a periodical for scientific, technical and medical publishers). The full publication is members only, so my draft is reproduced below for anyone who’s interested. Appropriately enough, it’s basically a summary of the ways in which the O’Reilly alpha-geek crowd has influenced our activities at Nature.
The Web Opportunity
The web is the most disruptive influence on publishing since the invention of moveable type in 1450. It is often viewed by publishers as a profound threat (think of Amazon, Google, Wikipedia and craigslist). But as a techno-optimist, I see threats merely as poorly understood or unexploited opportunities. Indeed, I couldn’t imagine working in publishing if it weren’t undergoing a revolution. These are not the worst of times but the best of times: how dull the last 550 years have been by comparison.
Publishers exist to enable the flow of information between people. For those of us in STM, this communication expands the frontiers of human knowledge and gives rise to new technologies like the web itself. It therefore enriches all our lives and is of fundamental importance. And now we have at our disposal the most powerful information dissemination tool in human history. If that doesn’t make you feel excited and empowered then nothing will. This article will focus on three areas that I believe hold great promise for online scientific communication: audio-video content, databases, and social software.
The iPod has become the technological icon of the first decade of the 21st Century. This fact has certainly helped to propel podcasting into the cultural mainstream with unusual speed. The term “podcast” wasn’t even coined until 2004, but within a couple of years it had become part of everyday life for millions of people. Not only did it provide the convenience of time-shifted audio delivered direct to a mobile player, and an almost infinite variety of content to choose from, but with those white cords dangling from their ears even people like me can kid themselves that they look cool.
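Under the hood, a podcast is simply an RSS feed whose items carry audio enclosures; a player polls the feed and downloads new episodes automatically. As a rough illustration (the feed title, URLs and episode details here are invented, not any real show), a minimal feed can be assembled with nothing more than Python’s standard library:

```python
import xml.etree.ElementTree as ET

def build_feed(title, link, episodes):
    """Build a minimal podcast RSS 2.0 feed as a string.

    Each episode is a dict with 'title', 'url' and 'length' (file size in bytes).
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for ep in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep["title"]
        # The <enclosure> element is what turns an ordinary RSS item
        # into a downloadable podcast episode.
        ET.SubElement(item, "enclosure", url=ep["url"],
                      length=str(ep["length"]), type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("Example Science Podcast", "https://example.org/podcast",
                  [{"title": "Episode 1",
                    "url": "https://example.org/ep1.mp3",
                    "length": 12345678}])
print(feed)
```

Aggregators and players need nothing more than this: the `enclosure` URL tells them what to fetch, and the feed’s publication history tells them what is new.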
Yet that’s only part of the story. The real reason for the huge expansion in web-based audio is the dramatically falling cost of the hardware and software required to create professional-sounding audio. Only a decade ago this required investments on the order of tens of thousands of dollars. Now, with a low-end desktop computer, a twenty-dollar microphone and a piece of software costing anywhere between nothing and $350, anyone with a bit of skill and patience can compete with the best in the world. That means dorm-room students and basement bloggers, but it also means established publishing companies with long histories of putting out nothing but written content. Welcome to the era in which you too can be a broadcaster.
Why would you want to? Because audio complements the written word in many compelling ways. Take the Nature Podcast as an example. We began this as a three-month trial in October 2005. By the end of that year it was receiving around 30,000 downloads a week and had secured a major sponsor that enabled us to cover the production costs. It also proved valuable to our listeners and to us. Feedback indicated that researchers liked hearing the author interviews because they give an insight into reports from outside their fields that they would never normally read in the journal. It also allows them to connect with these scientists as people, unfiltered by the formal, passive style of research papers. (Needless to say, authors also love being given a platform to talk about their work in front of tens of thousands of fellow researchers.) More prosaically, the show enables researchers to make more productive use of their time. “Please make your shows longer,” pleaded one listener, “I have a lot of microscope time.” And this hints at one of the benefits to us as the producers: it allows us to connect with scientists and clinicians during times when they could not possibly read one of our journals or browse our websites – for example, when conducting experiments, commuting or exercising. Of course, as well as serving our main constituency of established professional scientists, podcasts also enable us to reach a broader, younger audience than we traditionally encounter. This, in turn, helps to strengthen and rejuvenate our brand.
Video shares similarities with audio, but also has some important differences. There, too, falling hardware and software costs have put the tools of professional-quality production into the hands of almost anyone who cares to use them. But video is not just audio with pictures – it is consumed in very different ways (the main reason that television did not kill radio). In particular, video is not easily consumed while conducting an experiment or driving.
Most obviously, it is useful in conveying certain types of scientific concepts – from displays of animal behaviour to animations of cellular processes (see this amazing example). But perhaps its biggest practical effect in science will be the sharing of experimental protocols not only by detailed step-by-step instructions but also by watching someone else carry them out. Just as cookery benefits from having recipe books and TV chefs, so scientific methods will be more accurately replicated and more quickly refined when written details are supplemented by video. The Journal of Visualized Experiments (JoVE) is an interesting early example of this approach.
The world of online scientific information can be conveniently divided into two largely separate but complementary realms, and only one of them has much to do with publishers. On one hand we have the journals — traditional repositories of received scientific wisdom and now almost all available online at the click of a mouse. On the other we have databases, which in some disciplines have become more important in the everyday information needs of scientists than any traditional publication ever was.
Particularly in biology, ‘cottage industry’ science is giving way to an ‘industrial’ model with greater specialization and global inter-lab collaboration. Whereas before everything — from data collection through analysis to manuscript writing — was done by a small group of people in one lab, now we see different research groups specializing in each stage of the process. Witness, for example, the many genome sequencing initiatives, which are really enormous data acquisition operations, with most of the analysis left to others.
Databases are the conduits that enable such collaborations, and though they often seem not to realise it, publishers have important roles to play. That’s why we at Nature have teamed up with a variety of groups — from NIH-funded initiatives like the Cell Migration Consortium and the Consortium for Functional Glycomics, to major publicly or privately funded organisations like the National Cancer Institute and the Allen Institute for Brain Science — to create a series of joint database-oriented community resources. None of these could have been created by any one organisation on its own. Nature’s contribution varies by project, but typically involves some combination of editorial content (to provide primers and updates on the field), curation and peer-review of the database contents, and promotion of the service to those who might find it useful in their research. These are actually rather traditional publishing roles, albeit applied in a new context.
There is another more subtle reason for journal publishers to be interested in databases: the dividing line between the two realms is getting ever fuzzier, and may eventually disappear altogether. As journals have moved online, they have taken on some of the characteristics of databases (searchable, structured, constantly updated). Meanwhile, some databases are starting to mimic certain aspects of journals (peer-reviewed, archival, citable). This has led to the appearance of ‘hybrid’ publications that are both databases and journals depending on how you look at them. For example, the Molecule Pages, a collaboration between Nature and the University of California at San Diego, is a review journal covering several thousand proteins involved in intracellular signalling. But the information is held in a relational database, making it easy to query the data and represent it in numerous different ways; while being archival and citable, it is also continually updated. I predict that we will see more and more such examples in future, because on the web we really can have the best of both worlds.
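The dual journal/database character described above can be sketched in a few lines. This is a toy example: the table and field names are invented for illustration and bear no relation to the actual Molecule Pages schema.

```python
import sqlite3

# Hypothetical schema, loosely inspired by a database-backed review journal.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE protein_pages (
    symbol TEXT PRIMARY KEY,
    pathway TEXT,
    summary TEXT,
    last_updated TEXT)""")
conn.executemany(
    "INSERT INTO protein_pages VALUES (?, ?, ?, ?)",
    [("GNAI1", "G-protein signalling",
      "Inhibits adenylyl cyclase downstream of GPCRs.", "2007-06-01"),
     ("MAPK1", "MAP kinase cascade",
      "Phosphorylates nuclear transcription factors.", "2007-05-20")])

# 'Journal' view: one citable, article-like record for a single protein.
page = conn.execute(
    "SELECT symbol, summary FROM protein_pages WHERE symbol = ?",
    ("GNAI1",)).fetchone()

# 'Database' view: the same content sliced by an arbitrary query,
# e.g. everything revised since a given date.
recent = conn.execute(
    "SELECT symbol FROM protein_pages WHERE last_updated > '2007-05-25'"
).fetchall()
print(page, recent)
```

The same underlying records thus serve both roles: each row reads like a short review article, yet the collection as a whole can be queried, filtered and re-presented however the reader needs.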
The web – and the Internet that underlies it – is not a traditional distribution channel, it’s a many-to-many network. Most of the biggest online success stories — eBay, Wikipedia, craigslist — have harnessed this fact to create substantial communities and apparently unstoppable momentum. In short, they have used the network to create network effects of their own.
In some cases, even companies that on the face of it are ill-positioned to exploit this approach have seen it pay dividends. For example, Amazon started off as nothing more than a book catalogue and a piece of (very good) e-commerce software. But by adding user reviews, wishlists and other collaborative features — and using this information to serve their users better — they created what every provider of a website ought to be striving for: a service that gets better for everyone the more people use it. This idea is at the core of the “Web 2.0” concept articulated by Tim O’Reilly. Similarly, Google overtook and dominated their rivals in search by turning the process of ranking into a massively collaborative one: suddenly anyone creating a link on the open web became a contributor.
The scientific web is replete with such opportunities (see figure below). These include blogs (e.g., ScienceBlogs), wikis (e.g., OpenWetWare), voting systems (e.g., DissectMedicine), file sharing (e.g., arXiv and Nature Precedings), social bookmarking (e.g., Connotea), social networks (e.g., Nature Network) and markets (e.g., InnoCentive), among others. It also includes virtual worlds, most notably Second Life, which, although at an early stage in their evolution, hold the same disruptive potential that the web had in the mid-1990s.
The idea that everyone can now do their own publishing, making publishers superfluous, is misguided. But publishers do need to adapt. Online communities don’t just happen; they require initiators, motivators, organisers, moderators, summarisers and guides. They also need trust systems based on user identification and reputation. In many ways, these, too, are traditional publishing roles, but they require new skills. Writers and editors now need to double as moderators and hosts. Publishers need to become adept at mitigating gaming and spamming of their systems, and at monetizing web traffic rather than selling subscriptions. On top of that, they need to become better at cooperating — with each other and with other organisations outside the industry (even, horror of horrors, with competitors). This particularly applies to online interoperability, which is a positive-sum game that can benefit all participants. CrossRef has blazed a trail in this area, and we should build on its success.
Above all, publishers need to be leading the online charge, not following the scientists we serve. We are the information dissemination experts, so if we aren’t pushing the boundaries and testing what’s possible in this new world then we’re not merely missing out, we’re also not doing our jobs. Cynics will point out that most apparent ‘opportunities’ are a long way from turning a profit, and many probably never will. They’re right. Do any of the STM projects I’ve mentioned above make a lot of money? No. But are they representative of the future of scientific communication, and do they provide a platform on which to build information businesses of the future? You’d better believe it.