Nautilus

Nature’s special issue on ‘big data’

The Big Data special package of articles in this week’s issue of Nature (4 September 2008) looks at how massive influxes of data are changing the way science is done in many fields, and includes a feature story on ‘Wikiomics’ that might be of particular interest to scientists who work with “web 2.0” tools. Coping with floods of data is now one of science’s biggest challenges, so the Nature special issue assesses the need to complement smart science with smart searching; looks at what the next Google will be; interviews the pioneering biologists who are trying to use wiki-type web pages to manage and interpret data; and recalls that the first mass data crunchers were not computers, but the remarkable women of Harvard’s Observatory. All the articles, as well as downloadable PDFs of the print versions, are free online for two weeks from the publication date. We encourage you to download everything you are interested in, and then to spread the word to friends and colleagues about what you like (and don’t like!) via email, blog, comments on the Nature website, or other means. And of course, Nature always welcomes Correspondence submissions.

The contents of the Big Data ‘special’ in full:

Editorial: Community cleverness required

Researchers need to adapt their institutions and practices in response to torrents of new data — and need to complement smart science with smart searching.

Special Report: The next Google

Ten years ago this month, Google’s first employee turned up at the garage where the search engine was originally housed. What technology at a similar early stage today will have changed our world as much by 2018? Nature asked some researchers and business people to speculate, or lay out their wares. Their responses are wide-ranging, but one common theme emerges: the integration of the worlds of matter and information, whether by the blurring of boundaries between online and real environments, touchy-feely feedback from a phone, or chromosomes tucked away in databases.

Party of One column: Data wrangling

Collecting and releasing environmental data have stirred up controversy in Washington, says David Goldston, and will continue to do so.

Features: Welcome to the petacentre

What does it take to store bytes by the tens of thousands of trillions? Cory Doctorow meets the people and machines for which it’s all in a day’s work.

Features: Wikiomics

Pioneering biologists are trying to use wiki-type web pages to manage and interpret data, reports Mitch Waldrop. But will the wider research community go along with the experiment?

Commentary: How do your data grow?

Scientists need to ensure that their results will be managed for the long haul. Maintaining data takes big organization, says Clifford Lynch.

Books & Arts: Distilling meaning from data

Buried in vast streams of data are clues to new science. But we may need to craft new lenses to see them, explain Felice Frankel and Rosalind Reid.

Essay: The Harvard computers

The first mass data crunchers were people, not machines. Sue Nelson looks at the discoveries and legacy of the remarkable women of Harvard’s Observatory.

Review: The future of biocuration

To thrive, the field that links biologists and their data urgently needs structure, recognition and support. Doug Howe, Maria Costanzo, Petra Fey, Takashi Gojobori, Linda Hannick, Winston Hide, David P. Hill, Renate Kania, Mary Schaeffer, Susan St Pierre, Simon Twigger, Owen White & Seung Yon Rhee

Podcast Extra: Big Data

As Google celebrates its 10th anniversary, we find out how science is coping with massive datasets generated by unprecedented computing power. BoingBoing blogger Cory Doctorow tells us about his visits to the LHC data-storage facility and the genome-sequencing Sanger Centre.
