We’re hiring (May/June 2009 edition)

Interested in a senior position on the growing, hard-working, award-winning Nature.com team? If so, we have two vacancies that you should check out: Head of Online Communities and Assistant Publisher.

Enquiries and CVs to the contact address given in the ads, or to me via my Nature Network page.

Wolfram|Alpha has potential, but I can’t see scientists using it for a while yet

Wolfram|Alpha should have launched officially by the time you read this, though it has been live since Friday evening. The execution is slick. The different result visualizations are a great idea. It’s loaded up with cool widgets and APIs. Most of the time the servers don’t fall over (despite some glaring security holes). To quote FriendFeeder Iddo Friedberg, it’s “a free, somewhat simple interface to Mathematica”. Free for personal, non-commercial use, anyway. If you’ve got any questions about the GDP of Singapore then wolframalpha.com is the place to go.

I think that it’s a very interesting project, and it’s important to bear in mind that, as the homepage says:

Today’s Wolfram|Alpha is the first step in an ambitious, long-term project to make all systematic knowledge immediately computable by anyone

(emphasis mine)

WA certainly has lots of potential, but was anybody who used it over the weekend not left mildly let down? You’d have thought that we’d all have learned not to believe interweb hype after the Powerset and Cuil launches, but even if you took all the pre-launch media guff with a liberal sprinkling of salt it was hard not to expect a lot from Alpha. A breathless Andrew Johnson suggested in The Independent that it was “the biggest internet revolution for a generation”: “Wolfram Alpha has the potential to become one of the biggest names on the planet”.

Personally I was disappointed because I’d been expecting the wrong thing. I’d assumed that WA was akin to Cyc, which is a computational engine that takes a large manually curated database of “common sense” facts and relations and uses it to infer new knowledge. For example: searching photos for “someone at risk for skin cancer” might return a photo captioned “girl reclining on a beach”. Reclining at the beach implies suntanning and suntanning implies a risk of skin cancer.

A few years back a Paul Allen venture called Project Halo took the engine behind Cyc and taught it facts and rules from chemistry textbooks; it took a lot of time and money, but the resulting system had a good go at answering college-level chemistry exam questions.

It turns out that WA doesn’t do anything like this. One of the most interesting posts about the system that I’ve read comes from Doug Lenat, who, perhaps not coincidentally, is the founder of Cyc. Lenat was impressed by WA but notes that it’s a different beast altogether:

It does not have an ontology, so what it knows about, say, GDP, or population, or stock price, is no more nor less than the equations that involve that term… [it’s] able to report the number of cattle in Chicago but not (even a lower bound on) the number of mammals because it doesn’t know taxonomy and reason that way

If a connection isn’t represented by a manually curated equation it isn’t represented at all. Apparently the Mathematica theorem prover is currently turned off as it’s too computationally expensive.

One example of this is: “How old was Obama when Mitterrand was elected president of France?” It can tell you demographic information about Obama, if you ask, and it can tell you information about Mitterrand (including his ruleStartDate), but doesn’t make or execute the plan to calculate a person’s age on a certain date given his birth date, which is what is being asked for in this query.
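Lenat’s example is worth dwelling on, because the missing computation really is trivial. Here’s a minimal sketch of the inference WA declines to make – note that the dates are ones I’m supplying myself (Obama’s birth date and the date of Mitterrand’s election), not anything pulled out of WA:

```python
from datetime import date

def age_on(birth: date, when: date) -> int:
    """Whole years elapsed between a birth date and a given date."""
    years = when.year - birth.year
    # Knock one off if the birthday hasn't happened yet that year
    if (when.month, when.day) < (birth.month, birth.day):
        years -= 1
    return years

obama_born = date(1961, 8, 4)
mitterrand_elected = date(1981, 5, 10)

print(age_on(obama_born, mitterrand_elected))  # → 19
```

The point isn’t that this is hard – it obviously isn’t – but that WA has both facts available and still can’t chain them together, because chaining isn’t something it does.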

It might seem harsh to criticize WA for not being what people (OK, I) wanted it to be but bear in mind that Wolfram’s About and FAQ pages suggest that WA is an amazing leap forward that brings “expert level knowledge” to everybody and “implements every known model, method, and algorithm” – it’s not like they were managing expectations particularly well.

Even if the computational inference part is lacking the system is still potentially useful as a well presented structured data almanac – but I’m not convinced that it’s a winner for life sciences data.

Wolfram|Alpha for genetics questions

If I search for “DISC1” I get back information about the human gene (genetics coverage in WA is thin: only the human genome is available, despite Stephen Wolfram using a sequence search in the video demo). It tells me the transcripts, the reference sequence, the coordinates of DISC1, protein functions and a list of nearby genes.

That data is useless without proper citations, though. What genome assembly release are the gene coordinates on? Are the “nearby genes” nearby on the same assembly, or do they come from a different source? Who and what predicted the transcripts, and what data did they use? Were the protein functions confirmed by work in the lab or just predicted by algorithm (if so, what’s the confidence score)?

The “sources” link at the bottom provides a bunch of high-level papers describing different genome databases but doesn’t specifically match these to elements of data on the page. Furthermore, there’s a disclaimer suggesting that the data could actually be from somewhere else entirely that isn’t listed. Not much help.

What happens with contradictory data? The GDP of North Korea varies depending on who I ask. How does WA – or rather whoever curates that data for WA – decide which version of the answer to show?

I’m also worried about how current the data is. Lenat mentions that:

In a small number of cases, he also connects via API to third party information, but mostly for realtime data such as a current stock price or current temperature. Rather than connecting to and relying on the current or future Semantic Web, Alpha computes its answers primarily from [Wolfram’s] own curated data to the extent possible; [Stephen Wolfram] sees Alpha as the home for almost all the information it needs, and will use to answer users’ queries.

I can see why you wouldn’t want to rely on connections to third-party data sources for anything that looks like a search engine; users expect a quick response. But in fast-moving scientific fields the systematic knowledge that’s useful to researchers isn’t static like dates of birth or melting points – datapoints get updated, corrected and deleted all the time. Does Wolfram bulk import whole datasets regularly? If I correct an error in a record at the NCBI, when will Wolfram pick it up?

Can a monolithic, generalized datastore run by Wolfram staff work as well as smaller specialized databases run by experts? What’s the incentive for the specialized databases to release data to Wolfram in the first place, given that WA will be a commercial product?

(For more science-tinged coverage there’s lots of Wolfram|Alpha chatter on FriendFeed, including a new room dedicated to collecting life sciences feedback for Wolfram, and Deepak has a good blog post out.)

Public Interfaces

Last week we added a new section under our Librarian Gateway – Public Interfaces.

This is the beginning of a general documentation facility to cover all the various interfaces we are using on the nature.com platform for discovery and linking.

The aim is to consolidate technical documentation on the sometimes bewildering array of acronyms and to provide additional references to sources of information for users. Although this is listed under the Librarian Gateway, the technologies described here will be of interest not only to digital librarians but to other communities as well.

We have kicked this section off with some well-established routes into nature.com: DOI and OpenURL. We have also added some of the newer means for disclosing metadata through self-describing content: META tags for HTML, and XMP for PDF. And we’ve also provided details on our new OAI-PMH service, as well as saying something about our long-standing stable of RSS feeds.

Further suggestions on how to improve these pages regarding what should be added or other changes we could make would be more than welcome.

Google, Obama and God: good. H1N1, Elsevier and Merck: bad.

(see the previous post for background on these tables)

Entities associated with negative emotions

Term Sum score Blogs mentioning entity
H1N1 -7.50 15
Elsevier -4.52 5
Merck -3.79 8
CDC -2.34 7
Dana -2.00 2
Japan -2.00 2
McCaffery -2.00 2
WSJ -2.00 2
Sci -1.98 3
Jacqui Smith -1.54 2
James Corbett -1.51 2
Israel -1.50 2
iPhone -1.33 2
Alzheimer -1.19 2
HIV -1.12 2

H1N1 is the subtype of the “swine flu” influenza virus.

Elsevier and Merck published fake journals in Australia. “Dana” is Dana McCaffery, who tragically died of whooping cough because of low vaccination rates in New South Wales.

“WSJ” is the Wall Street Journal, which published a flawed explanation of quantum entanglement that got picked up by physics bloggers. Jacqui Smith is the UK’s Home Secretary, currently under fire for (amongst other things) keeping the DNA of innocent people on file for six years after their arrest.

James Corbett is the teacher in California who told his students that creationism was “superstitious nonsense”; he was later sued by a student who believed that their First Amendment rights had been violated.

The iPhone is in there because of changes to App Store policies that may impact smaller developers.

Positive emotions after the break.


Sentiment analysis on science blogs

We’ve been thinking about new features for Nature.com Blogs recently, after spending a lot of time on the back end doing boring yet vital things like enabling trackbacks for journal articles on nature.com.

One particularly cool new feature (potentially) is sentiment analysis. Nature.com Blogs already performs entity extraction, pulling out all of the names, places and things mentioned in each blog post. We use this to cluster posts about the same topic together in the “stories” section.

Sentiment analysis tries to give emotional context to entities. For example, if I blog:

“I love Biology. It rules, Physics drools”

and Nature.com Blogs processes my post, then it might store the following metadata alongside it:


<entity name="Biology" emotion="Positive" score="0.6" />

<entity name="Physics" emotion="Negative" score="0.3" />


… here “Biology” and “Physics” are the entities; each has an emotion associated with it in the text. The positive emotion associated with “Biology” (0.6) is stronger than the negative emotion associated with “Physics” (0.3) – that’s the score part.

Sentiment analysis is still a young field and frankly it gets things wrong a lot of the time. It’s also difficult to find a system that can do both entity extraction and sentiment analysis properly – to build a proof of concept I had to use a combination of Yahoo! Term Extraction and OpenAmplify.
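For the curious, once you have per-post entity/emotion/score triples the aggregation step that produces the league tables is straightforward. Here’s a toy sketch – the input data and scoring are stand-ins of my own, not output from the actual Yahoo! Term Extraction or OpenAmplify APIs:

```python
from collections import defaultdict

# Hypothetical per-post output from the extraction + sentiment step:
# one list of (entity, emotion, score) triples per blog post.
posts = [
    [("Biology", "Positive", 0.6), ("Physics", "Negative", 0.3)],
    [("Physics", "Negative", 0.8), ("Google", "Positive", 0.4)],
]

def rank_entities(posts):
    """Sum signed sentiment scores per entity and count mentioning posts."""
    totals = defaultdict(float)
    mentions = defaultdict(int)
    for post in posts:
        for entity, emotion, score in post:
            sign = 1 if emotion == "Positive" else -1
            totals[entity] += sign * score
        for entity in {e for e, _, _ in post}:
            mentions[entity] += 1
    # Most negative first, as in the "negative emotions" table
    return sorted((totals[e], mentions[e], e) for e in totals)

for total, count, entity in rank_entities(posts):
    print(f"{entity}\t{total:+.2f}\t{count}")
```

Running a couple of thousand real posts through the real APIs is obviously messier than this, but the “sum score” and “blogs mentioning entity” columns fall out of exactly this kind of fold.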

Having said that, I think results over large datasets are promising. I’ve run a couple of thousand posts through the proof of concept system and compiled lists of the entities most strongly associated with positive and negative emotions in science blogs this week (published in the next couple of posts). Is this information useful? Interesting? Fun? Misleading? Any suggestions for how it might be presented are welcome!

A Catalog for Nature.com

We’re pleased to announce that Nature.com now has an OAI-PMH interface. This service implements the Open Archives Initiative Protocol for Metadata Harvesting. This means that the Nature.com platform can now be queried by item, by title or by date range, and that structured data records will be returned. All articles from over 150 titles can be accessed, dating back to 1869 for Nature itself.


Queries are made with a simple request URL (via HTTP GET, although POST is also supported) according to the OAI-PMH protocol. Result sets are returned as XML, using formats defined by W3C XML Schema: either the OAI-PMH base format for metadata exchange, Dublin Core, or an enhanced bibliographic metadata format, the PRISM Aggregator Message (PAM).
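To give a feel for what a harvest looks like, here’s a sketch that parses the Dublin Core titles out of an OAI-PMH ListRecords response. The XML below is a trimmed, hypothetical sample of my own (a real harvest would GET something like endpoint?verb=ListRecords&metadataPrefix=oai_dc; see the links below for the actual service endpoint):

```python
import xml.etree.ElementTree as ET

# Namespaces defined by the OAI-PMH and Dublin Core specs
NS = {"dc": "http://purl.org/dc/elements/1.1/"}

# Trimmed, made-up sample of a ListRecords response
sample = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>On the Origin of a Hypothetical Article</dc:title>
          <dc:date>2009-05-18</dc:date>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def titles(xml_text):
    """Pull every dc:title out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.findall(".//dc:title", NS)]

print(titles(sample))  # → ['On the Origin of a Hypothetical Article']
```

Because result paging, resumption tokens and the other OAI-PMH verbs (Identify, ListSets, GetRecord and so on) are all defined by the protocol itself, any generic harvester should work against the service unchanged.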

In PAM format the results are very similar to the standard article descriptions published in our RSS feeds, or embedded directly within content entities (either as META tags in HTML, or as XMP packets in PDF). Further details on our use of PAM are given in a related post on CrossTech.

The Nature.com OAI-PMH service has two access points:

User interface:

Service endpoint:

Special credits go to Jeff Young of OCLC for creating the excellent open-source OAICat software package, and to Nawab Siddiqui of Nature Publishing Group for doing all the heavy lifting in implementing this service for Nature.com.

Nature Darwin Debate 2: What Price Biodiversity?

Quick plug for a very interesting Second Life event coming up on Monday: the second in the Nature Darwin Debate series.

Panelists James Lovelock, Michael Meacher MP and Sir Crispin Tickell will join chair Ehsan Masood for a live debate entitled What Price Biodiversity? As before, the debate will be held at Kings Place, London (just behind the Nature building) and will be live streamed in Second Life for anyone who can’t attend in person.

The last debate was excellent, both in RL and SL, and this one looks like it will be just as good. It’s on Monday at 7pm GMT / 12pm Pacific (a shorter time difference than normal – our clocks go forward this weekend; Europe’s not until the end of the month) and all are very welcome! For all the details, see our Second Life blog.

Scientists, Unconferences and Culture Clash

Someone I know recently emailed me with the following question:

I’m co-organizing a biomedical/healthcare-themed unconference to be held later this year, and culture clash has come up as an issue. Were you involved in any of the SciFoo events? Can you offer any advice for how to approach this? Any hard lessons learned?


Sadly I’ve not been to a SciFoo event yet, but I have been to plenty of scientific conferences and one or two geek-driven unconferences. From what I hear there are indeed some differences that emerge when unconferencing with scientists compared to unconferencing with geeks. For a start, an important part of a scientist’s career development revolves around making well-argued presentations of their work to their peers in the crucible of the conference. Add in the lecturing role and you have an individual who is very used to standing up in a room and presenting the complete story.

One of the goals of an unconference is to tease apart the complete and finished story, to look at the spaces in between, and to be open to blue-sky thinking. This may lead to a mismatch between the kind of conversations the organizers hope will happen at an unconference and the mode of communication that a scientific group brings with them to the meeting.

I know that the SciFoo invite is very specific about this, and that through application of the Chatham House Rule an environment of open discussion is fostered.

I’m sure many of the people reading this blog have some input on the question, so I thought I’d post it here and see whether any of you enlightened science geeks have advice for my friend.