Highlighting some iPhone app features

Hopefully if you’ve got an iPhone you’ve already had a chance to check out the new nature.com app, available now from an App Store near you.

Some of the app’s features aren’t immediately obvious, which is something we’re aiming to fix in later releases (we’re also working on a video walkthrough).

In the meantime here are a couple of things I think are pretty cool:

Saving for later and syncing to the desktop

  • Tap on an article, then on the “actions” button in the top right of the screen.
  • Choose “save for later”.
  • The article has now been saved to your phone and you can read it offline. Just tap the “Saved” tab to see a list of your saved articles.
  • It has also been synced to your account on the nature.com/mobileapps website. Go there and log in if you haven’t already, then click on the “Saved” tab just underneath the header.
  • On the website you can export citations for the articles you’ve saved. This uses Connotea, so it can sometimes be slow (we’re working on this too).
  • If you log in with the same account on a different iPhone (or Android device), your saved articles will follow you.

PubMed searches

  • In the iPhone app, tap on the Search tab.
  • Enter a search term in the search bar at the top of the screen.
  • Before hitting the Search button on the keyboard, press the “pubmed” button that has appeared underneath the search bar.
  • Hit Search – you should see results from PubMed that match your query.
  • Tap on a result to read the abstract, and to visit the full-text version via the link at the bottom of the page.
  • On the search results page, use the “save” button in the top right to save the search.
  • Now, whenever new abstracts matching your saved search are added to PubMed, they’ll appear in the Recent Articles view mixed in with the content from Nature.
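
The app’s own backend isn’t public, but NCBI’s E-utilities API exposes the same capability, so a saved search can be pictured as a periodic query restricted to recently added records. A minimal sketch – the endpoint and its parameters (`db`, `term`, `reldate`, `datetype`) are real E-utilities features, but the function itself and the one-week polling window are my own invention, not how the app actually works:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def saved_search_url(term, days_back=7, retmax=20):
    """Build an E-utilities query that re-runs a saved search,
    restricted to records added to PubMed in the last `days_back` days."""
    params = {
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": retmax,
        "reldate": days_back,   # only records from the last N days...
        "datetype": "edat",     # ...judged by Entrez date (when indexed)
    }
    return EUTILS + "?" + urlencode(params)
```

Fetching that URL on a schedule and diffing the returned PMIDs against what you’ve already shown is essentially all a “saved search” feature needs.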

We’re improving both of these things in the next version of the app – right now the PubMed search doesn’t work well for articles without a DOI, and the offline experience leaves something to be desired. 🙂 As before, if you have any bug reports or feature requests, just let us know.

New Nature.com iPhone app

redux version of this post: we have a new, free iPhone app. Check it out. 🙂

NPG has a couple of big mobile-related announcements going out today.

The first is that we’ll be supporting EPUB, the open standard for ebook readers. DRM-free EPUB versions of our articles will be available alongside HTML and PDF, making it easy for readers to read content on a device of their choosing. Here’s an example (you may need to right-click and ‘Save as…’). We’ll be rolling out EPUB links on newer articles in Nature journal first; look out for them over the next couple of months.
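
For the curious, an EPUB file is just a ZIP archive with a fixed internal layout. This sketch is my own code, not NPG’s production pipeline, and it omits the NCX table of contents that a strict EPUB 2 validator would want, but it shows the essential structure:

```python
import zipfile

def write_minimal_epub(path, title, xhtml_body):
    """Assemble a minimal single-chapter EPUB: a ZIP whose first entry
    is an uncompressed 'mimetype', plus a container pointer, a package
    document (OPF) and one XHTML content file."""
    container = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""
    opf = f"""<?xml version="1.0"?>
<package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="id">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>{title}</dc:title>
    <dc:identifier id="id">example-id</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="c1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine><itemref idref="c1"/></spine>
</package>"""
    with zipfile.ZipFile(path, "w") as z:
        # The spec requires the mimetype entry first, stored uncompressed.
        z.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container, zipfile.ZIP_DEFLATED)
        z.writestr("content.opf", opf, zipfile.ZIP_DEFLATED)
        z.writestr("chapter1.xhtml", xhtml_body, zipfile.ZIP_DEFLATED)
```

Because the container is so simple, generating one on the fly from existing XML content (as described below for the iPhone app) is mostly a matter of transforming article XML into XHTML and wrapping it like this.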

The second is that we’re releasing free mobile applications for the iPhone (the initial version of which is available right now from the App Store) and Android (coming later in the year). I’ve got a couple of posts about specific features in the iPhone app lined up for later this week.

The two announcements are related: when you read one of our articles in the iPhone app you’re actually looking at the EPUB version which is generated on the fly from our XML content store. If you want to geek out a bit further you may find this talk by NPG’s CTO Howard Ratner interesting.

I’m proud of what we’ve accomplished with EPUBs and the mobile app; we designed and built everything in-house, with the mobile apps coming out of our NY office and the EPUB support masterminded by devs in London. As publishers, we’ve been surprised at how well scientific articles work on smartphones given the right context… hopefully you’ll find the app useful too.

On that note please do download the app and try it out if you’ve got an iPhone or iPod Touch (or an iPad). This is just the first version of the app and we know there’s already room for improvement (more content, customization and improvements to the way some types of article are rendered, for a start).

We’re actively looking for feedback, so send bugs, suggestions and feature requests to mobile@nature.com and I can guarantee we’ll see what we can do! 😉 In the meantime, over the next few months, while we catch bugs and work out what users want from mobile apps, access to Nature journal and Nature News through the app will be completely free.

Sensemaking in Multi-Fusion Environments

Jeff Jonas visited us again at the start of November and gave a talk about some of the new work he is doing. Jeff is our first return speaker, and this time he gave us an update on his thinking about sensemaking systems and how it is affecting his ongoing work on developing a new technology.

Jeff mainly works on building sensemaking systems that can reconcile large amounts of data in real time. In brief, a sensemaking system is one that, in contrast to a data warehousing solution, does something active with each piece of data as it is acquired, rather than only storing the data for later re-use. Identity disambiguation is a problem this class of systems has been applied to in the past; the new technique, however, will be more generally applicable. One of the difficulties with the sensemaking problem is that any individual piece of data, on its own, is hard to evaluate in terms of relevance. Each piece of data somehow needs to be contextualised first. Jeff illustrated the underlying mechanics of such a system with an analogy to jigsaw puzzle solving.

When solving a jigsaw puzzle, we make an assertion about each new piece we pick up: either it fits perfectly somewhere in the evolving solution, or it belongs with a similar set of pieces in a way we don’t yet know, or we have no idea where it goes and it is set aside for now.

When asserting that a new piece connects to an existing one, we always favor the false negative: we never put pieces together unless we are really sure they belong together.

When we place a new connecting piece, we reconsider whether, in light of it, pieces we have already placed now have a better home.

Sometimes a new piece reverses an earlier assertion – e.g., determining where a piece belongs reveals a connected piece that, upon closer inspection, really did not belong. In this case the misplaced piece is removed.

The working space needed during the process of solving the puzzle is much larger than the final solution space.

From this description it sounds as though Jeff’s system accumulates attributes of the things we are interested in into folders, and then makes connections between those folders as new pieces of information come into the system.

One of the important characteristics of a usable sensemaking system is that it needs to be able to change its decision state as new information comes in. This ensures the system does not drift from the truth as newly arriving data invalidates earlier assertions.
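
To make the folder analogy concrete, here is a toy sketch – entirely my own construction, not Jeff’s actual technology. Each observation is a bag of attribute values; shared values trigger a merge, and a single new “connecting piece” can glue previously separate entities together. Reversing an earlier assertion, as described above, would additionally require remembering each merge so it can be undone:

```python
class Sensemaker:
    """Toy incremental entity resolver: each observation is a set of
    attribute values; observations sharing any value are asserted to
    describe the same entity, and earlier groupings are revisited as
    each new piece of data arrives."""

    def __init__(self):
        self.entities = []  # each entity is a set of attribute values

    def observe(self, attrs):
        attrs = set(attrs)
        # Find every existing entity this observation connects to.
        matches = [e for e in self.entities if e & attrs]
        merged = set(attrs)
        for e in matches:
            merged |= e
            self.entities.remove(e)
        # One new piece can glue previously separate entities together.
        self.entities.append(merged)
        return merged
```

For example, `{"jeff", "555-0101"}` and `{"j.jonas", "NY"}` start out as two separate entities; a later observation `{"555-0101", "NY"}` is the connecting piece that collapses them into one.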

Systems like this end up expressing bias based on the observations they have received. So in theory an organization could ingest slightly different data sets into different instances of the program in parallel, poll them for their views, and see where the instances dissent.

A key issue in sensemaking is the ability of the system to count discrete objects. If you can’t count the discrete entities you are interested in, then you can’t expect to produce high-quality predictions. This is the key principle behind the new technology that Jeff is developing – in essence, the new work is a general counting engine. With this in mind he is currently looking for hard science problems that such an engine could be applied to, and this was one of the reasons for his visit to our offices, so if anyone reading this has ideas, please post them and we’ll pass them along.

One good way to disambiguate things is to track their spacetime and life arcs. The same thing cannot be in two places at the same time (at least not if it is large enough not to be concerned with quantum mechanical effects), and the path something takes over space and time (its life arc) can itself be a discriminating signature of identity. Science produces very large data sets, and some of them are produced quickly. Jeff hopes to find problems that would benefit from the disambiguation techniques he is working on. Trying to imagine which types of scientific data would be good candidates for this kind of analysis raises some interesting questions. Most science is communicated through publication, which is a slow process and not very real time. That said, PubMed indexed a new paper about every 40 seconds in 2009, which is quasi-real time. Often it’s not individual members of a class of objects that interest us: it’s not a given Higgs boson, but rather the characteristics of all Higgs bosons. Then again, one of the most important jobs of detectors at particle accelerators is exactly this kind of event disambiguation of particle trails, using spacetime paths as the key discriminating factor.
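
The “can’t be in two places at once” constraint is easy to state in code. A minimal sketch (my own illustration; units and the speed bound are arbitrary): two sightings can belong to the same entity only if the travel speed they imply is physically plausible.

```python
from math import hypot

def could_be_same(sighting_a, sighting_b, max_speed):
    """Each sighting is (x, y, t). Two sightings can belong to the same
    entity only if covering the distance between them in the elapsed
    time would not require exceeding max_speed."""
    (xa, ya, ta), (xb, yb, tb) = sighting_a, sighting_b
    dt = abs(tb - ta)
    dist = hypot(xb - xa, yb - ya)
    if dt == 0:
        return dist == 0  # same instant: must also be the same place
    return dist / dt <= max_speed
```

Ruling candidates out this way prunes the identity-matching problem before any more expensive attribute comparison has to run, which is one reason spacetime is such a strong discriminator.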

I wondered whether, for scientifically interesting objects, one could do this disambiguation of paths by projecting into a more general, higher-dimensional parameter space. Jeff was very clear that, as far as he was concerned, spacetime and life arcs are the gold standard here, and I’d have to agree; still, I think the idea of using higher-dimensional parameter spaces has some merit.

As with his last visit, Jeff reserved his most thought-provoking idea till last. Quite recently he has been fascinated by the growing number of systems some companies use to track mobile phone trails (life arcs). Some 600 billion transactions containing geospatial data are generated daily in the US. Your travel patterns reveal where you spend your time and who you spend it with, and they are highly predictive. The data is de-identified and shared with third parties; in most cases, however, re-identifying an individual is trivial.

This data can also be seen in real time. It can give real-time analytics on the health of a store: how many people are visiting right now, what their average journey distance to the store is, and whether that number is going up or down. Jeff suggested a number of ways to raise consumer awareness of the power of this kind of information. For example, phone companies could provide the first name and first initial of the last name of the 10 individuals you spend the most time with outside work and home (notably: if there is a name on the list you do not recognize, they are probably following you). There has been some research into analysing these trails, but it’s clear we are just beginning to scratch the surface.

Google Wave Science Hack Day at Nature this Friday

I’m really happy to announce that we will be hosting a hack day on Friday for developing scientific applications in Google Wave. The event was thought up by Cameron Neylon; we at Nature were able to find a room and will provide interweb access and coffee. The JISC DevSci project and Google will be providing pizza. If you have a Google Wave account you can check out the wave discussing this event.

We will have a number of our onsite developers taking part, some external people coming in, and quite a few people joining in remotely via Wave. It’s going to be very interesting to see how a full day of collaboration through Wave works out.

The exact number of people coming in has not yet been finalised, but we do have some extra spaces, so if you are a developer with a Wave account in the London area, or a scientist with some great ideas for apps that could work well for scientists, then please feel free to drop me a line and we will see if we have space to fit you in. You can email me at i.mulvany@nature.com. You can, of course, also pop in via Wave and say hi.

The hashtag for the event will be #swlhd (science wave london hack day) if you are into that kind of thing.

From Web 2.0 to the Global Database

I’m on my way home having just attended the 2009 Microsoft eScience Workshop at Carnegie Mellon University in Pittsburgh, where Tony Hey and his team at Microsoft Research also launched a book called The Fourth Paradigm. It’s a collection of essays that provide relatively accessible accounts of the impact and potential of digital science, and has been published in memory of Jim Gray, a pioneer in this area.

I delivered a short talk summarising my essay, which was called “From Web 2.0 to the Global Database”. I’m reproducing the text below, together with some of the slides I used to illustrate my talk.

(Update 20/10/09: Added link to book website.)

Read more

Demo Web Clients for nature.com OpenSearch

[Update – 2009.10.05: This post (2. Clients) is one of three. See also: 1. Service, 3. Widgets.]

The previous post described the nature.com OpenSearch service. Prior to that I posted on our new desktop widgets, which use one of the XML interfaces – specifically the RSS feed.

Here we also wanted to show what can be done in the browser itself. We’ve created a small gallery of demo clients which all use the text-based JSON interface (or rather JSONP, to allow cross-site requests). You can find the demos here:

https://nurture.nature.com/opensearch/apps

These demo apps show how the JSON interface can be used to build very simple web clients for search. They make use of an early OpenSearch JavaScript library which has classes for OpenSearch and SRU responses. The demos show how to link back to the nature.com platform (using the DOI), how to locate metadata properties, how to use OpenSearch links for pagination, how to compare OpenSearch and SRU views, how to extract RDF triples, etc. They are simply intended to show how easy it is to access nature.com search remotely. We hope you find them fun to use.

nature.com OpenSearch

opensearch-interfaces.png

[Update – 2009.10.05: This post (1. Service) is one of three. See also: 2. Clients, 3. Widgets.]

Earlier this week we soft-launched a new service: nature.com OpenSearch. Simply put, nature.com OpenSearch provides a structured resource discovery facility for content hosted on nature.com. In effect this is a sister service to our regular nature.com search service, which allows a user to query nature.com and browse the result sets. By contrast, the new service allows applications to query nature.com and to fetch the results back in formats of their choosing. The diagram above compares the existing user-oriented nature.com search service at a) with the new application-oriented nature.com OpenSearch service at b). Applications from widgets to web pages (and beyond) are the immediate clients of the service. (A companion post here already discussed the new nature.com search widgets, which are one such application.)

In terms of interfacing with the service, machine-readable description documents are available for both OpenSearch and SRU (Search and Retrieval via URL) modes of access. These documents are referenced from autodiscovery links which are beginning to be added to all our nature.com web pages. Each web page thus not only links to our search; it also provides instructions on ‘how to search’.

Queries use either simple search terms or CQL, a high-level query language designed to be human-readable and writable, and to be intuitive while maintaining the expressiveness of more complex languages. Result sets can be returned in a variety of media types, both text (HTML and JSON) and XML (SRU, Atom, RSS). Media types are selectable using HTTP content negotiation or the specific parameter ‘httpAccept’.
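
As a sketch of what a client request might look like: the base URL below is a placeholder and the exact parameter set on nature.com may differ, but `operation`, `version`, `query`, `startRecord` and `maximumRecords` are standard SRU parameters, and `httpAccept` is the media-type selector just described.

```python
from urllib.parse import urlencode

def sru_url(base_url, query, media_type="application/json",
            start_record=1, maximum_records=10):
    """Build a request URL for an SRU/OpenSearch-style service."""
    params = {
        "operation": "searchRetrieve",
        "version": "1.2",
        "query": query,                    # simple terms or a CQL expression
        "startRecord": start_record,       # SRU pagination
        "maximumRecords": maximum_records,
        "httpAccept": media_type,          # select the response media type
    }
    return base_url + "?" + urlencode(params)
```

A CQL query such as `dc.title = "rna interference"` slots straight into the `query` parameter; swapping `httpAccept` between JSON, Atom and RSS is all it takes to feed different kinds of clients from the same search.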

And what does this all mean? Well, it really amounts to the ability to run off-platform search, i.e. I can now run my search over nature.com anywhere I choose to run it. For example, say I want to run it right here in this blog post – I can. Let’s rig up a simple interface. What we’ll do is run a full-text keyword search and just list out the raw properties of the first item returned, with no real attempt at styling. (The CQL checkbox just allows a CQL query to be input; otherwise the search terms are sent to the server as simple alternates.)

Read more

Desktop Widgets: nature.com search

[Update – 2009.10.05: This post (3. Widgets) is one of three. See also: 1. Service, 2. Clients.]

The newly launched nature.com OpenSearch web service (which I’ll discuss in a separate post) is an interface that provides distributed access to search on the nature.com platform. Specifically, the interface allows for structured queries from remote clients as well as for structured responses, and implements two compatible industry standards for search: OpenSearch and SRU (Search and Retrieval via URL).

As a practical demonstration of this distributed access, we have developed a nature.com search desktop widget: a small standalone app that runs on a user’s desktop and interacts with the nature.com OpenSearch server by sending a simple URL request and receiving a regular RSS feed in response. This URL request closely mirrors the request strings in the OpenSearch URL templates that are now being linked to from a growing number of our web pages.
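
As an illustration of how little a widget like this actually needs, here is a minimal sketch that parses such a feed. The sample feed and the DOI in it are made up for the example; the real feed’s fields may differ.

```python
import xml.etree.ElementTree as ET

# A made-up stand-in for the RSS 2.0 feed the widget receives.
SAMPLE_RSS = """<rss version="2.0"><channel>
<title>nature.com search: rna interference</title>
<item><title>Example article</title><link>https://dx.doi.org/10.1000/example</link></item>
</channel></rss>"""

def feed_items(rss_text):
    """Extract (title, link) pairs from an RSS 2.0 search feed -
    essentially all a minimal search widget needs to render results."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]
```

Everything else in the widget is presentation: fetch the URL, hand the body to a parser like this, and draw the list.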

Read more

Anyone interested in a Google Wave Scientific Hackfest in London?

Well, today is the rollout of Google Wave proper to a select 100,000 accounts. If you have an invitation, are up for a bit of hacking and have an interest in creating scientifically relevant applications, then Cameron Neylon wants you!

He is calling for interested people to sign up for a science Google Wave hack day, either in London or somewhere near Didcot. I think some people from NPG might pop along. If you are interested, go let Cameron know over on the Doodle poll.