On Saturday the Science Online London 09 conference took place. The conference tag was #solo09. Martin Fenner has already gathered together some reactions to the conference. In the afternoon I had the pleasure of co-presenting on Google Wave with Cameron Neylon and Chris Thorpe. Cameron has already written up some reactions to our session.
We demoed the core Google Wave features, including a number of science-related robots, some of which we prepared for the meeting. Chris demoed his robot Grauniady. Cameron demoed Chemspidey, which queries ChemSpider and calculates molecular weights. I demonstrated Janey, a robot for querying the Journal Author Name Estimator service from the BioSemantics people.
On the Thursday before the conference we tried out a few of the robots that we thought might be interesting to look at. If you have sandbox access then you can have a look at our sciency robots wave. I've just made this wave public, and our original instance runs up to edit 18. For those of you who don't have Wave access, Cameron has posted a short video of this wave being created.
While looking at the robot that creates graphs I realized that I could generate a co-authorship network from the data being returned from JANE. On the day before the demo I added a call to JANE that would do just this, and that would print 12 of the links from the co-authorship network in a way that the graph robot would understand. The nice thing about this is that it begins to make two unrelated robots work somewhat together. Cameron has also captured a short video of Janey and Graphy working together.
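To give a flavour of what Janey is doing behind the scenes, here is a minimal sketch of extracting co-authorship links from article results. The data shapes and the cap of 12 edges are assumptions for illustration; this is not the actual robot code, and the real JANE response would first need to be parsed into author lists:

```python
from itertools import combinations

def coauthor_edges(articles, limit=12):
    """Turn a list of articles (each given as a list of author names,
    as JANE-style results might provide) into unique co-authorship
    pairs, returning at most `limit` edges."""
    edges = []
    seen = set()
    for authors in articles:
        # every pair of authors on the same paper is one co-authorship link
        for a, b in combinations(authors, 2):
            key = tuple(sorted((a, b)))
            if key not in seen:
                seen.add(key)
                edges.append(key)
            if len(edges) >= limit:
                return edges
    return edges

# Hypothetical results: three papers with overlapping author lists
papers = [
    ["Smith J", "Jones K", "Lee P"],
    ["Jones K", "Patel R"],
    ["Smith J", "Lee P"],  # repeats an existing pair, so adds nothing
]
for a, b in coauthor_edges(papers):
    print(f"{a} -- {b}")
```

Each printed `A -- B` line is already close to what a Graphviz-style graph robot could consume, which is what makes the two robots composable.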
At the demo, many of the people whom I would hope end up being end users for Wave were somewhat nonplussed. I think one of the issues is that what we were demonstrating seems so transparently easy. It would have been instructive to compare what Janey and Graphy were doing with the text in the document against how one might go about getting the same results under a more traditional workflow. I think that would work somewhat as follows:
- open document editor and create text
- open browser and go to the JANE web site
- cut and paste text into JANE submission form
- save returned xml to a new file
- parse xml with python to create .dot file
- run graphviz on .dot file to generate image file
- open image file in previewing program to make sure it looked OK
- finally import image file into document editor
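The middle of that workflow, from returned XML to .dot file, can be collapsed into a few lines of Python. This is a hedged sketch: the element names in the sample XML are an assumed shape for a JANE-style response, not the real schema:

```python
import xml.etree.ElementTree as ET

def jane_xml_to_dot(xml_text):
    """Parse a JANE-style XML result (assumed element names: <paper>
    containing <author> children) and emit a Graphviz .dot graph of
    co-authorship links."""
    root = ET.fromstring(xml_text)
    lines = ["graph coauthors {"]
    for paper in root.iter("paper"):
        authors = [a.text for a in paper.iter("author")]
        # connect every pair of authors on the same paper
        for i, a in enumerate(authors):
            for b in authors[i + 1:]:
                lines.append(f'    "{a}" -- "{b}";')
    lines.append("}")
    return "\n".join(lines)

sample = """<results>
  <paper><author>Smith J</author><author>Lee P</author></paper>
  <paper><author>Lee P</author><author>Patel R</author></paper>
</results>"""

print(jane_xml_to_dot(sample))
```

The resulting text, saved as `coauthors.dot`, can then be rendered with Graphviz (`dot -Tpng coauthors.dot -o coauthors.png`), which covers the remaining manual steps in one command. The point of the comparison stands either way: even scripted, this is several tools and a context switch, where the robots do it inline in the document.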
For my money the best question of the session came from AJ Cann. He asked us to put Wave next to other advances in technology and compare it. Was it like the jump from gopher to Mosaic? I found it quite hard to formulate a response to this question. Anil Dash may well have it right that Google Wave won't gain adoption, because it's not the web way. This is looking at the issue from the perspective of developers, but I think that where the promise lies is in bringing a bunch of APIs and web endpoints transparently to the user, through the document via the robot. What I would say now to Alan's question is that it is like the difference between switchboard operators and direct dial, but where the communication endpoints are programmatic interfaces on one end and humans on the other. At the moment we need to call an operator (a programmer or a web form) to create such a connection for us. There is still a lot of friction there. Though these mediations still have to be embedded in robots by programmers within Wave, as such entities become common, to the human user the ability to connect to programmatic endpoints on the web will become as simple as adding a participant to their document. That such connections exist will become increasingly expected.
It might do for APIs what the iPhone did for mobile apps. That's assuming that it all takes off, of course.