Five years of polling the computational chemistry community

At Nature Chemistry we all love Density Functional Theory and we all love polls – so what could be better than a poll on DFT? Marcel Swart, Matthias Bickelhaupt, and Miquel Duran have, for the last five years, been running a poll to find out which functionals the computational chemistry community likes or dislikes. The results of the 2014 poll are now out and we have a guest post from the three of them to explain a little more about it.

I’m sure we’ll be seeing this as a question on Family Fortunes/Family Feud soon: “We asked 100 people to name… a popular density functional”

Gavin

(Senior Editor, Nature Chemistry)

*******************************

Since 2010 we have been organizing an annual online popularity poll that probes the preferences of the computational chemistry community for Density Functional Theory (DFT) functionals. It all started with a presentation by Matthias Bickelhaupt (Feb. 2009) that showed the values of various chemical properties calculated using quite a number of different density functionals. Miquel Duran suggested, essentially, ‘averaging’ a number of these values, with appropriate weightings applied to reflect how ‘good’ the employed functionals were, thus obtaining a ‘consensus’ density functional result. This could then act as a measure of how well the computational chemistry community is doing when compared with state-of-the-art reference data.

In order to get the weightings needed for this procedure, we have held annual online polls in which people could indicate their preferences for a number of density functionals. The polls were announced on the Computational Chemistry List (CCL; a mailing list where people can ask for advice about any aspect of computational chemistry), on Twitter, Facebook, blogs, and so on, in order to reach the maximum number of participants. The aims of the poll were: (i) to probe the ‘preference of the community’, that is, to set up a ranking of preferred DFT methods; and (ii) to provide a compilation of the ‘de facto quality’ that this implies for the ‘average DFT computation’.
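The weighting idea can be sketched in a few lines of code. All numbers below (property values and vote counts) are invented purely for illustration; only the functional names come from the poll.

```python
# Sketch of the 'consensus' idea described above: average a property computed
# with several functionals, weighting each value by the functional's
# popularity in the poll. All numbers here are hypothetical.

def consensus_value(results, weights):
    """Popularity-weighted average of per-functional results."""
    total = sum(weights[f] for f in results)
    return sum(results[f] * weights[f] for f in results) / total

# Hypothetical reaction barriers (kcal/mol) from three functionals,
# and hypothetical poll vote counts used as weights.
barriers = {"PBE": 12.1, "PBE0": 14.0, "B3LYP": 13.2}
votes = {"PBE": 120, "PBE0": 100, "B3LYP": 80}

print(round(consensus_value(barriers, votes), 2))  # → 13.03
```

A functional that the community trusts more thus pulls the ‘consensus’ value towards its own prediction, which is exactly what the poll weightings are meant to capture.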

The DFT poll has led to what some might deem a polarization of the field, with some people clearly in favor (e.g. Steven Bachrach, Gernot Frenking, John Perdew, Henry Rzepa) and others clearly against. This year especially, there was a lively debate on the CCL mailing list in the days after the poll was announced (CCL June 1st entries, CCL June 2nd entries, CCL June 3rd entries). This may, or may not, be related to the fact that in 2013 we had to disqualify one functional that had fallen victim to a blatant attempt to bias the outcome of the poll. This has not happened again this year (because we switched to another survey provider).

In order to stress the motivation for holding the poll, we as organizers felt we needed to add a statement as well, since we “are simply monitoring what happens in the field of DFT and comment on how the choice of the community differs from (or agrees with) reliable reference data. In that way, we do exactly what should be done, namely ‘drive science through evidence and logic’ or maybe even ‘drive science back to evidence and logic’ (because, against all basic principles of science, the community often just follows blindly a fashion)”. And as one message nicely described: “Yes, it is not scientifically sound, epistemologically correct, platonically unsullied. But at least it is fun. We should appreciate fun in chemistry”.

The nice part of the popularity poll is that it opens the field to newbies, who through the poll gain a better understanding of which functionals are most popular within the field — a good starting point for those looking for the best functionals for their given problems. For instance, the rise of the wB97X-D functional is nicely reflected in this year’s results (moving up to 4th place after PBE, PBE0 and B3LYP), as was the fact that the winner of the first editions (PBE0) was largely unknown to many people. In the years after the first edition (which it ‘won’), the number of citations for the three PBE0 papers increased considerably, by at least 60% (see Figure 1); there is the original paper by Perdew and co-workers describing the rationale for using 25% Hartree–Fock exchange, and there are two separate papers that introduce the functional as PBE0, one by Ernzerhof and Scuseria and one by Adamo and Barone.

Figure 1. Normalized number of citations for PBE0 papers, before (2006-2010) and after (2011-2013) the first news-item of the DFT poll (100 = average number of citations for 2008-2010 for each of the three papers separately).

Courtesy of Marcel Swart

Given that this is the fifth edition, we asked several researchers in the field (in Sept. 2014) whether they were in favor or against the DFT poll.

Steven Bachrach: “Please feel free to quote me from the Wiley Interdisciplinary Reviews article [“It would be nice if we could somehow again reach some consensus regarding a uniform standard computational method that experts and non-experts could rely upon for most situations. A challenge I make here to the computational community is to try to reach an accord on establishing a standard methodology. Perhaps a conference could be called where leaders propose their best methods and after discussion, a vote yields a recommendation for the greater user community”] and from CCL [“I also think the poll has value in discerning trends, especially new functionals to appear on the list and ones that have fallen down or off”].

John Perdew: “The DFT popularity poll is somewhat like citation analysis: It measures (but in a different way) how well a functional has been received by a set of readers and users.  There are many reasons why some functionals are received better than others: accuracy, reliability, wide applicability, computational efficiency, well-founded construction, availability in standard codes, reputation of the functional and its authors, historical priority, novelty, and even hype.  The poll has to be seen as measuring all these things, and perhaps more. To the extent that the polled scientists use rational criteria, the results of the poll can point other scientists toward good or interesting functionals”.

Henry Rzepa: “I still think the context of any vote cast is absolutely crucial. Perhaps what the community needs to develop is a public set of conformance test sets of molecules, one for each type of property?”

Andreas Savin: “I must shamefully confess that I do not know about the DFT popularity poll.” (after having received more information): “I will not participate, as this poll is intended for people who apply DFT, and I do little in this direction, but I find it interesting. I am amused to see that B3LYP is not as popular as generally believed, and LDA has such a high rank. How does it compare to the number of citations?”

Gustavo Scuseria: “I am not in favor or against the poll. It is interesting though that we need a contest to determine what is popular and useful. A cacophony of functionals have mushroomed in recent years, and I am very much afraid that uncontrolled approximations and rampant empiricism have taken over DFT”.

An interesting addition was brought forward in the discussions this year by Henry Rzepa (June 1, 2014), who suggested that, in the future, the poll should be extended to enable participants to explain why they like or dislike a given functional. This interesting new feature will be added to next year’s edition: for each functional, participants will be able to indicate, for a number of properties, whether they love using the functional or rather dislike it. Rzepa proposed a number of properties (reaction barriers, normal mode analysis, NMR shieldings, etc.), which will be fine-tuned before the polling season opens again on June 1, 2015!

Marcel Swart, Matthias Bickelhaupt, and Miquel Duran

News and Views: Molecular motor speed limits

The July issue of Nature Chemistry features a paper from Stephen Meech, Ben Feringa and co-workers that looks at the ultrafast dynamics of a unidirectional molecular motor. Such motors work through a two-step process: enough is known about the thermally driven second step to improve its efficiency through molecular design, but much less is known about the light-driven first step, the power stroke. After the Meech and Feringa paper, however, we know quite a bit more!

The paper is also discussed in the issue in a News and Views article from R. J. Dwayne Miller. Professor Miller got so excited about the topic that he wrote more than we were able to publish (!), so we said that we would put his unabridged introduction up here. So it’s like ‘News and Views: The director’s cut’.

We can only put the unpublished intro up here, so if after reading this you want to hear the end of the story, the full News and Views article is here, and the original Article from Meech, Feringa and colleagues is here.

Gavin Armstrong

Senior Editor, Nature Chemistry

 *************************

Light-driven molecular motors: What are the quantum limits to work at the molecular level?

The motor has been the ‘engine of science’ for over two centuries, enabling humans to do more work than is possible given our limited anatomy. In a way, the motor gave us superpowers. A single person operating a machine powered by some form of motor or engine can do many orders of magnitude more work than was possible before its inception. Some of the modern-day marvels of engineering, such as spacecraft, enable us to do astronomically more work, literally carrying us into the heavens.

Based on the enormous importance of motors in driving the industrial revolution, it is natural to wonder what the fundamental limits are to the amount of work that a motor can do. It was precisely this question that led to one of the greatest achievements in science: the formulation of thermodynamics [1]. Here it has to be appreciated that the steam engine was developed through a series of successive steps, which can be traced from Savery’s first patent (1698), through engineering improvements by Newcomen, to further gains in efficiency with the development of the condenser by James Watt. Each key step led to an increase in output power and efficiency, even at a time when we did not know the origin of the very energy that drove it [1]. It was natural to wonder how much work could be extracted from motors, as each advance seemed to bring ever-increasing amounts of work to bear on a problem. Careful measurements by Joule established that energy can appear within a system as heat or work and that this energy is unconditionally conserved. These observations led to the first law of thermodynamics [2] and ruled out the possibility of perpetual motion machines ‘of the first kind’.

There were, however, still interesting conundrums regarding the apparent paradox of coupling engines or motors of different efficiencies together, which led to the possibility that one might be able to extract energy from the surroundings without requiring an energy source. These considerations led Carnot to one of the most brilliant examples of logic ever exercised: his deduction of the maximum amount of work that can be extracted from a system, an engine in this case, in thermal contact with its surroundings. This formulation led Clapeyron to the Second Law of Thermodynamics and the elimination of the possibility of perpetual motion machines ‘of the second kind’ [2,3].

Boltzmann’s microscopic derivation of the entropy and Nernst’s formulation of the Third Law of Thermodynamics put the connection between entropy and extractable work on a quantitative basis [3]. In parallel, the genius of Gibbs led to the formulation of the free-energy state function, which encompasses both enthalpic and entropic driving terms for a particular process and enables the prediction of the maximum amount of work that can be extracted from a system [3].

From these historical reference points, one can see that the motor as a conceptual construct has truly been an engine for advancement of science. The initial motivation was to maximize efficiency and to scale up motors to do ever increasing amounts of work. What about the opposite limit? How small can we make a motor?  The ultimate limit of course is to construct a motor on the molecular level. Are there different scaling relationships in terms of efficiency as we go to the molecular level?

At this point we should recognize that living systems long ago mastered the ability to make molecular motors, for the transport of proteins, motility, transport of charge etc. The protein assemblies that carry out various functions for the cell are marvels of molecular engineering. It is only recently that we have developed the tools to monitor the functions of these assemblies.

One of the most remarkable examples of a biological motor is the rotary ATPase motor protein. It has in fact been possible to determine directly that these motors operate very close to theoretical efficiency limits [4]. The degree of efficiency is all the more remarkable when one recalls that these systems function within stochastic limits. The energy a molecular motor exchanges through collisions with its surroundings at this scale is orders of magnitude greater than the power it generates to do work on the surroundings. For example, the collisional exchange of energy between a molecule and the surrounding bath molecules is on the order of kT at a collision rate of 10¹² s⁻¹, corresponding to a power dissipation rate of nanowatts [5]. In comparison, a typical turnover of a motor protein (with a 20 kT barrier), involving the conversion of a typical bond energy of 4×10⁻¹⁸ J per molecule, occurs on the millisecond timescale, such that only about 4×10⁻¹⁵ W is involved in carrying out the work relevant to function [5]. This is more than 5 orders of magnitude less than the power dissipated through stochastic fluctuations within the immediate surroundings of the molecular motor. Imagine trying to do work while being battered about by random forces that are orders of magnitude more powerful than your feeble attempts to move ahead.
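The back-of-envelope comparison above can be checked numerically. The assumed inputs (room temperature, a 10¹² s⁻¹ collision rate, a 4×10⁻¹⁸ J bond energy released once per millisecond) are those quoted in the text.

```python
# Numerical check of the stochastic-power versus motor-power comparison.
kT = 1.380649e-23 * 300            # thermal energy at ~300 K, in joules
collision_rate = 1e12              # collisions per second with the bath
dissipated = kT * collision_rate   # stochastic power exchange, W (~4e-9, nanowatts)

bond_energy = 4e-18                # J released per turnover
turnover_time = 1e-3               # s, millisecond timescale
motor_power = bond_energy / turnover_time  # useful power, W (4e-15, femtowatts)

print(dissipated)                  # nanowatt scale
print(motor_power)                 # femtowatt scale
print(dissipated / motor_power)    # ratio well above 1e5
```

The ratio comes out at roughly a million, consistent with the ‘more than 5 orders of magnitude’ quoted above.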

Let us consider the fundamental constraints on making molecular motors. First, thermodynamics is based on microscopic principles related to the thermal motion of molecules and atoms. There are no loopholes in the laws of thermodynamics at the nanoscale that enable higher theoretical efficiencies for molecular motors (one can, however, wonder about possible increases in effective efficiency as friction and its associated losses become ill-defined). Within Carnot’s depiction, a molecular motor immersed in its surroundings cannot sustain a temperature differential to harness thermal motion into directed motion and thus the execution of work on the surroundings. Simply put: no work is possible without the input of an energy source into the system.

Second, the structure of the system must involve an asymmetry, much like a ratchet, such that the motions activated by the energy source go into a given direction related to the function of the motor [5,6]. Think about all the engineering that goes into a combustion engine. A typical combustion engine has very rigid walls, stable to high operating temperatures (to maximize operating temperatures and thermal gradients), and pistons with a lubricant to reduce frictional losses, so that the energetic motions of the product gases from combustion lead to unidirectional displacement of the pistons. This displacement is converted into rotary motion to move an object or do work on the surroundings. The efficiencies of modern day gas engines are around 25% with diesel engines running at close to 50% efficiency [5].

In this context, think of the challenges at the molecular level. One has to construct a molecular system in which specific motions of the motor are driven over other loss channels by imposing a highly asymmetric potential on the reaction coordinate coupling the energy source to the motor’s functional motions. The problem is that molecules are not rigid like macroscale engines. Despite the many orders of magnitude smaller size of molecular motors relative to macroscale systems, there are many more uncorrelated, independent degrees of freedom, with motions comparable to the motor’s functionally relevant motions. Scaled up, the ride in a ‘molecular car’ would be almost unimaginably rough. Here it has to be appreciated that these other uncorrelated motions act as loss mechanisms in terms of efficiency. The fluctuation and dissipation processes leading to frictional (entropic) losses are comparable to the mechanized motions of interest. In this sense, frictional losses are actually more of a problem at the molecular scale than in a macroscale motor.

One way around this dilemma for molecular motors is to design the process so that the functional motion occurs faster than entropic losses, or in molecular terms ideally faster than intramolecular vibrational energy redistribution (IVR) within the molecular complex, so that all the energy goes into the designed motions.  This would give the highest possible efficiency. Here it has to be appreciated that we are talking about quantum speed limits to molecular reaction dynamics. For slower processes there will be energy losses. As an additional consideration, collisional exchange or intermolecular energy redistribution represents energy losses to the surroundings. This time scale defines the lower limit to the required speed of the “motorized” molecular motions or the time scale involved in barrier crossing for the key power strokes. This problem in optimization requires dynamical information on the relevant motions and the competing pathways for energy dissipation. In this respect, the work of Meech, Feringa and colleagues is significant as it provides the first direct dynamical information on the primary motions of a synthetically designed molecular motor [7].

References

1.      M. Kerker, in Technology and Culture Vol. 2, No. 4 pp. 381-390 (Wayne State Univ. Press, August 1961); https://www.jstor.org/stable/3100893.

2.      John Hudson, The History of Chemistry (The MacMillan Press, London, 1992), pp. 214-216.

3.      W. J. Moore, 4th Ed Physical Chemistry (Prentice-Hall, Englewood Cliffs, New Jersey, 1972), pp. 77-109.

4.      H. Itoh, A. Takahashi, K. Adachi, H. Noji, R. Yasuda, M. Yoshida and K. Kinosita, Nature 427, 465–468 (2004); R. Yasuda, H. Noji, K. Kinosita and M. Yoshida, Cell 93, 1117–1124 (1998).

5.       A. Coskun, M. Banaszak, R. D. Astumian, J. F. Stoddart and B. A. Grzybowski, Chem. Soc. Rev. 41, 19–30 (2012).

6.       R. D. Astumian, Science 276, 917–922 (1997).

7.       J. Conyard et al., Nature Chem. doi:10.1038/nchem.1343 (2012).

***************************

To find out more, read the rest of the News and Views article here, and the original Article from Meech, Feringa and colleagues here.

Of polemics and progress

As Stuart posted last week, the March issue is now live and it features a ‘web focus’, which is a small collection of articles related by a topical theme and brought together on their own special page on the Nature Chemistry website. The theme of this web focus is protein dynamics and we have two Perspectives (Good vibrations in enzyme-catalysed reactions and Taking Ockham’s razor to enzyme dynamics and catalysis) and an editorial covering the topic.

Taken from Glowacki et al.

It’s a somewhat contentious topic, in that many disagree on the effects that structural protein dynamics can have on the reactivity of enzymes. One of our Perspectives highlights evidence in support of such promoting effects and the other backs the stance that they are not required to explain enzyme reactivity.
 
Such disagreements between researchers are a common occurrence in science, and they can have both positive and negative effects on the topic under scrutiny. An abridged version of our editorial, which discusses this in more detail is below. The full text can be accessed here and is available FREE to all registered users. Please do join the discussion using the comments section below.

Gavin

Gavin Armstrong (Senior Editor, Nature Chemistry)

— — — — — — — — — — — — — — — — — — — —

Disagreements are common in science and can lead to better understanding, but must be handled carefully.

Science is, in essence, the pursuit of ‘truths’, but not all seemingly valid hypotheses are accepted as such. From the laws of planetary motion, to the wave–particle nature of light, and through to the recent suggestion of lifeforms that incorporate arsenic into their DNA [1], disagreements born of critical thinking have played a central role in answering important scientific questions.

In this spirit we have two Perspective articles [2,3] in this issue that take differing views on the significance that structural dynamics have for the reactivity of enzymes. It is a debate that has been building over the past decade [4,5] and centres on the possibility that the motions of enzymes could amplify the contribution that quantum mechanical tunnelling makes to their activity.

The fact that the conformational movements of proteins can be integral to their function is broadly accepted. They can be important to ligand binding and release, and in allowing access to a given active site through large-scale loop–lid movements. The debate arises when discussing the reaction step that the enzyme catalyses, the all-important transition from reactant to product. At this point, how important are the fast structural fluctuations of the protein, motions such as vibrations? Can they significantly promote a reaction or can enzyme reactivity be accounted for with a model based on transition-state theory, which does not consider such individual atomic motions?

A Perspective article from Sam Hay and Nigel Scrutton highlights the evidence in favour of such promoting motions [2], and an alternative view comes from David Glowacki, Jeremy Harvey and Adrian Mulholland, who take ‘Ockham’s razor’ to the problem [3].

These Perspectives [2,3] and recent contrary results [6,7] suggest that this debate is not likely to subside in the near future. The one thing that is agreed on, however, is that more direct experimental evidence is required if a significant role for enzyme dynamics is to be wholly accepted.

Protein dynamics is far from the only topic in science in which there are fundamentally different views taken by those actively studying it. And although such differences in opinion can be stimulating and drive advances in the field, they can also have the opposite effect, creating issues that impede progress.

The publication of articles in divided research fields can be very problematic for all involved: authors, referees and certainly editors. The benefits, and indeed the purpose, of peer review can be undermined by authors who create long lists of excluded reviewers and by referees who are either hypercritical or overly positive. Editors can find themselves in a situation where their initial choice of referees could dictate the fate of an article because of the chosen referees’ ‘allegiances’.

Although editors normally honour requests to exclude potential referees, in some cases, to ensure that a paper is competently refereed, this is not possible. To help editors in making such decisions, authors should provide details explaining why they have asked for a potential referee to be excluded; simply stating that there is a ‘conflict of interest’ is not enough in controversial fields.

Even when fundamentally disagreeing, referees should try to judge the technical aspects of research articles on controversial topics, and should thoroughly explain their subjective opinions on a paper’s possible significance. Such topics inevitably generate alternative interpretations of the same data, and without enough evidence to conclusively prove one over another, referees should not recommend the rejection of technically sound, well-analysed data simply because the author’s interpretation is incongruous with their own.

All practitioners of the scientific method know that disagreements can occur, but human foibles must not be allowed to hamper the communication of competently acquired results. Although progress in the study of enzyme dynamics has not been smooth, it is a relatively new field and further research is certainly needed if a consensus of opinion is to be reached. For now, we shall leave it to the reader to make their own mind up.

Fall MRS Meeting 2011: Analogies, highlights and trivia

I’ve spent the last week in, as Ros Daw described on Wednesday, a relatively balmy Boston, mooching around the halls of the Hynes Convention Center and the Sheraton diving in to whichever session of the Materials Research Society meeting took my fancy. Unfortunately, there’s now a very cold bite to the air in New England but thankfully I’m on my way home to the Old England.

It was my first MRS meeting but, being a bit of an ACS meeting veteran, I was expecting something very similar to that but smaller, like an MRS slider to the ACS Big Mac, if you will. And that’s exactly what I found: you have a convention center with lots of parallel sessions, a nearby hotel housing some more sessions, and an Exhibition Hall with lots of people trying to sell stuff. But (dropping the burger analogy for a scientific one), like the nanomaterials discussed in many of the sessions this week, because of confinement effects, the properties of a meeting are not linearly related to its size.

With a smaller meeting (6000 attendees rather than >10,000 seen at ACS meetings) comes the benefit of a smaller meeting space, which leads to a much more ‘intimate’ event. Intimate is probably not the right word with 6000 people involved but if you know someone here then you’re very likely to see them, which in my experience is not how something of the size of the ACS meeting works. It also makes it far easier to go to numerous sessions on a given morning or afternoon, which, given the diverse interests of academics these days, is a big benefit. So I think that this is a perfectly-sized meeting and I gather from the attendees that I met, who keep coming back, that they do too.

There have been a few highlights for me over the week. The symposium on ‘Organic photovoltaic devices’ was my default pick whenever I was unsure where to go: I was always likely to find something there to hold my interest.

During the meeting, Z.L. Wang (Georgia Tech) received an MRS Medal for his work on ZnO nanomaterials and gave an associated presentation. The goal of much of his ZnO nanowire research is to harvest mechanical energy, so that when we do stuff — walk around, work out, move our fingers to play sports games on an Xbox — those movements could be used to generate enough power to charge an iPhone or any other portable device. The nanowires are piezoelectric; that is, they generate a voltage when they are bent, and Wang has been working towards improving their efficiency to make them viable for various industrial applications.

This week I saw another take on the same problem when Tom Krupenkin from the University of Wisconsin-Madison discussed his recent work (published in Nature Communications) on using ‘reverse electrowetting’ to harvest energy. At this point I was going to give you a lovely description of how it works, but it seems Katharine Sanderson has already done it over at Nature News. So, very briefly: a conductive liquid, if placed on an electrode, can be deformed by charging the electrode surface. This improves the electrode’s wettability and allows the droplet to spread out better. This can also be done in reverse: if you are able to physically deform a droplet on the surface of an electrode (by movement), you can create a charge and thus power. Krupenkin was able to apply this principle to an array of 150 droplets and talked about the possibility of placing such generators into the heels of shoes. It was a nice talk and I recommend reading more at Nature News and Nature Communications.

I also enjoyed the presentation given by Paul Alivisatos very much. His talk marked his Von Hippel Award, the highest honour of the Materials Research Society, and was nicely balanced between anecdote and cutting-edge science. As a student at the turn of the century working on a completely different topic, I wasn’t particularly aware of the synthetic work of Alivisatos, but that soon changed when I started working for the Journal of Materials Chemistry at the Royal Society of Chemistry; every other paper I read involved the synthesis of nanoparticles, with chemists showing how it was possible to control their size or shape. Given that that was my introduction to the field, and that now you can dial up many different structures and sizes, it was nice to go back to the beginning and hear a few tales from when it wasn’t quite so easy (Alivisatos was actually warned off working with them by a theorist colleague!).

And so my one bit of chemistry trivia to give you all comes from Alivisatos. So you know those ‘nanocrystal molecules’ that Alivisatos and his colleagues made by joining nanoparticles together using DNA links? You know whose idea the DNA was? No? Well, it was Stanley Miller, of origin of life/amino acid fame! Alivisatos was asked to give a talk by UC Irvine students with the theme “what would you like to be able to do but can’t”. He mentioned the idea of linking nanoparticles together and said that they were working on some organic compounds to do just that. Miller was in the audience and apparently put his hand up and said they should try DNA. The rest, as they say, is history. I just looked at the Letter in Nature and there is indeed an acknowledgment to S. Miller.

It might be a little too soon for me to go to the San Fran MRS meeting next Spring but I’ll certainly be thinking about returning next Fall.

Gavin

Gavin Armstrong

Senior Editor

Nature Chemistry

Keeping up with the journals

[This post is an abridged version of the editorial in the September 2011 issue — the full text can be accessed here, available for free to all registered users. We welcome feedback on our editorials in the comments section below.]

With more and more scientific articles and journals being published, how can you effectively keep abreast of new research relevant to your own projects?

The ever-increasing number of chemistry-related journals and articles has been discussed and debated for years. Usually the focus falls on three issues: the increase in the overall number of articles, the increase in the number of (usually more specialized) journals, and the fragmentation of results by researchers to maximize their number of publications. The first issue is easy to explain: there are simply more scientists now and they all depend critically upon the publication of their work. Few scientists would argue against more science and this issue seems here to stay.

The increase in the number of specialized journals is a more contentious matter. In 1973, a group of eleven concerned chemists lamented the “recent proliferation of journals”. They argued that “the literature should be so constructed as to deter trends towards overspecialization, and should foster communication among chemists working in different areas”.

The sentiment about improving interdisciplinary communication is admirable, but the general growth in the number of publications makes it unrealistic to expect researchers to keep up with current studies by reading only a few select journals. Compartmentalization was an inevitability that has some sound logic behind it — researchers can read specialist journals knowing that they will find papers of interest and can publish in them assured that their peers will be more likely to see their work.

With the literature now so vast, keeping abreast of what is going on in a given scientific field has become a real challenge, but remains an important aspect of practising cutting-edge science. Not knowing about a published paper relevant to your research can have detrimental consequences when trying to get published or funded.

To keep track of the literature, the Nature Chemistry editors all use RSS feeds with a feed reader that allows papers to be shared among the team. You subscribe to the feeds of journals and when they publish an article the feed is updated. This might sound just like an e-mail alert or like browsing the journal website, but RSS feeds are far more straightforward to organize, track and search.
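For the curious, the RSS format that makes all this possible is simple enough to parse with a few lines of code. The sketch below (Python, standard library only; the journal feed snippet is invented for illustration) pulls the title and link of each article out of a feed, which is essentially all a feed reader does before organizing the results:

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 snippet standing in for a real journal feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Journal</title>
    <item>
      <title>A new route to widgets</title>
      <link>http://example.com/articles/1</link>
    </item>
    <item>
      <title>Widgets revisited</title>
      <link>http://example.com/articles/2</link>
    </item>
  </channel>
</rss>"""

def list_articles(feed_xml):
    """Return (title, link) pairs for every <item> in an RSS feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in list_articles(SAMPLE_FEED):
    print(f"{title} -> {link}")
```

In practice a feed reader fetches each journal's feed URL on a schedule and runs something like the above, flagging any items it has not seen before.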

There are other online tools that can be used in a similar way. Twitter, as a rapid online information exchange, is a great way of keeping up to date with news of more general scientific interest, but unless your research community is actively using it to swap interesting papers, it is much less useful for keeping track of more specialized areas. Online reference-management programmes such as Connotea and Mendeley also have the facility to share articles. The website ‘Faculty of 1000’ provides the biomedical community with a place to find papers that other researchers find interesting; it relies on academics identifying and evaluating articles from the literature and can, quite quickly, give an idea of which papers are piquing the interest of their peers. An equivalent in the chemistry community would be most useful.

With a little organization and some useful online tools, the apparently daunting task of keeping up to date can be achieved: like eating an elephant, it has to be done one bite at a time.

You can read the full editorial here (registration is free).

Gavin

Gavin Armstrong (Senior Editor, Nature Chemistry)

Revision notes

[This post is based on the editorial in the December issue — the full text can be accessed here, available for free to all registered users. We welcome feedback on our editorials in the comments section below.]

Revising a manuscript in response to the comments of referees should not be about doing the bare minimum to get a paper published. Addressing criticisms that are genuine and constructive can lead to much more compelling research articles.

An e-mail arrives in your inbox from the journal to which you sent your last research paper and it has a subject title that begins ‘Decision on manuscript xxx’. Your heart leaps as you quickly find the phrase ‘we are pleased to inform you’ in the first paragraph, but then it sinks as you scroll down through the referee reports…and keep scrolling…and keep scrolling…and keep scrolling. You finally reach the end of the e-mail and it seems as though, to answer all of the referees’ queries, you’ll need another three years, two more post-docs and a fresh pot of grant money.

This situation is not uncommon and the process of revising a manuscript has the potential to be a frustrating one — but if authors and referees are prepared to engage in a constructive dialogue (mediated by the editor), then it can be a rewarding experience that results in a much improved paper.

Peer review can — and should — play a significant role in improving not only the presentation, but also the rigour and quality of research reported in articles. A fresh pair of eyes looking over a research paper is likely to spot holes in logic or data that, if filled on revision, could significantly strengthen the conclusions drawn from a study. Aside from flaws, referees can also ask questions or make suggestions that help guide the future direction of a research project.

Armed with a list of suggestions from referees, an author must revise their manuscript and then convince the referees and editor that it is now ready for publication. To help those involved judge the changes made during revision, Nature Chemistry ask that authors go through them point-by-point in a letter written specifically for the referees. Trying to discuss all of the changes in a long-winded essay style can make it more difficult for the editor and referees to follow.

The editors understand that some referees may have unrealistic expectations as to what extra work is required before publication and also that sometimes there are genuinely no right or wrong answers — merely progressive scientific debate. The editors also appreciate that busy authors would prefer to make as few changes as possible, and even though carrying out all of the referees’ suggestions may not be required for publication, all authors are expected to take each technical and scientific concern seriously.

Those authors who choose not to carry out extra experimental work or data analysis as suggested by a referee must provide a compelling argument for why that is the case, convincing the reviewers that their conclusions are fully supported without the additional work. In cases where authors and referees disagree on the revisions required, it is the editor who is responsible for making the final decision.

As a closing comment it is worthwhile remarking that the ‘honesty’ involved in peer-review can sometimes be abrasive and hard to ignore as an author, but we very much advise both authors and referees not to personalise the process. Remaining polite and professional throughout, even if others involved are not, is unquestionably the best option and enables the review process to remain focused on the science.

Taste receptors, chemical kinetics and equilibrium

We recently published an interesting Thesis article by Bruce Gibb called “Life is the variety of spice” and have since received a comment that seeks to extend the ideas originally discussed.

Gavin Armstrong (Associate Editor, Nature Chemistry)


In an insightful article in the January 2010 issue of Nature Chemistry, Bruce Gibb proposed the addition of curry-making to undergraduate organic chemistry labs. Curry-making is a classic example of a practical aspect of chemistry (molecular gastronomy) that laymen tend to ignore. Initially the reagents (spices) are heated (fried) in oil so as to overcome their different activation barriers. After that, water is added to the mixture and boiled, driving the reaction towards equilibrium. There are many rate constants in the first step, which is one of the reasons that the ingredients must be heated stepwise at various temperatures. During the final reflux, multiple equilibria are established, and hence the concentrations of the different spices assume immense importance: a little on the higher side and the curry can become extremely spicy. In fact, the science is delightfully complex and it is astonishing that curry-making works more often than not.
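The kinetic intuition behind the stepwise frying can be made concrete with a toy calculation. The short Python sketch below uses the Arrhenius equation, k = A·exp(−Ea/RT), to show why a spice with a higher activation barrier needs a hotter pan before its chemistry gets going; the two "spices" and their parameters are invented purely for illustration:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# Invented parameters for two hypothetical spice reactions.
spices = {
    "cumin":   {"A": 1e6, "Ea": 50e3},  # lower activation barrier
    "mustard": {"A": 1e6, "Ea": 70e3},  # higher activation barrier
}

# Compare rates in boiling water (373 K) versus hot oil (453 K).
for T in (373.0, 453.0):
    for name, p in spices.items():
        k = arrhenius(p["A"], p["Ea"], T)
        print(f"{name} at {T:.0f} K: k = {k:.3e} s^-1")
```

Because the barrier sits in the exponent, the higher-barrier "mustard" reaction is slowed far more at the lower temperature, which is exactly why a cook adds different ingredients at different stages of heating.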

While seconding the author’s proposal, we also feel that one should consider adding part of the culinary class to the biochemistry curriculum. The author discusses the essential structural implications of the different curry ingredients and their mutual physico-chemical interactions while it is being prepared. Additionally, one needs to appreciate the biochemical interactions of the curry after consumption and how its different ingredients stimulate a diverse array of distinct receptors (often simultaneously) [Gerhold & Bautista, 2009]. Capsaicin, black pepper and garlic all stimulate the TRPV1 receptor [McNamara et al., 2005], whose activation leads to the typical burning sensation. Various oils and cloves stimulate the TRPV3 receptor (highly expressed in the nose) [Xu et al., 2006]. It is the combined downstream effect of these various taste and olfactory receptor stimulations that leads us to appreciate the flavor of curry coupled with the aroma and warmth of cloves.

It is interesting to note that this is not the only way that the TRPV receptors have been put to use in society. In eastern India, plans are underway to equip the police with ‘bhut jolokia’ [Liu & Nair, 2010 and Bosland & Baral, 2007] — the world’s hottest pepper — in an aerosol spray to disperse unruly mobs or immobilize rioters non-violently. Gibb noted that peppers evolved to produce capsaicinoids to ward off herbivores; today, these peppers are used by humans as ‘smoke bombs’ to keep wild animals at bay in remote forests.

Culinary science has for a long time been treated as an art by cooks around the world and they are mostly ignorant of the science lurking behind a good recipe. Most programs in gastronomy do little to emphasize its molecular aspects. While good food is certainly aesthetic, it is the chemists who can also appreciate the science behind it. While chemists might not become dedicated cooks, a basic culinary education does have implications in the real world; and trained individuals might cherish the opportunity to apply it in their daily lives and career. Thus there are multiple reasons why we would like to highlight the contribution made by biochemists in understanding culinary science.

Finally, a comprehension of the science behind the curry will certainly make for a better cook, and good food is a universal healer.

Arnab De [Department of Microbiology and Immunology, Columbia University Medical Center]

Subho Mozumdar [Department of Chemistry, University of Delhi, India]

Rituparna Bose [Department of Geological Sciences, Indiana University, Bloomington; e-mail: ribose@indiana.edu]

ACS San Francisco 2010: Questions time

The meeting is winding down a little and although the week has been good, I’m pleased because I’m ready to come home. These meetings are usually tiring but this one in particular seems to have really taken it out of me. Maybe the jetlag and the 11-hour flight play a role, but mostly it’s the long days and the fact that you never really switch off during the week.

This ACS meeting I’ve really struggled to find the time to go to that many presentations, so I’ve not really got much to say about the science that has been presented. The ones I have seen have been on the whole good, but people don’t tend to present their latest findings at ACS meetings – unlike in more closed meetings such as the Gordon conferences. So the best way to get the latest info is to just talk to people outside of the presentations. That’s why, before I came, I arranged to catch up with a handful of chemists who I wanted to chat to about various things: what they were up to in the lab at the minute, their opinion on our first 12 issues (everyone I spoke to likes what we’re doing – Yay!), what they’ve seen in the literature that’s exciting them, and so on.

And they always have a load of questions for me too. A few have cropped up in pretty much every conversation this week. Everyone wants to talk about impact factors. I can understand why, because they have (quite ridiculously) become, it seems, the most important metric to measure a journal and therefore a way to measure an academic by noting where they publish. To answer the questions that I keep being asked – we don’t really think about it, we haven’t got one yet, it comes in 2011 and it’s far too early to predict. (I jokingly tried to predict it earlier in the year, extrapolating using 2 (!) weeks of citation data, and came up with an impact factor of 140,000! I think I was a little out.)

Another question that comes up all the time: do you sit near the editors of the other physical science journals (Materials, Nano, Physics, etc…) and do you discuss papers? Yes, I sit just close enough to Peter Rodgers of Nature Nanotechnology to remind him that the pretty football Arsenal play isn’t going to win them the Premier League, but no, we don’t discuss papers that have been submitted. The journals based in London share the same office and we go for lunch and beers together (not at the same time!) but we are completely editorially independent and, in fact, rivals for certain papers that could fit the scope of several NPG journals.

There are other questions that get asked about our editorial processes and I think we’ll try to answer a few of them over the coming weeks and months on the blog.

I have a few more meetings so I’ll keep track of what I’m asked and keep you posted.

Gavin (Associate Editor, Nature Chemistry)

ACS San Francisco 2010: The week ahead

I’ve arrived safe and sound in sunny San Francisco for the ACS meeting. The journey here was trouble-free and actually quite pleasant. I managed to watch a film, read four manuscripts and see the stunning scenery of Greenland, the Rockies and San Francisco.

The meeting is underway and I spent the morning in a session dedicated to Bob Madix (hopefully I’ll blog more about it later) but no amount of coffee is letting me break through the jetlag wall that I’ve just hit. So I thought I’d skip the afternoon sessions and retire to my hotel room to send some emails and write a quick blog entry.

This is my third ACS meeting in as many years and I think that I’m learning how to cope with them. They’re crazy meetings in that there are thousands of people here, tens of parallel sessions and just a phenomenal amount of chemistry being fired at you from all directions. I do enjoy them but I’d be lying if I said it wasn’t a tough week. My plan for this meeting is to catch up with a few people over lunch and dinner, chat to whoever swings by the Nature stand (booth #1000 – come say hi!) and maybe take in fewer sessions than I usually do i.e. I might miss a session here and there unlike in previous years when I’ve been to all sessions possible. It does feel a little like cutting class though!

I’ve got a very loose plan for which sessions I’m going to attend but the actual planning of where to go is a bit of a nightmare. To help out those of you who are unsure about who you should see speak, I thought I’d mention a few sessions that I think are worth a visit. Firstly, there is a great line-up of speakers in a session on Tuesday that has been organised by a group of students (The Graduate Student Symposium Planning Committee – GSSPC). They’ve organized a symposium called ‘Chemistry and the developing world’ and have some top chemists giving presentations (Angela Belcher, Paul O’Brien and Sean Cutler for example). For more info see here.

Secondly, one of our very recent authors, Randy Goldsmith, is giving a presentation tomorrow and he has the unenviable task of speaking at the same time as the Presidential event. The research he’ll be talking about was published in, and is in fact on the cover of, our current issue and it is very cool (single-molecule spectroscopy of a fluorescent protein) so do go along and see him talk.

I’ll also be in the Physical Chemistry Awards Symposium which is always great. This year’s eclectic bunch of presentations sees, amongst other topics, some atmospheric chemistry from Kim Prather, some computational nanoscience from George Schatz and a presentation on the dynamics of electronic excited states from Peter Rossky.

And also don’t miss Nature’s very own Jason Wilde (he’s the boss of my boss!) speaking on Tuesday in a session on the Future of Scholarly Communication.

Make sure you follow our tweets from @naturechemistry and also the Twitter tag #acs_sf during the week!

Gavin (Associate Editor, Nature Chemistry)

More Nobel reflections

In December, we published an editorial called “Questioning chemistry” that discussed the definition of chemistry on the back of the recent awarding of the Nobel prize in chemistry for research into the “structure and function of the ribosome”. It was further discussed here on the Sceptical Chymist.

We have since received a comment from Dr. Paolo Ghigna at the University of Pavia giving his views on the apathy of some chemists to the award. These can be found below and may just spark off a little more debate on the subject.

Gavin Armstrong (Associate Editor, Nature Chemistry)


The Editorial in the December 2009 issue of Nature Chemistry remarked on the apathy of the chemical community for the 2009 Nobel prize.

Of course, such a debate would entail the definition of ‘chemistry’, and the editorial defines chemistry as ‘the study of matter and its transformation’. Although it is true, as the editorial says, that “defining research topics is becoming increasingly difficult”, this definition is really too broad to be effective. On one side, elephants are pieces of matter, and during their lives, they go through transformations; on the other side, neutrinos are also pieces of matter that also transform. But no one would doubt the fact that the study of elephants’ lives is pertinent to biology, and that studying neutrino oscillations would be the business of physics.

We are then carried back to the question ‘What is chemistry?’. For sure, chemistry is a way of studying matter, but we also have to ask how chemists study matter, and what kind of matter they study. To answer this question, as is implicit in the editorial, we need to think about what is the focal point in chemistry classrooms. The large majority of the chemical community would agree that this is the notion of the ‘chemical reaction’: chemists are proud of their chemical intuition, that is, the ability to predict how a compound will react even in the absence of detailed kinetic and thermodynamic information.

A further step forward can be made simply by looking at the IUPAC definition of a chemical reaction: “a process that results in the interconversion of chemical species”. Now a definition of ‘chemical species’ is required. Looking again at IUPAC, one finds that a ‘chemical species’ is “an ensemble of chemically identical molecular entities that can explore the same set of molecular energy levels on the time scale of the experiment. The term is applied equally to a set of chemically identical atomic or molecular structural units in a solid array”. Note how, with this definition, questioning what is pertinent to chemistry does not involve problems of length scale: any crystal of rock salt belongs to a chemical species, is usually much bigger than a ribosome, and is not a molecular species (a point chemists tend to forget; CaF2, for example, was once named ‘molecule of the week’ on the ACS website).

Probably, one of the reasons for the debate could be that chemists do not recognize a ribosome as a “chemical species”: a ribosome does not fulfill the IUPAC definition. Or, to look at the flip side of the coin, can we apply our chemical intuition to a ribosome?