UK Royal Society still trails US National Academy of Sciences in female members

Women in science

Credit: NAS/Royal Society

The Royal Society — the United Kingdom’s national science academy — today announced that it has elected 50 new fellows, who get to put the prestigious letters ‘FRS’ after their name. Among the array of top scientists this year are UK chief medical officer Sally Davies and climate economist Nicholas Stern. Steven Chu — Nobel physics laureate and former US Energy Secretary — is one of ten new foreign members.

Just 14% of the new fellows are women, meaning that the Royal Society still lags behind the US National Academy of Sciences (NAS) in terms of female representation. The NAS had 21% women among the 84 newly elected members it announced two days ago, and consistently elects a higher proportion of women than its UK counterpart (see chart).

The Royal Society says that its selection broadly mirrors the proportion of women put forward for membership. Indeed, according to statistics e-mailed to Nature by a spokesperson, women made up 14% of new nominations in 2014 and currently make up 11% of the total pool of candidates for election (once nominated, candidates remain eligible for election for seven years). However, women now make up 17% of UK professors in science, technology, engineering and mathematics (STEM) disciplines, a parliamentary report noted in February. (The Royal Society prefers to quote 2010 figures from the UKRC, which found that 9.3% of full-time professors in STEM subjects were women, a figure it says is “the best match for the pool of people who are likely to be elected as Fellows”.)

The National Academy of Sciences keeps its election process confidential, but the United States overall has a slightly healthier proportion of women in the senior echelons of science. The US National Science Foundation estimates that as long ago as 2010, women made up 21% of full science professors. (More up-to-date figures specific to the sciences are not available; see Nature’s special issue ‘Women in Science’ for more details.)

The Royal Society is aware of the issue. A spokesperson points out that in 2012–13, the society ran a project called Mobilising Research Fellows to improve the diversity of candidates for fellowship and academy medals, and in particular to improve the pool of female candidates. In 2013 it set up four ‘temporary nominating groups’ to pick out people in areas where the fellowship was under-represented, which included female candidates and industry.

Indeed, the academy found room to elect some industry-oriented fellows this year, including Andrew Mackenzie, the chief executive officer of mining giant BHP Billiton, and Michael Lynch, the computer-science and technology entrepreneur who co-founded the software business Autonomy. (In 2011, Hewlett-Packard (HP) bought Autonomy for more than US$11 billion, but the deal rapidly soured. HP wrote off $8.8 billion from the firm’s value and accused the British firm’s senior management of unlawful accounting and other misrepresentations — allegations still under investigation by financial authorities.)

As for the new NAS members, they include nanoscientist Fraser Stoddart (Scottish-born and already a fellow of the Royal Society, but now at Northwestern University in Illinois), and David Shaw, the former hedge-fund magnate who gave up his financial career more than a decade ago and now uses supercomputers to simulate protein folding. New foreign members include Japanese Nobel chemistry laureate Ei-ichi Negishi and Danish palaeobiologist Eske Willerslev.

How to make graphene in a kitchen blender

Graphene

Atomic-resolution scanning transmission electron microscope image of part of a nanosheet of shear-exfoliated graphene. Credit: CRANN/SuperSTEM

Don’t try this at home. No really, don’t: it almost certainly won’t work and you won’t be able to use your kitchen blender for food afterwards. But buried in the supplementary information of a research paper published today is a domestic recipe for producing large quantities of clean flakes of graphene.

The carbon sheets are the world’s thinnest, strongest material; electrically conductive and flexible; and tipped to transform everything from touchscreen displays to water treatment. Many researchers — including Jonathan Coleman at Trinity College Dublin — have been chasing ways to make large amounts of good-quality graphene flakes.

In Nature Materials, a team led by Coleman (and funded by the UK-based firm Thomas Swan) describe how they took a high-power (400-watt) kitchen blender and added half a litre of water, 10–25 millilitres of detergent and 20–50 grams of graphite powder (found in pencil leads). They turned the machine on for 10–30 minutes. The result, the team reports: a large number of micrometre-sized flakes of graphene, suspended in the water.

Coleman adds, hastily, that the recipe involves a delicate balance of surfactant and graphite, which he has not yet disclosed (this barrier dissuaded me from trying it out; he is preparing a detailed kitchen recipe for later publication). And in his laboratory, centrifuges, electron microscopes and spectrometers were also used to separate out the graphene and test the outcome. In fact, the kitchen-blender recipe was added late in the study as a bit of a gimmick — the main work was done first with an industrial blender (pictured).

Blender

Five litres of suspended graphene (in an industrial blender). Credit: CRANN.

Still, he says, the example shows just how simple his new method is for making graphene in industrial quantities. Thomas Swan has scaled the (patented) process up into a pilot plant and, says commercial director Andy Goodwin, hopes to be making a kilogram of graphene a day by the end of this year, sold as a dried powder and as a liquid dispersion from which it may be sprayed onto other materials.

“It is a significant step forward towards cheap and scalable mass production,” says Andrea Ferrari, an expert on graphene at the University of Cambridge, UK. “The material is of a quality close to the best in the literature, but with production rates apparently hundreds of times higher.”

The quality of the flakes is not as high as that of the sheets that Andre Geim and Kostya Novoselov of the University of Manchester, winners of the 2010 Nobel Prize in Physics, famously isolated using Scotch tape to peel single layers off graphite. Nor are the flakes as large as the metre-scale graphene sheets that firms today grow atom by atom from a vapour. But outside of high-end electronics applications, smaller flakes suffice — the real question is how to make lots of them.

Although hundreds of tons of graphene are already being produced each year — and you can easily buy some online — its quality is variable. Many of the flakes on sale are full of defects or smothered with chemicals, affecting their conductivity and other properties, and are tens or hundreds of layers thick. “Most of the companies are selling stuff that I wouldn’t even call graphene,” says Coleman.

The blender technique produces small flakes some four or five layers thick on average, but apparently without defects — meaning high electrical conductivity. Coleman thinks the blender induces shear forces in the liquid sufficient to prise off sheets of carbon atoms from the graphite chunks (“as if sliding cards from a deck”, he explains).
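The shear mechanism described above can be put in rough numbers. As a minimal back-of-the-envelope sketch (the rotor dimensions and speed below are illustrative assumptions, not values from the paper), the shear rate in a rotating mixer can be estimated as blade tip speed divided by the gap between the blade and the vessel wall:

```python
import math

def shear_rate(rotor_diameter_m, rpm, gap_m):
    """Rough shear-rate estimate for a rotating mixer:
    blade tip speed divided by the blade-to-wall gap."""
    tip_speed = math.pi * rotor_diameter_m * (rpm / 60.0)  # metres per second
    return tip_speed / gap_m  # reciprocal seconds

# Illustrative values (assumed, not taken from the study):
# a 3-cm rotor spinning at 5,000 rpm with a 1-mm gap.
rate = shear_rate(0.03, 5000, 0.001)
print(f"Estimated shear rate: {rate:.0f} per second")
```

At these assumed settings the estimate lands in the thousands of reciprocal seconds, which illustrates why an ordinary high-power blender can plausibly generate the strong shear needed to slide graphene layers off graphite.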

Kitchen blenders aren’t the only way to produce reasonably high-quality flakes of graphene. Ferrari still thinks that using ultrasound to rip graphite apart could give better materials in some cases. And Xinliang Feng, from the Max Planck Institute for Polymer Research in Mainz, Germany, says that his recent publication, in the Journal of the American Chemical Society, reports a way to produce higher-quality, fewer-layer graphene at higher rates by electrochemical means. (Coleman points out that Thomas Swan have taken the technique far beyond what is reported in the paper.)

As for applications, “the graphene market isn’t one size fits all”, says Coleman, but the researchers report testing it as the electrode materials in solar cells and batteries. He suggests that the flakes could also be added as a filler into plastic drinks bottles — where their added strength reduces the amount of plastic needed, and their ability to block the passage of gas molecules such as oxygen and carbon dioxide maintains the drink’s shelf life.

In another application altogether, a small amount added to rubber produces a band whose conductivity changes as it stretches — in other words, a sensitive strain sensor. Goodwin mentions flexible, low-cost electronic displays; graphene flakes have also been suggested for use in desalination plants and even condoms.

In each case, it has yet to be proven that the carbon flakes really outperform other options — but the new discoveries for mass-scale production mean that we should soon find out. At the moment, an array of firms is competing for different market niches, but Coleman predicts a thinning-out as a few production techniques dominate. “There are many companies making and selling graphene now: there will be many fewer in five years’ time,” he says.

European Commission report urges legal reform to help scientists text-mine research papers

European copyright law should change to help researchers use computer programs to extract facts and data from published research papers, legal experts have urged in a report (PDF) for the European Commission published today.

The recommendations come just as the UK government is about to pass laws mandating similar freedoms, and could loosen a restrictive legal environment that, researchers complain, has enabled subscription publishers to tightly control the way information can be harvested from online papers.


UK budget sees boosts for data science, graphene and cell therapy

British scientists already know that their public funding for the next two years is frozen at £4.6 billion (US$7.6 billion) annually, as it has been since 2010 (a freeze that has meant a 10% real-terms cut for the nation’s seven research-grants agencies over the past three years). So they did not expect anything transformative from today’s budget.

Right on cue, UK chancellor George Osborne continued his trend of throwing small crumbs of funding to science and technology — £222 million additional cash over the next five years — while at the same time failing to announce either long-term support for basic science or a strategy to develop UK industrial research, both of which are sorely needed, say science-policy experts.

The budget “follows the usual pattern,” tweeted Kieron Flanagan, who studies science policy at Manchester Business School, “a few small science and technology announcements given the name ‘institute’ or ‘centre’ to make them seem significant.”

“More than individual funding for ‘announceable’ projects we need a long-term funding pipeline and a strategy for investment in research to instil confidence in the security of our research ecosystem,” added Lesley Yellowlees, the president of the Royal Society of Chemistry.

Osborne said the government would provide £42 million over the next five years for a national institute, named after British computer scientist Alan Turing, which would study ‘big data’. He also announced £55 million over five years for a centre aimed at large-scale manufacturing of cell therapies for late-stage clinical trials, and £19 million to provide small companies with access to equipment for research and development of products based on graphene, the material for which UK-based researchers Andre Geim and Kostya Novoselov won the 2010 Nobel Prize in Physics. (The institutes are ‘Catapult’ centres, which are loosely modelled on Germany’s Fraunhofer Institutes and have the aim of stimulating links between universities and businesses.)

“We should break the habit of a lifetime and commercially develop [graphene] in Britain,” Osborne added. The United Kingdom and Europe have far fewer patents on the material than Asia and the United States, although a 10-year, €1-billion European push to commercialize graphene is bidding to change that, and the United Kingdom has already plunged £38 million into a National Graphene Institute at the University of Manchester.

These three fields — big data, regenerative medicine and graphene — are all areas that Osborne and science minister David Willetts have picked out repeatedly in speeches over the past 18 months as technologies in which the United Kingdom can be world leading.

Osborne also announced an extra £106 million for around 20 additional doctoral training centres — university-based hubs in which PhD students are taught in cohorts and given extra courses in networking, business and industrial development. In the United Kingdom, these centres are rapidly eclipsing conventional project-grant PhDs, in which students train under the wing of a single academic research group.

In the big picture, UK spending on research and development as a proportion of its economy is around 1.7%, well below the European average, although the country punches far above its weight in terms of top-cited research papers.

“The last four years of a flat cash science budget is biting scientists and engineers and squeezing universities,” said Sarah Main, the director of the London-based Campaign for Science and Engineering. By its calculations, the total research budget — including not just research grants, but also a boost to spending on buildings and facilities — will rise from £5.4 billion in 2010 to £5.9 billion in 2015. (In 2010, the government slashed spending on buildings and facilities, the ‘capital’ part of the science budget, by 40%, a reduction which it has subsequently redressed.)

The government is expected to announce a more comprehensive science and innovation strategy in the autumn — though this might be short-lived, as national elections are set for 2015.

The new dilemma of online peer review: too many places to post?

As online comments on newly published research become widespread, a new dilemma faces scientists wanting to enter the electronic fray: where to comment, and in what format for maximum impact?

That question faced Kenneth Lee, a researcher in regenerative medicine at the Chinese University of Hong Kong, when he wanted to post his critique of controversial stem-cell research. Lee’s research group has, like many others, tried and failed to replicate the work, published in Nature at the end of January. The studies are now under investigation, with some of their authors calling for retraction.

Lee had his pick of online fora. He could have posted on a closely watched stem-cell blog by researcher Paul Knoepfler at the University of California, Davis, which has been collecting tales of failed replications. He could have posted on PubPeer.com, a website where people can make anonymous comments about published papers, which has also seen large amounts of traffic discussing problems with images used in the studies. He could have posted on PubMed Commons, an initiative launched last October that allows scientists to comment on published abstracts on the PubMed website. He might have chosen any number of other venues — such as the news articles reporting on the controversy — or even his own website.

Instead, Lee picked ResearchGate, a social network that boasts more than 4 million signed-up researchers. And instead of just adding his comment linked to the publication’s page on the site, Lee posted a structured mini-review, with sections for ‘methodology’, ‘analyses’, ‘references’, ‘findings’ and ‘conclusions’, and including his own images.

This did not happen by accident. ResearchGate’s managers had noticed that Lee was chattering about his replications on their network, and an employee invited him to be the first to try out their new post-publication review format. “I was very reluctant at first, but she said I keep the copyrights, so I reluctantly agreed,” Lee says. “This is how everything came together. I think it is just fate.”

ResearchGate is calling its structured feedback format Open Review, and the co-founder of the site, Ijad Madisch, says that it is a feature he has long wanted to introduce.

“It looks interesting, and I am a supporter of innovative approaches to facilitate discussions among scientists in real time,” says microbiologist Ferric Fang of the University of Washington in Seattle. “A nice thing about the more structured format is that it encourages reviewers to be more systematic and to support their critiques. Short comments are OK but it is easier to make reckless statements in the absence of structure.” Fang adds that in this particular case, “I don’t expect the open review to have much impact on the paper since questions about its validity have already been raised”.

Asked why researchers should post their reviews on ResearchGate — as opposed to any other website — Madisch points out that his site has a community of verified scientists. “The content is free — anyone can read that from outside — but to contribute, you need to be affiliated with an institution that does research, so the quality is high,” he says. “I think Kenneth decided to publish on ResearchGate because he is part of an engaged community there. He wanted to get his replication out fast in order to warn others, and to get feedback on his work — rather than, say, write a letter to the editor, which can come six months after an article is published, and may be completely detached from the study itself. If there is one central place where people go, post-publication peer review becomes more efficient for everyone,” he says.

Will a few hubs such as ResearchGate or PubPeer.com dominate post-publication peer review? Or will online comments look more like a scattered hodgepodge of reviews, comments and discussions across websites unlinked to original publications? And if so, can search functions tie the thicket together? To these questions, Madisch has a simple answer: “I don’t know where this will end, but I do know it will be really big.”

Lee says he would still like to publish his results in a journal, so that his students get the credit they deserve for their efforts. He says he doesn’t know whether his work posted on ResearchGate could be considered a citable object in itself. “But it has already been cited on the Wall Street Journal, BBC and Boston Globe, so the impact is really far reaching,” he notes.  “The most important thing is that the finding is fairly and accurately reported so that other researchers can decide whether to use their valuable resources to continue pursuing the study.”

Online post-publication peer review, in the fuller sense that Lee has performed it, is unlikely to be common, says Fang. “Given the amount of time it takes to read and carefully review a paper, I suspect that the papers selected for discussion are going to be limited to very high-profile work about which readers have concerns. After all, there are something like a million new papers published each year and the average scientist reads only about 20–25 papers each month,” he says.

Elizabeth Iorns, chief executive of Science Exchange, and an advocate for efforts to reproduce published scientific research, agrees. She points out a subtlety in the way scientists have rushed to replicate the findings. Rather than, like Lee, acting as post-publication reviewers seeking to check the paper, she says, researchers are instead trying to adopt the method for their own laboratories, and so often are not performing exact replications of the original work.

“What we have learned is that researchers don’t generally want to perform confirmatory replication studies of other researchers’ findings,” she says.

Publisher reacts to fake-paper-gate

It’s general practice in research publishing to issue retractions for papers that must be withdrawn. But what to do when the papers in question are not merely flawed science, but utter gibberish generated by a computer program?

Springer found itself tackling this unusual situation after Nature News revealed on Monday that it had published 16 fake articles as conference proceedings.

Its solution, in a statement today: “We are in the process of taking the papers down as quickly as possible. This means that they will be removed, not retracted, since they are all nonsense. A placeholder notice will be put up once the papers have been removed.”

Springer adds that it is reviewing its procedures to find out what happened. “When flaws are detected by us, or brought to our attention by members of the scientific community, we aim to correct them transparently and as quickly as possible,” the publisher says.

But the US Institute of Electrical and Electronics Engineers (IEEE), which has published more than 100 fake papers in a variety of conference proceedings, took a different route. It wiped article records from its database last year, and again this year, without making that fact clear to subscribers.

In statements made to Nature News last week, the IEEE said: “It was brought to our attention over a year ago that there might have been some conference papers published in our IEEE Xplore digital library that did not meet our quality standards. We took immediate action to remove those papers, and also refined our processes to prevent papers not meeting our standards from being published in the future.”

The issue first came to light when French computer scientist Cyril Labbé, who detected the fakes, told the IEEE of a batch of 85 nonsense papers in 2012. It subsequently removed them from its database without public comment. Readers attempting to access those articles on the IEEE website reach only a “page not found” notice, with no placeholder statement acknowledging their withdrawal.

Labbé informed the IEEE of a further batch of fake papers in December 2013; but for two months, the publisher left these papers online. After being contacted last week by Nature News, the IEEE removed this second batch of papers from its database. Again, only a “page not found” notice greets curious visitors.

A list of papers the IEEE has removed is posted here: IEEE-wiped-articles.pdf. (Some duplicates can be found elsewhere on the internet).

AAAS announces open-access journal

The publisher of the august journal Science is to launch its first open-access journal in early 2015. The non-profit American Association for the Advancement of Science (AAAS) announced the new online-only broad-discipline journal, to be called Science Advances, today ahead of the AAAS annual meeting in Chicago, Illinois.

“Science Advances would aspire to uphold editorial standards on a par with journals such as Nature Communications or PNAS [not open access], in terms of the quality of papers,” says Marcia McNutt, the editor-in-chief of Science.

That distinction suggests that the AAAS intends to place a quality bar on papers being published in its new journal, and that it will not adopt the lucrative model of open-access journals that publish any papers that peer reviewers deem scientifically sound.

The journal will be funded by up-front fees paid per article — a common open-access business model — which AAAS executive publisher and chief executive Alan Leshner says would be “within industry norms and competitive with similar journals”. (Nature Communications charges between US$4,800 and $5,200 for publication). It will also, as Leshner and McNutt explain in an editorial published today in Science, be staffed by researchers, rather than a separate team of professional editors.

Science Advances will enter a very crowded, and fast-growing, open-access publishing market. Almost every major publisher and society now has a thriving open-access business — helped along by national and institutional mandates to publish open-access research, and by the squeeze on library subscription budgets which has meant that new subscription journals find it hard to break into the market.

Some 11% of all research papers were published in immediate-open-access journals in 2013 (according to data from Thomson Reuters’ Web of Science), up from a mere 2% in 2002.
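Those two data points imply rapid compound growth. As a minimal sketch, using only the shares quoted above, the implied average annual growth rate of the open-access share works out as:

```python
# Shares of papers in immediate-open-access journals, from the figures above
start_share, end_share = 0.02, 0.11   # 2002 and 2013
years = 2013 - 2002

# Compound annual growth rate implied by the two data points
cagr = (end_share / start_share) ** (1 / years) - 1
print(f"Implied annual growth in open-access share: {cagr:.1%}")
```

That is, the open-access share of the literature grew by roughly a sixth each year over the period, on this simple compounding assumption.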

US$21 million awarded to delighted scientists in glitzy ceremony

In a ceremony hosted by actor Kevin Spacey and featuring a live performance from singer Lana Del Rey, six biologists and two physicists took home a combined US$21 million last night at NASA’s Ames Research Center in Mountain View, California. It was the latest tranche of science megaprizes sponsored by philanthropic billionaires.

Since internet entrepreneur Yuri Milner announced last year that he would dwarf the Nobels by giving out $3 million prizes to theoretical physicists, the ‘Breakthrough prizes’ (which have now branched out from fundamental physics into the life sciences) have awarded more than $90 million and gained additional sponsors, including the founders of Google, 23andMe and Facebook. And more prizes are on the way: at the end of the ceremony, Milner announced that next year there would be a new award for mathematics.

In theoretical physics, Michael Green, of the University of Cambridge, UK, and John Schwarz of the California Institute of Technology (Caltech) in Pasadena, shared $3 million for their work on quantum gravity and the unification of forces.

In life sciences, the winners — each of whom received $3 million — were James Allison of the University of Texas MD Anderson Cancer Center in Houston, for work on cancer immunotherapy; Mahlon DeLong of Emory University in Atlanta, for work on brain circuits that malfunction in Parkinson’s disease; Michael Hall of the University of Basel in Switzerland for discovering the protein kinase target of rapamycin (TOR); Robert Langer of the Massachusetts Institute of Technology in Cambridge for work on biomaterials and controlled drug-release systems; Richard Lifton of Yale University in New Haven, Connecticut, for work on hypertension; and Alexander Varshavsky of Caltech for work on intracellular protein degradation.

Green told the Guardian that he was “delighted and flattered” to have won. So far, the reaction from some scientists has been ambivalent: delighted at the attention and recognition, but unsure whether big prizes are the best way to promote research. Some of last year’s physics awardees have used their winnings to support funds for postdocs, PhD students and science teachers. 

Patent database of 15 million chemical structures goes public

The internet’s wealth of free chemistry data just got significantly larger. Today, the European Bioinformatics Institute (EBI) has launched a website — www.surechembl.org — that allows anyone to search through 15 million chemical structures, extracted automatically by data-mining software from world patents.

The initiative makes public a 4-terabyte database that until now had been sold on a commercial basis by a software firm, SureChem, which is folding. SureChem has agreed to transfer its information over to the EBI — and to allow the institute to use its software to continue extracting data from patents.

“It is the first time a world patent chemistry collection has been made publicly available, marking a significant advance in open data for use in drug discovery,” says a statement from Digital Science — the company that owned SureChem, and which itself is owned by Macmillan Publishers, the parent company of Nature Publishing Group.

Under the agreement, Digital Science retains use of the SureChem software; the company is being wound up because Macmillan wants to focus on serving researchers, not commercial clients such as drug firms, says SureChem’s co-founder, Nicko Goncharoff.

“We are delighted to take on the stewardship of this resource,” says John Overington, head of computational chemical biology at the EBI, which is part of the European Molecular Biology Laboratory in Hinxton, UK. “Scientists are accustomed to doing literature searches, but the patent literature is often where the real gems lie — especially in translational science,” he adds. Published papers lag the patent literature by about two years, he points out.

Overington says that the EBI plans to interlock information on chemical compounds from different public resources. For example, a search on a compound such as Pfizer’s Viagra (sildenafil) will reveal its presence in patents (from SureChemBL), as well as its interactions with potential protein drug targets (from databases such as the EBI’s ChEMBL, which catalogues experiments done on compounds).

Later, Overington hopes to apply SureChem software to extract structures mentioned in research papers, starting with open-access papers held in repositories such as Europe PubMed Central. But, he adds, reconstruction of chemical data from papers is harder, because structures are often not named or pictured explicitly, but only alluded to as variants on a common molecular skeleton.

Historically, chemists have not had a wealth of free online data, and have been accustomed to paying for information from private databases. SureChem released data on 10 million molecules into the public database PubChem last year, but the information was restricted: links to patents could be downloaded only one molecule at a time. The web’s resources of searchable public chemical data are fast expanding, however. “I think it’s a really exciting time for chemistry,” Overington says.

Researchers push for more funding as dementia cases rise

The number of people living with dementia around the world is now estimated at 44 million, up 22% from three years ago, according to a report released today by Alzheimer’s Disease International (ADI), a worldwide federation of Alzheimer’s associations.

The increase on the ADI’s previous finding is due at least in part to improved reporting of dementia prevalence in China and sub-Saharan Africa. And as people live longer, cases of dementia — a catch-all term describing the loss of memory, mental agility and understanding owing to neurodegenerative diseases such as Alzheimer’s — will rise to 76 million by 2030 and to 136 million by 2050, the ADI report says. “The current burden and future impact of the dementia epidemic has been underestimated,” it concludes.

The report ratchets up the pressure on funders to invest more into tackling dementia ahead of an 11 December summit in London, at which the World Health Organization and ministers from the G8 (Group of Eight) countries will discuss a global action plan on the condition.

“This is a once-in-a-generation opportunity to turn the tide on dementia,” Doug Brown, director of research and development at the Alzheimer’s Society, a charity based in London, told reporters at a briefing yesterday. “We need as much investment in dementia research as we have in cancer,” he said.

Indeed, despite well-publicized political commitments — the United Kingdom’s prime minister David Cameron launched a ‘dementia challenge’ in March 2012, and the US government set out plans for extra Alzheimer’s funding in May 2012 — levels of funding remain low.

In the United Kingdom, for example, the Alzheimer’s Society estimates that dementia costs the economy £23 billion a year (though mostly not in front-line medical expenses) — twice the burden of cancer. Yet public research funding amounts to only some £60 million a year, barely one-eighth of what is spent on cancer research. The problem is similar around the world, Brown says.
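A minimal sketch of the arithmetic behind those ratios (the cancer figures here are back-calculated from the multiples quoted above, not independently sourced):

```python
# Figures quoted in the text (UK, pounds per year)
dementia_cost = 23e9        # estimated economic burden of dementia
dementia_research = 60e6    # public research funding for dementia

# Back-calculated from the quoted ratios (assumptions, not sourced figures)
cancer_cost = dementia_cost / 2          # dementia's burden is "twice" cancer's
cancer_research = dementia_research * 8  # dementia funding is "one-eighth" of cancer's

# Research spend as a share of each disease's economic burden
print(f"dementia: {dementia_research / dementia_cost:.2%}")
print(f"cancer:   {cancer_research / cancer_cost:.2%}")
```

On these numbers, every pound of dementia burden attracts roughly one-sixteenth the research spending that a pound of cancer burden does — the disparity the funding campaigners are pointing at.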

Nick Fox, a neurologist who heads the Dementia Research Centre at University College London, says, more conservatively, that he hopes the G8 will double dementia funding in the next five years.

Drugs designed to fight Alzheimer’s disease have proved disappointing in clinical trials so far. But, says Fox, “some of the trials have been like trying chemotherapy for cancer when the patient is already in a care hospice,” given that Alzheimer’s starts to attack the brain up to a decade before symptoms such as memory loss appear.

In a new approach, at least four clinical trials are now planning to treat people who have not yet developed Alzheimer’s symptoms. One is a five-year trial of an antibody, crenezumab, which binds to fragments of neuron-damaging amyloid-β. The drug will be tested in people who carry a rare genetic mutation that makes them certain to get the disease. Another, the Dominantly Inherited Alzheimer’s Network study, will enrol patients with a possible familial risk for Alzheimer’s; a third, by the companies Takeda (based in Osaka, Japan) and Zinfandel Pharmaceuticals (based in Durham, North Carolina), hopes to test an experimental drug in people whose genetic makeup suggests elevated risk of Alzheimer’s; and a fourth, known as the A4 study, will treat people who show biomarker evidence of amyloid plaques in positron-emission tomography.

The ADI report adds that better care and timely diagnoses are important, too. And dementia is not just a disease of the well-off: though cases are concentrated in the richest and most demographically aged countries, 63% of people with dementia live in low- and middle-income countries where there is limited access to social services and support.