Roman Egypt was home to “a good citizenship” youth organisation 2,000 years ago


Following a study of over 7,500 ancient papyrus documents from Oxyrhynchos in Egypt, discovered more than a hundred years ago in a rubbish dump, researchers at the University of Oslo and the University of Newcastle have presented what is perhaps the most systematic study of childhood in Roman Egypt, according to the university’s website.

Among their discoveries? Some 2,000 years ago, Oxyrhynchos, a town of around 25,000 inhabitants, had a youth organisation, called a “gymnasium,” in which any free-born boy could enrol – slaves and girls were not allowed.

Somewhere between 10 and 25% of local Egyptian boys, as well as the Greek and Roman residents of Egypt, would have qualified, but it was typically members of affluent families in the higher tax classes who enrolled, according to an overview of the study released earlier this month by social historian Ville Vuolanto of the University of Oslo and April Pudsey of the University of Newcastle.

Enrollment in the gymnasium marked the transition to adulthood.

“It’s like putting together a jigsaw puzzle. By examining papyri, pottery fragments with writing on, toys and other objects, we are trying to form a picture of how children lived in Roman Egypt,” explains Vuolanto.

While well-off boys were part of the prestigious gymnasium, learning to be good citizens, others worked or entered what are termed “apprenticeship contracts,” mainly in the weaving industry. Either way, boys in ancient Egypt were not considered fully adult until they got married, usually in their early twenties. Most girls remained or worked at home, according to the study.

Slave children could also become apprentices; however, unlike “free-born” citizens, they lived with their owners, or “masters,” rather than with their parents during their apprenticeships. Vuolanto says that children as young as two were separated from their kin and sold as slaves.

“Little is known about the lives of children until they turn up in official documents, which is usually not before they are in their early teens,” notes the University of Oslo’s press release.

Lost in Translation — Chasing the Roots of Conditioned Fear Research

I’m currently attending the Winter Conference on Neural Plasticity in lovely St. Kitts & Nevis and I’ll be tweeting when I can from #wcnp12 when the Internet access in the room decides to cooperate.

Today’s opening session at the meeting was a historical perspective on selected topics in neural plasticity. I thought I’d share an interesting piece of history about one topic that has exploded in terms of research output over the last 20 years: conditioned fear. Michael Fanselow gave the lecture on the history of fear research and focused on the era prior to the exponential growth of the literature, sticking to 1920-1980. Here’s a graph from a very recent review simply noting the number of “fear extinction” papers in the literature (one small sub-field within this topic), just to give you a sense of how rapidly this field has grown:

Found on Google Images, not sure why it's in front of the paywall!!

I’ll do my best to channel Dr. Fanselow in the next few paragraphs:


The periodic table: matter matters

Cross-posted with permission of OUPblog.


Eric Scerri is a chemist and philosopher of science, author and speaker. He is a lecturer in chemistry, as well as history and philosophy of science, at UCLA in Los Angeles. He is the author of several books including The Periodic Table, Its Story and Its Significance, Collected Papers on the Philosophy of Chemistry and Selected Papers on the Periodic Table. His latest book, The Periodic Table: A Very Short Introduction, is published this week.

As far back as I can remember, I have always liked sorting and classifying things. As a boy I was an avid stamp collector. I would sort my stamps into countries, particular sets, then arrange them in order of increasing monetary value shown on the face of the stamp. I would go to great lengths to select the best possible copy of any stamp that I had several versions of. It’s not altogether surprising that I have therefore ended up doing research and writing books on what is perhaps the finest example of a scientific system of classification – the periodic table of the elements. Following degrees in chemistry I wrote a PhD thesis in the history and philosophy of science and specialised in the question of whether chemistry has been explained by quantum mechanics. A large part of this work dealt with the periodic table, the explanation of which is considered as one of the major triumphs of quantum theory, and the notion of atomic orbitals.

As I often mention in public lectures, it is curious that the great 20th-century physicist Ernest Rutherford looked down on chemistry and compared it to stamp collecting. But we chemists had the last laugh, since Rutherford was awarded the Nobel Prize for chemistry and not for his beloved field of physics.


In 2007 I published a book called The Periodic Table, Its Story and Its Significance, which people tell me has become the definitive book on the subject. More recently I was asked to write a Very Short Introduction to the subject, which I have now completed. Although I first thought this would be a relatively easy matter it turned out not to be. I had to rethink almost everything contained in the earlier book, respond to comments from reviewers and had to deal with some new areas which I had not developed fully enough in the earlier book. One of these areas is the exploration of elements beyond uranium or element number 92, all of which are of a synthetic nature.

At the same time, there has been a veritable explosion of interest in the elements and the periodic table, especially in the popular imagination. There have been iPad applications, YouTube videos, two highly successful popular books, people singing Tom Lehrer’s element song in various settings, as well as artists and advertisers helping themselves to the elegance and beauty of the periodic table. On the scientific side, elements continue to be discovered – or, more precisely, synthesised – and there are official deliberations concerning how the recently discovered elements should be named.


On November 4th the International Union of Pure and Applied Chemistry (IUPAC) officially announced that elements 110, 111 and 112 are to be known officially as darmstadtium (Ds), roentgenium (Rg) and copernicium (Cn). The names come from the German city of Darmstadt, where several new elements have been artificially created; Wilhelm Konrad Roentgen, the discoverer of X-rays; and the astronomer Nicholas Copernicus, who was one of the first to propose the heliocentric model of the solar system. Of the three names it is the last one that has caused the most controversy. Apart from honouring a great scientist, it was chosen because the structure of the atom, broadly speaking, resembles that of a miniature solar system in which the nucleus plays the role of the sun and the electrons behave as the planets do – an idea that originated, incidentally, with the work of Rutherford. Except that electrons don’t quite orbit the nucleus. One of the major discoveries to emerge from the application of quantum mechanics to the study of the atom has been the realisation that electrons do not follow planetary-like orbits around the nucleus. The electrons behave as much as diffuse waves as they do as particles, and as such they exist everywhere at once within so-called orbitals. The change in wording from ‘orbit’ to ‘orbital’ is a little unfortunate, since it does not begin to convey the enormous conceptual change from Rutherford’s solar system model to the quantum model.

Another controversial aspect of all the synthetic elements, which lie beyond the old boundaries of the periodic table of elements 1 to 92, is that they are radioactive and mostly very short-lived, which leads most people to think that making them is an enormous waste of time and resources. But such a view is a little short-sighted. Some of these elements have found important applications. Take element 95, americium, for example. It has managed to find its way into every modern household as a vital component of smoke detectors.

Or consider the element technetium, which has a far lower atomic number of 43 but which was first discovered in Palermo, Sicily, in 1937 after being artificially created in a cyclotron in Berkeley, California. Over the subsequent years technetium has found its way into every major hospital in the world and is used in a plethora of medical scanning procedures as well as for treating various medical conditions. It was later found that technetium occurs naturally on earth, but in absolutely minute amounts. This happens because technetium is a by-product of the natural decay of uranium and also because it is a by-product of the operation of nuclear reactors. The second of these sources provides macroscopic amounts of technetium, which allow scientists to study the chemistry of the element in great detail and to make many new and medically useful compounds. There have been entire conferences devoted to the chemistry and uses of technetium.

Nobody has yet found the means of producing macroscopic amounts of the most recently named elements, and they probably won’t, but their formation provides chemists and physicists with an excellent opportunity to refine theories on nuclei and atoms and new techniques with which to experiment upon them. Almost all matter is made of the elements, and that’s why the elements really matter to us, even the more exotic ones.

The Witches’ Sabbath

This week’s guest blogger is Manjit Kumar. Manjit’s book, Quantum: Einstein, Bohr and the Great Debate, is about the nature of reality and was shortlisted for the 2009 BBC Samuel Johnson Prize for Non-fiction. He writes and reviews regularly for a variety of publications, including The Guardian, The Independent, The Times and New Scientist. He used to edit a journal called Prometheus that covers the arts and sciences, and he was also the consulting science editor at UK Wired.

The first Solvay Conference on Physics, held in Brussels


Left to right, standing – Robert Goldschmidt, Max Planck, Heinrich Rubens, Arnold Sommerfeld, Frederick Lindemann, Maurice de Broglie, Martin Knudsen, Fritz Hasenöhrl, Georges Hostelet, Edouard Herzen, James Hopwood Jeans, Ernest Rutherford, Heike Kamerlingh Onnes, Albert Einstein, Paul Langevin. Seated – Walther Nernst, Marcel Brillouin, Ernest Solvay, Hendrik Lorentz, Emil Warburg, Jean-Baptiste Perrin (reading), Wilhelm Wien (upright), Marie Curie, Henri Poincaré.

In June 1911 Albert Einstein was a professor of physics in Prague when he received a letter and an invitation from a wealthy Belgian industrialist. Ernest Solvay, who had made a substantial fortune by revolutionizing the manufacture of sodium carbonate, offered to pay him one thousand francs if he agreed to attend a ‘Scientific Congress’ to be held in Brussels from 29 October to 4 November. He would be one of a select group of twenty-two physicists from Holland, France, England, Germany, Austria, and Denmark being convened to discuss ‘current questions concerning the molecular and kinetic theories’. Max Planck, Ernest Rutherford, Henri Poincaré, Hendrik Lorentz and Marie Curie were among those invited. It was the first international meeting devoted to a specific agenda in contemporary physics: the quantum.

Planck and Einstein were among the eight asked to prepare reports on a particular topic. Written in French, German, or English, the reports were to be sent out to the participants before the meeting and serve as the starting point for discussion during the planned sessions. Planck would discuss his blackbody radiation theory, while Einstein had been assigned his quantum theory of specific heat. Although Einstein was accorded the honour of giving the final talk, there was no room on the proposed agenda for a discussion of his light-quanta – better known these days as photons.

‘I find the whole undertaking extremely attractive,’ Einstein wrote to Walther Nernst, ‘and there is little doubt in my mind that you are its heart and soul.’ Nernst, with his love of motorcars, was more flamboyant than the staid Planck, but was just as highly respected – in 1920 he was awarded the Nobel Prize for chemistry for what became known as the third law of thermodynamics. A decade earlier, in 1910, he had become convinced that the time was ripe to launch a cooperative effort to try and get to grips with the quantum, which he saw as nothing more than a ‘rule with most curious, indeed grotesque properties’. Nernst put the idea to Planck, who replied that such ‘a conference will be more successful if you wait until more factual material is available’. Planck argued that ‘a conscious need for reform, which would motivate’ scientists to attend the congress was shared by ‘hardly half of the participants’ envisaged by Nernst. Planck was sceptical that the ‘older’ generation would attend or would ‘ever be enthusiastic’. He advised: ‘Let one or even better two years pass by, and then it will be evident that the gap in theory which now starts to split open will widen more and more, and eventually those still remote will be sucked into it. I do not believe that one can hasten such processes significantly, the thing must and will take its course; and if you then initiate such a conference, a hundred times more eyes will be turned to it and, more importantly, it will take place, which I doubt for the present.’

Undeterred by Planck’s response, Nernst convinced Solvay to finance the conference. Interested in physics, and hoping to address the delegates about his own ideas on matter and energy, Solvay spared no expense as he booked the Hotel Metropole. In its luxurious surroundings, with all their needs catered for, Einstein and his colleagues spent five days talking about the quantum and, as Lorentz said in his opening remarks, the reasons why the ‘old theories do not have the power to penetrate the darkness that surrounds us on all sides’. However, he continued, the ‘beautiful hypothesis of the energy elements, which was first formulated by Planck and then extended to many domains by Einstein, Nernst, and others’ had opened unexpected perspectives, and ‘even those who regard it with a certain misgiving must recognize its importance and fruitfulness.’

‘We all agree that the so-called quantum theory of today, although a useful device, is not a theory in the usual sense of the word, in any case not a theory that can be developed coherently at present,’ said Einstein. ‘On the other hand, it has been shown that classical mechanics…cannot be considered a generally useful scheme for the theoretical representation of all physical phenomena.’ Whatever slim hopes he harboured for progress at what he called ‘the Witches’ Sabbath’, Einstein returned to Prague disappointed at having learnt nothing new. ‘The h-disease looks ever more hopeless,’ he wrote to Lorentz after the conference.

Nevertheless, Einstein had enjoyed getting to know some of the other ‘witches’. Marie Curie, whom he found to be ‘unpretentious’, appreciated ‘the clearness of his mind, the shrewdness with which he marshalled his facts and the depth of his knowledge’. During the congress it was announced that she had been awarded the Nobel Prize for chemistry. She had become the first scientist to win two, having already won the Physics prize in 1903. It was a tremendous achievement that was overshadowed by the scandal that broke around her during the congress. The French press had learned that she was having an affair with a married French physicist. Paul Langevin was another delegate at the congress and the papers were full of stories that the pair had eloped. Einstein, who had seen no signs of a special relationship between the two, dismissed the newspaper reports as rubbish. Despite her ‘sparkling intelligence’, he thought Curie was ‘not attractive enough to represent a danger to anyone’.

The Solvay Congress was the end of the beginning for the quantum. It had dawned on physicists that the quantum was here to stay, though they were still struggling to learn how to live with it. When the proceedings of the conference were published, they brought to the attention of others, not yet aware of or engaged in the struggle, what an immense challenge living with the quantum would be. The quantum would again be the focus of attention at the fifth Solvay conference in 1927. What happened in the intervening years is, as they say, history.

The Living Dinosaur

This post was originally published in Harvard Magazine, the alumni magazine of Harvard University.


This week’s guest blogger is historian Jill Jonnes, author of Eiffel’s Tower, Conquering Gotham, and Empires of Light. She is a scholar this fall at the Woodrow Wilson International Center for Scholars, working on trees as green infrastructure.

IN EARLY OCTOBER 1989, Peter Del Tredici of Harvard’s Arnold Arboretum was high on the slopes of Tian Mu Mountain Nature Reserve in western Zhejiang Province, counting ginkgo trees with two Chinese collaborators. For 1,500 years, visiting pilgrims had journeyed to this sacred mountain, where Buddhist monks in the late thirteenth century built the famous Kaishan Temple, the largest of many picturesque shrines scattered about the steep hillsides. In the cool fall weather, wrote Del Tredici, then 43, “we walked all the paths and trails in the reserve and measured and mapped the locations of all the ginkgos that we could locate. Ginkgo leaves were turning yellow, making it easy to locate the trees even at some distance.” All told, they found “167 spontaneously growing Ginkgos.” In the world of trees and botany, the finding of wild ginkgos was big news.


The Ginkgo biloba is one of the wonders of the natural world, a “living fossil” whose arboreal ancestors date back to the Jurassic period. “How or why the ginkgo managed to survive when all of its relatives went extinct is an unsolved botanical mystery,” wrote Del Tredici in Horticulture back in 1983—a mystery he would spend two decades helping to partially unravel. The term “living fossil” was coined by Darwin; in Del Tredici’s words, it refers to a living species “with a long evolutionary history and no close living relatives.” An average plant species may have an evolutionary run of a few million years; Ginkgo biloba has been around, with minimal changes, for about 56 million years.

Sharing the earth with dinosaurs, the ginkgos—often a dominant forest species—grew across the Northern Hemisphere along disturbed stream beds and levees. Then, about seven million years ago, the glaciers pushed out the last of the ginkgos in America; two million years ago, the ice pushed out the last of the ginkgos in Europe. Ultimately, Ginkgo biloba survived only in Asia.

Today, the dinosaurs are long since extinct but the ginkgo, thanks to gardeners and urban foresters, has recolonized the very continents where it once thrived, a ubiquitous, super-hardy city-tree species. Also known as the maidenhair tree, it has long been admired for its distinctive, elegant, fan-shaped leaves, and valued for its delicate nuts—but it is infamous, too, for the foul odor of its fruits, whose “fleshy outer covering [the sarcotesta],” noted Arboretum founder Charles Sprague Sargent in 1877, “exhales an extremely disagreeable smell of rancid butter.” (Others describe it as “vomitous.”) Having long outlived the pests and diseases that may have afflicted it, a ginkgo is young at 100, when most other street trees have long since died of old age or disease. This is an amazing botanical conquest and comeback.

In the late nineteenth century, when Western plant explorers descended upon China and Japan seeking botanical treasure, they were amazed at the size and antiquity of certain ginkgos: 100-foot-tall trees with 50-foot girths that were 1,000 or even 2,000 years old, growing around temples and monasteries. One of those plant men was collector Ernest H. “Chinese” Wilson, whose two China expeditions from 1907 to 1911 amassed 65,000 botanical specimens for Harvard’s arboretum. (Artfully laid out on 265 acres in Jamaica Plain, the arboretum was conceived in 1872 as both a Boston public park and a Harvard research institution, where the “Living Collections” would serve as a “Tree Museum” and a research resource. Harvard purchased the land for the arboretum and then donated it to the city of Boston, which constructed the park and leased it back to the University for a thousand years for $1 a year.)

In 1930, not long before Wilson’s death in a car accident in Worcester, this legendary botanical explorer declared in no uncertain terms that the ginkgo “no longer exists [in Asia] in a wild state, and there is no authentic record of its ever having been seen growing spontaneously. Travelers of repute of many nationalities have searched for it far and wide in the Orient but none has succeeded in solving the secret of its home….In Japan, Korea, southern Manchuria, and in China proper it is known as a planted tree only, and usually in association with religious buildings, palaces, tombs, and old historic or geomantic sites….What caused its disappearance [in the wild] we shall never know.” Such was Wilson’s clout, reported Del Tredici, that this romantic story of venerable monks preserving this ancient tree “had become dogma.” In 1967 a professor wrote in Science, “It is doubtful, however, whether a natural stand of ginkgo trees is to be found anywhere in the world today.”

Wandering the woods of Tian Mu more than two decades later, Del Tredici, who is today a senior research scientist at the arboretum, believed he had found the elusive and long-sought wild ginkgos. Locating them could help address some of the tree’s evolutionary mysteries. For Del Tredici, the ginkgo offered botanists “a unique window on the past—sort of like having a living dinosaur available to study.” He hoped to learn how this amazing species had managed to survive in the wild since the dinosaurs. How had some ginkgos lived more than a thousand years when few tree species live even hundreds of years? What served as the dispersal agent for its seeds? And what evolutionary purpose caused their fruits to smell so god-awful?


THE 600 SPECIES of trees that grow in temperate North America today fall into three divisions: Pinophyta, which includes all the hundreds of conifers, or cone-bearing seed plants; Magnoliophyta, including the hundreds of broadleaf trees, whose reproduction is tied to their flowers and fruits; and Ginkgophyta, which includes only one tree, Ginkgo biloba, with a reproductive system unlike that of other trees. Although the fact that ginkgo trees are either male or female is not unusual in the tree world, this gender distinction is considered evolutionarily primitive.

“The order to which the tree belongs, the Ginkgoales,” wrote Del Tredici in Arnoldia, “can be traced back to the Permian era, almost 250 million years ago,” thanks to the study of many ginkgo fossils found in the Northern Hemisphere. “The genus Ginkgo made its first appearance in the middle Jurassic period, 170 million years ago….At least four different species of Ginkgo coexisted with the dinosaurs during the Lower Cretaceous.” One of the four species, G. adiantoides, possessed leaves and female ovules that are similar to, but smaller than, those of G. biloba, the species that exists today. In short, the ginkgo has probably existed on earth longer than any other tree now living.

The first ginkgo to grow in Europe after the Ice Age was raised from seed brought from Japan around 1730 by German physician-botanist Engelbert Kaempfer. Planted at the Botanic Garden in Utrecht, Holland, that ginkgo (which thrives to this day) was viewed simply as another rare and exotic tree from the land of the shoguns. In the ensuing decades, botanists at Kew Gardens in England, the Botanic Garden in Montpellier, France, and elsewhere on the continent planted their own rare specimens. In 1784, Philadelphian William Hamilton was delighted to be the first in his young nation to have one of these “Oriental” trees on his Woodlands estate. Naturalist William Bartram planted one nearby in his garden. Today it is the oldest ginkgo in America. But until 1896, botanists, who knew ginkgos were ancient thanks to fossilized specimens, had no idea just how old Ginkgo biloba was.

That year, on September 9 in Tokyo, Japanese botanist Sakugoro Hirase peered through his microscope at the inside of a female ginkgo tree’s ovule. The previous spring, a male ginkgo’s pollen had wafted on the wind toward a female ginkgo with many dangling pairs of round ovules. On the tip of an ovule, a secreted drop of gooey fluid captured and absorbed the pollen into an interior pollen chamber. The pollen had grown all through the summer and, as Hirase was astounded to observe, it had become a multiflagellated ginkgo sperm (three times larger than human sperm) that was swimming to fertilize a waiting egg cell.

“This was really momentous,” according to Del Tredici. “The discovery of motile sperm captured people’s attention. From the scientific point of view, motile sperm was considered to be a trait associated with evolutionarily primitive, non-seed plants such as mosses and ferns. And yet here was the ginkgo tree—clearly a seed-producing plant—with its motile sperm that linked non-seed plants to the more evolutionarily advanced conifers and angiosperms with pollen tubes and non-motile sperm. People realized, ‘My God! Ginkgo is a missing link—a living fossil.’ ”

The ginkgo tree has the same archaic reproductive system as the cycads, which predate the dinosaurs. It takes about 133 days for the ginkgo pollen to develop into sperm that then flails its way to the egg and creates a growing embryo. Soon thereafter, in the fall, the fleshy seeds, containing a hard-shelled nut with a tiny embryo, drop to the ground. Not until the next spring will the seeds germinate. Ginkgo fossils showed that the tree’s reproductive system has been largely unchanged since the Cretaceous. This “direct link with ancient fossil plants,” from before the age of flowering plants, wrote Del Tredici, “gives the modern Ginkgo biloba a pedigree unmatched by any living tree.” Thus Ginkgo was catapulted to a new status of “living fossil”—but a fossil, it was believed, that had survived only through human cultivation, whether for its delicious nuts or its status as a revered “elder.”

When Del Tredici began stalking the wild ginkgo in China in 1989, he was resuming a plant-hunting tradition at the Arnold Arboretum that had ended when “the Bamboo Curtain came down in 1949.” He worked with Nanjing Botanical Garden director Yang Guang and Chinese forester Ling Hsieh. What was hard to ignore as the three men located and measured the golden-leaved ginkgos on Tian Mu Mountain was the paucity of young trees. “Clearly,” wrote Del Tredici, “the Ginkgo population was not actively reproducing from seed under the shady, mature forest conditions that currently prevail on the mountain.” Then they learned that the local populace (and the red-bellied squirrels) were already “an important factor limiting seedling establishment”: people had collected most of the foul-smelling fruits for the seed-kernel inside. In fact, many Chinese farmers had established ginkgo orchards in order to harvest these nuts as a cash crop.


But Del Tredici did observe something exciting and unfamiliar on Tian Mu: “[M]ost of the larger Ginkgos were reproducing vigorously from suckers arising near the base of their trunks….Wherever the base of the trunk of a large Ginkgo came into direct contact with a large rock or where its base was exposed by erosion, these structures developed…When these growths reach friable soil, they produce lateral roots, develop vigorous growing shoots, and continue their downward growth.” Where conditions were disturbed or tough, ginkgos responded by sending up new shoots from their roots that began growing into new trees. As a result, many old ginkgos have multiple trunks.

Very old ginkgos had long been observed to grow “air roots” from their upper branches. These were known in Japan as chichi (nipples, or breasts), harking back to a Japanese folk tale about an ancient ginkgo in Sendai that grew over the tomb of an emperor’s wet nurse, who vowed to Buddha that mothers who failed to lactate could pray there and would then be able to nurse their babies. Del Tredici was not seeing the aerial “breasts,” but basal chichi (lignotubers). “They had never before been described in the English literature,” he says. This helped explain how ginkgos could live so many millennia. Not only had they outlived pests and diseases, but they resprouted when under stress.

“Going to China was really a leap of faith, but that’s what science is all about,” said Del Tredici during a recent conversation in his arboretum office—an airy space of exposed brick walls, large windows overlooking many trees, two desks and two computers, his collection of old herbal medicine bottles, drawings and photos of ginkgos, and bookcases crammed with titles like Design in Nature: Learning from Trees. “When I came back I did experiments on reproduction and morphology in the lab and the greenhouse on this survival mechanism that ginkgo had evolved.” In the greenhouse, he was able to demonstrate that “basal chichi develop from suppressed cotyledonary [embryonic leaf] buds.”

“To my great relief, on that first trip to China,” he said, “I found and explained the ability of ginkgos to survive so long. Even though their sexual reproduction system is archaic and doesn’t work all that well, the tree has this ability to resprout. I call it ecological immortality. Ginkgo became my case study for integrating ecological knowledge with botanical knowledge with horticultural knowledge. I was able to bring all these pieces together into a unified picture.” He was well launched on helping to unravel some of Ginkgo’s evolutionary mystery. The basal chichi helped explain the persistence of the species into the modern era and the extraordinary age of individual trees. Del Tredici’s discovery established a mechanism that has allowed this “living fossil” to survive in the wild in the face of massive ecological change.

DEL TREDICI’S PASSION for ginkgos advanced in fits and starts. A native Californian from Marin County, one of his distinct childhood memories is of 10 ginkgos planted across a neighbor’s front yard. “The thing about ginkgos,” in his view, “is you can be totally illiterate about trees and you still know what a ginkgo is.” With a B.S. in zoology from the University of California at Berkeley, and an M.S. in biology from the University of Oregon, he came East to be with his girlfriend (and later, wife) while she finished Radcliffe College.

After five years at the Harvard Forest greenhouses, running what is now the Torrey Research Lab, he joined the arboretum in 1979 as an assistant plant propagator. “I was working on Sargent’s Weeping Hemlock, an old Victorian plant with a mysterious history,” he said. “I started visiting old estates and inevitably there would be these old ginkgos—100, 200 years old. So I ended up writing this article about old ginkgos.” The arboreal infatuation was heating up. Then Del Tredici discovered that just a few years earlier, in 1977, the Boston Common had lost Gardiner Greene’s ginkgo, an eighteenth-century tree so beloved it had been moved at great expense, when already 40 feet tall, from Beacon Hill to the Common in 1835.

“Believing that it is sometimes good to repeat history,” wrote Del Tredici, “I thought it would be nice to get a public-spirited Bostonian to donate a 40-foot male ginkgo [no smelly fruits!]…to fill the empty space where the tree had been.” On Arbor Day 1982, he and like-minded citizens welcomed the ginkgo to its new home. “It’s been my comeuppance,” he said ruefully of this romantic episode. “I visualized this beautiful ginkgo. Thirty years later and it’s maybe five feet taller. The site conditions are really difficult—compacted soil, on a slope, some extreme drought conditions.”

“In 1985, I had just turned 40,” said Del Tredici, “and felt I needed a new strategy, because I was getting too old to make a living with my back in the greenhouses.” He enrolled in a Ph.D. program at Boston University the next year, intending to write about black cherries. This turned out to be a somewhat more complicated subject than anticipated, and one of his committee members, Lynn Margulis, impressed by a paper he had written for her evolution class about the dispersal of ginkgo seeds, suggested, “Why don’t you do your dissertation on Ginkgo?”

“A light bulb went off,” recalled Del Tredici. “Ginkgos. Probe every little evolutionary detail and you find something unique.” At that time, many posited that dinosaurs ate ginkgo fruits and excreted the seeds, and the beasts’ demise partly explained the disappearance of wild ginkgo—but no dinosaur droppings with ginkgo seeds had ever been found.

In 1988, not long after that Ph.D. light bulb went off, Del Tredici happened to read in the Harvard Gazette that Emery professor of organic chemistry Elias J. Corey (who soon thereafter won the Nobel Prize) had just synthesized a compound—ginkgolide B—that might have medical applications. He decided on a lark to call Corey, who said, “Come on over.” “I told him I was working on Ginkgo,” Del Tredici continued, “and that I thought it probably existed in the wild, but my question was: ‘What ecological role did Ginkgo play? How had the species survived so many millions of years? What would it look like as a wild plant? Is it a pioneer species?’ I wanted to go to China, but I didn’t know what I would find. Despite what Wilson said, there were plant hunters—including Chinese botanists—who had reported it in remote valleys, little wild remnants.

“Corey said, ‘That sounds like a great idea.’ He was working with a French pharmaceutical company that was providing ginkgo leaves for him to work on. He said, ‘Write your letter describing your project and I’ll write one in support and we’ll put them in the mail at the same time.’ In a month or so, I had a check for $5,000. That was a lot of money in those days. All the French wanted was that I write a book chapter for them.”

While working on Tian Mu Mountain in 1989, Del Tredici was persuaded he was seeing wild ginkgos because the trees were mixed in with the natural forest, the sex ratios were normal (half female, half male), and the trees were single or multistemmed and looked as if they had grown from seed. Equally exciting was his discovery of basal chichi.

And then there was the mystery of the stinky fruits. On that trip to China, he learned that local nocturnal scavengers and carnivores like Chinese leopard cats and the masked palm civet ate the ginkgo’s fruit. He hypothesized that the stinky flesh mimicked the smell of rotting meat, a successful strategy to attract these creatures. The ginkgo nuts, in turn, were eventually excreted, and were far likelier to sprout and grow if dropped in sunny sites. Back in Boston, in various experiments and field trials, Del Tredici confirmed that ginkgo seed germination rates soared (71 percent versus 15 percent) minus the smelly sarcotesta (as would happen when eaten and excreted). “During the Cretaceous,” he wrote, “potential dispersal agents included mammals, birds, and carnivorous dinosaurs.”


As cumbersome as G. biloba’s sex life is, it, too, has served an evolutionary purpose. As Del Tredici and other botanists studied the tree’s reproductive cycle, he began conducting experiments at the arboretum—both in the greenhouse and outdoors—growing seeds from Guizhou and Boston ginkgos, further confirming that all “aspects of Ginkgo’s sexual reproductive cycle are strongly influenced by temperature.” During the Ice Age, he wrote in a review paper, “Such a trait would have allowed this species to reproduce successfully in regions of the Northern Hemisphere that were undergoing dramatic cooling after a long period of stable warm conditions…Ginkgo biloba’s temperature-sensitive embryo developmental-delay mechanism could well have been another climate-induced Cretaceous innovation—an evolutionarily primitive, but ecologically functional, form of seed dormancy.” Ginkgo seeds do not try to grow until the weather favors their survival. Between 1953 and 2000 in Japan, the temperature-sensitive Ginkgo adjusted to the warming climate by extending its growing period: leafing out four days earlier in spring and holding its leaves eight days longer in the fall.

Like “Chinese” Wilson, Peter Del Tredici loved botanizing in China, a place he has visited eight more times and calls “Horticultural Heaven.” He has worked with many Chinese colleagues, and said they have now taken the lead in researching ginkgo, a national symbol of their botanical heritage. The ginkgo genome is roughly three times the size of the human genome and is unlikely to be fully sequenced anytime soon, but by using smaller snippets for DNA testing in 2008, botanist Wei Gong and her colleagues confirmed Del Tredici’s 1989 find of wild ginkgo growing on the slopes of Tian Mu Mountain. The Chinese also confirmed that several other small wild ginkgo remnants displayed “a significantly higher degree of genetic diversity than populations in other parts of the country.” In some of these forests, near peoples with no history of gathering ginkgo fruits, young ginkgos are coming up on their own. Although no one knows for sure where Ginkgo originated, it’s now clear that during the Ice Age the southwest mountains of China served as refugia. Subsequent DNA studies have also shown that China is the ultimate source of all the world’s cultivated ginkgos.

Many of Ginkgo’s mysteries are probably unsolvable. Did it once have a pollinator? We will never truly know, said Del Tredici, “why Ginkgo is still around when all of its relatives have gone extinct…many of its life-history traits evolved under conditions that no longer exist, which makes its ecological niche difficult to reconstruct.” What, for instance, he continued, were “its original dispersal agents? What role did the medically active chemicals it produces play in its evolution? Were they feeding deterrents? I assume Ginkgo survived because it was somehow able to remain competitive with flowering plants, but in what ways was it different from species that went extinct? For all intents and purposes, Ginkgo has stopped evolving.”

For decades now, Del Tredici has been gathering ginkgo seeds and cuttings from historic and unusual trees, and he recently planted a large hillside in the arboretum with some of his more prized specimens, part of a larger grove of young trees that are all deciduous gymnosperms: larches, golden larches, dawn redwoods, and bald cypresses. He expects that when Harvard has to renegotiate the lease for the arboretum in 861 years, the ginkgos will be looking pretty magnificent.

Until then, when next you pass a ginkgo on a busy street, remember you are looking at a mysterious species that shared the earth with dinosaurs. “As remarkable as Ginkgo’s evolutionary survival is,” said Del Tredici, “the fact that it grows vigorously in the modern urban environment is no less dramatic. Having survived the climatic vicissitudes of the past 120 million years, ginkgo is clearly well prepared—or, more precisely, preadapted—to handle the climatic uncertainties that seem to be looming in the not-too-distant future. Indeed, should the human race succeed in wiping itself out over the course of the next few centuries, we can take some comfort in the knowledge that the ginkgo tree will survive.”  

The Language of Genetics

Denis Alexander is this week’s guest blogger. He has spent 40 years in the biological research community in various parts of the world, latterly as Head of the Laboratory of Lymphocyte Signalling and Development at the Babraham Institute, Cambridge, which he left in 2008. Since then he has been heading up the new Faraday Institute for Science and Religion at St. Edmund’s College, Cambridge, where he is a fellow.

I have always been fascinated with the public understanding of science, including the many and varied ways in which scientific ideas can migrate out of the lab to populate the worlds of politics, sociology, popular culture and religion. Since finally closing down my research group in immunology a few years ago, I have had the privilege of indulging some of these interests more fully in a way that the pressures of an active research life didn’t really allow.

Recently we brought a group of historians and philosophers to Cambridge to sit round a table for a few days and discuss the many and varied ways in which biology has been used and abused for non-biological purposes from 1600 to the present day. So many are the examples that our challenge was not to find sufficient topics or authors, but to restrict ourselves to a series that would eventually lead to a book of reasonable length. The outcome was Biology and Ideology – From Descartes to Dawkins, which came out last year [Denis Alexander and Ronald Numbers (eds), University of Chicago Press, 2010]. In turn this interest is leading to a grants programme: competitive funding applications will be invited during the coming year for research on contemporary ways in which biological ideas are being used, for good or for ill, for purposes well beyond their original scientific contexts.

The area of genetics seems particularly prone to being reported, in the media and in the public domain more generally, in dramatised ways that often distort the actual science involved. I was therefore particularly pleased to be approached by a publisher recently to write an introductory book on genetics that would not only introduce the science for a general readership, but also address some of the wider ethical and other questions that genetics raises concerning human value and identity. The result is The Language of Genetics – an Introduction [Darton, Longman and Todd, 14 June 2011], published just a few days ago [N.B. although Amazon has some good offers, the Faraday Shop is selling at £12/copy plus p&p starting soon after 27th June].

I am a great believer in making a clear distinction between science and the wider issues that arise from it, finding that when the language and concepts of different disciplines are commingled, confusion inevitably results. The Language of Genetics therefore has 11 chapters of straight explanatory science, while the wider questions arising from genetics are confined to the final chapter, chapter 12.

One of the topics I tackle there is the pervasive idea of genetic determinism – that there are such things as genes “for” musicality, intelligence or being a political liberal. Although biologists, with rare and unfortunate exceptions, are generally rather careful to describe in their scientific writings what genes actually do, by the time their discoveries get reported in the media, the headline too often ends up implying that some complex human behavioural trait is largely determined by a single gene.

The genome-wide association studies (GWAS) that have proliferated over the past few years are instructive in this respect. One study examined variation in height between humans, a trait known to be around 70–80% heritable. Based on 180,000 individuals, the study came up with 180 different variant gene regions that correlate with variation in height, yet taken together they explain only around 10% of the heritability. There is a huge amount of “missing heritability”. Where is it? Being a bit taller or shorter is complex, involving many aspects of our physical being.
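The arithmetic of “missing heritability” is easy to sketch. The per-variant figure below is an illustrative stand-in of my own, not the study’s actual effect sizes, but it shows how 180 regions with tiny individual effects can jointly account for only about a tenth of an 80%-heritable trait:

```python
# Illustrative sketch (hypothetical numbers, not the study's data):
# many variants, each explaining a sliver of variance, still fall far
# short of the trait's total heritability.
heritability = 0.80                  # assumed total heritability of height
n_variants = 180                     # variant regions found by the GWAS
variance_per_variant = 0.00045       # hypothetical average effect per region

explained = n_variants * variance_per_variant
missing = heritability - explained

print(f"Variance explained by the {n_variants} variants: {explained:.1%}")
print(f"'Missing' heritability: {missing:.1%}")
```

With these assumed numbers the variants explain about 8% of the variance, leaving roughly 72 percentage points of heritability unaccounted for, which is the puzzle the paragraph above describes.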

Imagine now the genetics of some complex human behaviour with a supposed heritable element, played out in brains with their 10¹¹ neurons and 10¹⁴ synapses (the precise number, rather unsurprisingly, depends on the precise volume of your brain) – such a scenario does not readily lend itself to interpretations that depend on genetic determinism.

None of this is to say that genetic variation is irrelevant to who we are as individuals – far from it. But The Language of Genetics highlights the way in which the fertilised egg, with its newly acquired unique genome, is from its very first day onwards in intimate interaction with its environment in all its myriad aspects. Rather than reifying the ‘genome’ and the ‘environment’ as if they were separate entities, it is biologically more accurate to see both aspects as thoroughly intertwined. The fascinating fields of evo-devo (chapter 3) and of epigenetics (chapter 10) do much to highlight that insight.

The science of genetics is a fantastic gift to humankind if used wisely. But the greatest gifts can be the most abused; the best protection remains continued awareness and vigilance.

Science owes much to both Christianity and the Middle Ages

This week’s guest blogger is James Hannam. He has a PhD in the History and Philosophy of Science from the University of Cambridge and is the author of The Genesis of Science: How the Christian Middle Ages Launched the Scientific Revolution (published in the UK as God’s Philosophers: How the Medieval World Laid the Foundations of Modern Science).

The award of the Templeton Prize to the retired president of the Royal Society, Martin Rees, has reawakened the controversy over science and religion. I have had the pleasure of meeting Lord Rees a couple of times, including when my book God’s Philosophers (newly released in the US as The Genesis of Science) was shortlisted for the Royal Society science book prize. I doubt he has welcomed the fuss over the Templeton Foundation, but neither will he be particularly perturbed by it.

Few topics are as open to misunderstanding as the relationship between faith and reason. The ongoing clash of creationism with evolution obscures the fact that Christianity has actually had a far more positive role to play in the history of science than commonly believed. Indeed, many of the alleged examples of religion holding back scientific progress turn out to be bogus. For instance, the Church has never taught that the Earth is flat and, in the Middle Ages, no one thought so anyway. Popes haven’t tried to ban zero, human dissection or lightning rods, let alone excommunicate Halley’s Comet. No one, I am pleased to say, was ever burnt at the stake for scientific ideas. Yet all these stories are still regularly trotted out as examples of clerical intransigence in the face of scientific progress.

Admittedly, Galileo was put on trial for claiming it is a fact that the Earth goes around the sun, rather than just a hypothesis as the Catholic Church demanded. Still, historians have found that even his trial was as much a case of papal egotism as scientific conservatism. It hardly deserves to overshadow all the support that the Church has given to scientific investigation over the centuries.

That support took several forms. One was simply financial. Until the French Revolution, the Catholic Church was the leading sponsor of scientific research. Starting in the Middle Ages, it paid for priests, monks and friars to study at the universities. The church even insisted that science and mathematics should be a compulsory part of the syllabus. And after some debate, it accepted that Greek and Arabic natural philosophy were essential tools for defending the faith. By the seventeenth century, the Jesuit order had become the leading scientific organisation in Europe, publishing thousands of papers and spreading new discoveries around the world. The cathedrals themselves were designed to double up as astronomical observatories to allow ever more accurate determination of the calendar. And of course, modern genetics was founded by a future abbot growing peas in the monastic garden.

But religious support for science took deeper forms as well. It was only during the nineteenth century that science began to have any practical applications. Technology had ploughed its own furrow up until the 1830s, when the German chemical industry started to employ its first PhDs. Before then, the only reasons to study science were curiosity and religious piety. Christians believed that God created the universe and ordained the laws of nature. To study the natural world was to admire the work of God. This could be a religious duty and inspire science when there were few other reasons to bother with it. It was faith that led Copernicus to reject the ugly Ptolemaic universe; that drove Johannes Kepler to discover the constitution of the solar system; and that convinced James Clerk Maxwell he could reduce electromagnetism to a set of equations so elegant they take the breath away.

Given that the Church has not been an enemy to science, it is less surprising to find that the era most dominated by Christian faith, the Middle Ages, was a time of innovation and progress. Inventions like the mechanical clock, glasses, printing and accountancy all burst onto the scene in the late medieval period. In the field of physics, scholars have now found medieval theories about accelerated motion, the rotation of the earth and inertia embedded in the works of Copernicus and Galileo. Even the so-called “dark ages” from AD 500 to 1000 were actually a time of advance after the trough that followed the fall of Rome. Agricultural productivity soared with the use of heavy ploughs, horse collars, crop rotation and watermills, leading to a rapid increase in population.

It was only during the “enlightenment” that the idea took root that Christianity had been a serious impediment to science. Voltaire and his fellow philosophes opposed the Catholic Church because of its close association with France’s absolute monarchy. Accusing clerics of holding back scientific development was a safe way to make a political point. The cudgels were later taken up by TH Huxley, Darwin’s bulldog, in his struggle to free English science from any sort of clerical influence. Creationism did the rest of the job of persuading the public that Christianity and science are doomed to perpetual antagonism.

Nonetheless, today, science and religion are the two most powerful intellectual forces on the planet. Both are capable of doing enormous good, but their chances of doing so are much greater if they can work together. The award of the Templeton Prize to Lord Rees is a small step in the right direction.

The Genesis of Science: How the Christian Middle Ages Launched the Scientific Revolution is available now.

Shortlisted for the Royal Society Science Book Prize

“Well-researched and hugely enjoyable.” New Scientist

“A spirited jaunt through centuries of scientific development… captures the wonder of the medieval world: its inspirational curiosity and its engaging strangeness.” Sunday Times

“This book contains much valuable material summarised with commendable no-nonsense clarity… James Hannam has done a fine job of knocking down an old caricature.” Sunday Telegraph

Science: A Four Thousand Year History


This week’s guest blogger, Patricia Fara, discusses some problems she faced when deciding how to begin her most recent book, Science: A Four Thousand Year History. She lectures on the history of science at Cambridge University, where she is Senior Tutor of Clare College. Her other successful books include Newton: The Making of Genius (2002), Sex, Botany and Empire (2003) and Pandora’s Breeches: Women, Science and Power in the Enlightenment (2004).

Lewis Carroll knew how difficult it can be to tell a story. ‘Where shall I begin, please your Majesty?’, asked the White Rabbit. Alice listened for the answer. ‘Begin at the beginning,’ the King said, gravely, ‘and go on till you come to the end: then stop.’

To write Science: A Four Thousand Year History, I had to decide when science began. This is no trivial question, but gets right to the heart of what science might be. Looking back at the past, it is possible to pick out ideas and discoveries that later became incorporated within today’s global scientific enterprise. But at the time, they contributed to other goals – finding an auspicious time for religious festivals, winning wars, vindicating biblical prophecies, making a living.

Separating science from superstition is not always easy. When early astronomical observers looked up into the heavens, they saw seven planets circling around the Earth. The Sun and the Moon were the most obvious, but they also identified five others – Saturn, Jupiter, Mars, Venus and Mercury (the next one to be discovered, Uranus, was only spotted at the end of the eighteenth century). Finding planets, and working out how they move across the sky, demands skills that are important for modern science. On the other hand, the first sky-watchers were not primarily interested in how the universe operates, but instead were trying to relate the patterns of the stars to major events on earth, such as famines, floods or the death of a king.


So it seems wrong to call them scientists. But does it make sense to disparage their conclusions? Modern astronomy rests on a foundation of data collected by expert star-gazers who were also astrologers. Their observations were generally sound, even if their theories have since been rejected. Many scientists find it hard to accept that their own expertise is rooted in beliefs which they dismiss as magic. For those who pledge their faith in progress, magical mumbo-jumbo has been eliminated by scientific reason: magic and science are clearly polar opposites, and any notion that they might share common origins is sacrilegious. But this comforting view is not always easy to reconcile with the historical facts.

Consider Isaac Newton. He believed so firmly in the Greek idea of a harmonic universe that he divided the rainbow into seven colours to correspond with the musical scale. Before then, although opinions varied, artists mostly showed rainbows with four colours. It is, of course, impossible to make any objective decision about the correct number, because the spectrum of visible light varies continuously: there is no sharp cut-off between bands of different colours, so how you think about a rainbow affects how you see it. Be honest – can you tell the difference between blue, indigo and violet?

Since Newton has become an iconic scientific genius, it would seem strange to say that he did not practise science. On the other hand, modern scientists denigrate many of his activities as ridiculous, or even antithetical to science. In addition to his preoccupation with numbers and biblical interpretation, Newton carried out alchemical experiments, poring over ancient texts and carefully recording his own thoughts and discoveries. This was no mere hobby: Newton regarded alchemy as a vital route to knowledge and self-improvement, and he incorporated his findings within his astronomical theories.

The example of Newton illustrates how hard it is to pin down exactly when science began. One possibility is to look for the first scientists. But the word scientist was not even invented until 1833, and even then it was slow to catch on. Both Michael Faraday and Charles Darwin refused to let themselves be labelled with the new term, but a history that excludes them would seem strange. The most popular starting date is 1543, when Nicolaus Copernicus suggested that the Sun and not the Earth lies at the centre of our planetary system. However, there are several objections to this choice, not least that it excludes the Islamic sages whose ideas were so significant in Renaissance Europe, and also the Greeks, whose theories remained influential well into the eighteenth century. So some historians choose to begin with the geometer Thales of Miletus, who lived on the Turkish coast around 2,500 years ago and successfully predicted an eclipse. But picking him means leaving out all of his important predecessors, such as the Egyptians and the Babylonians.

For Science: A Four Thousand Year History, I decided to start with the Babylonians, whose way of thinking about the universe still affects modern science. Instead of counting in tens and hundreds, they used a base of sixty, which is why there are 360 degrees in a circle. Their complex mathematical techniques and detailed star observations enabled them to predict celestial events – and because their knowledge of the skies was inherited by later observers, it now forms the basis of astronomy as well as structuring everyday life. Thanks to the Babylonians, weeks have seven days, hours have sixty minutes, and minutes have sixty seconds. The next time you look at a digital clock, remember that it has more in common with a clay tablet than you might think.
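The base-sixty reckoning described above survives in how we still split both angles and hours. A minimal sketch (the helper function and the example values are mine, purely for illustration) shows that one conversion serves for degrees/minutes/seconds and hours/minutes/seconds alike:

```python
# Babylonian-style base-60 splitting: the same arithmetic turns decimal
# degrees into degrees, arcminutes and arcseconds, and decimal hours
# into hours, minutes and seconds.
def to_sexagesimal(value):
    """Split a decimal quantity into whole units, sixtieths,
    and sixtieths-of-sixtieths."""
    units = int(value)
    remainder = (value - units) * 60
    minutes = int(remainder)
    seconds = round((remainder - minutes) * 60)
    return units, minutes, seconds

print(to_sexagesimal(23.4375))  # 23 degrees, 26', 15"
print(to_sexagesimal(2.5))      # 2 hours, 30 minutes, 0 seconds
```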

The Meeting of Minds

This week’s guest blogger is Manjit Kumar. Manjit’s book, Quantum: Einstein, Bohr and the Great Debate, is about the nature of reality, and was shortlisted for the 2009 BBC Samuel Johnson Prize for Non-fiction. He writes and reviews regularly for a variety of publications, including The Guardian, The Independent, The Times and the New Scientist. He used to edit a journal called Prometheus that covers the arts and sciences, and he was also the consulting science editor at UK Wired.

I first saw the photograph of those gathered at the fifth Solvay conference, which was held in Brussels from 24 to 29 October 1927, in a biography of Albert Einstein. This was in 1979, when I was just 16. I wondered what brought these people together, and soon learned that the picture included most of the key players involved in the discovery of the quantum, and the subsequent development of quantum physics. With 17 of the 29 invited eventually earning a Nobel Prize, the conference was one of the most spectacular meetings of minds ever held.


When I was 18, I was given a print of the above photograph as a present. Many years later I began to think about it as a possible starting point for a book about the quantum. In the photograph there are nine seated in the front row. Eight men, and one woman; six have Nobel Prizes in either physics or chemistry. The woman has two, one for physics, awarded in 1903, and another for chemistry, awarded in 1911. It could only be Marie Curie. In the centre, the place of honour, sits Albert Einstein. Looking straight ahead, gripping the chair with his right hand, he seems ill at ease. Is it the winged collar and tie that are causing him discomfort, or is it what he has heard during the preceding week? At the end of the second row, on the right, is Niels Bohr, looking relaxed with a half-whimsical smile. It had been a good conference for him. Nevertheless, Bohr would be returning to Denmark disappointed that he had failed to convince Einstein to adopt his Copenhagen interpretation of what quantum mechanics revealed about the nature of reality.

Instead of yielding, Einstein had spent the week attempting to show that quantum mechanics was inconsistent, that Bohr’s ‘Copenhagen interpretation’ was flawed. Einstein said years later that:

This theory reminds me a little of the system of delusions of an exceedingly intelligent paranoic, concocted of incoherent elements of thoughts.

It was Max Planck, sitting on Marie Curie’s right, holding his hat and cigar, who discovered the quantum. In 1900 he was forced to accept that the energy of light, and all other forms of electromagnetic radiation, could only be emitted or absorbed by matter in bits, bundled up in various sizes. ‘Quantum’ was the name Planck gave to an individual packet of energy, with ‘quanta’ being the plural. The quantum of energy was a radical break with the long-established idea that energy was emitted or absorbed continuously, like water flowing from a tap. In the everyday world of the macroscopic, where the physics of Newton ruled supreme, water could drip from a tap, but energy was not exchanged in droplets of varying size. However, the atomic and subatomic level of reality was the domain of the quantum.
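The size of Planck’s “bits” follows from the relation E = hν: the energy of a quantum equals Planck’s constant times the radiation’s frequency. As a quick illustration, using modern values for the constants (not figures from the text), a single quantum of green light carries an almost unimaginably small energy:

```python
# Minimal illustration of Planck's relation E = h * f for one quantum
# of visible (green) light. Constants are modern rounded values, used
# here purely for illustration.
h = 6.626e-34        # Planck's constant, joule-seconds
c = 2.998e8          # speed of light, metres per second
wavelength = 5.3e-7  # green light, about 530 nanometres

frequency = c / wavelength   # hertz
energy = h * frequency       # energy of one quantum, joules
print(f"One quantum of green light carries about {energy:.2e} J")
```

The result is a few times 10⁻¹⁹ joules, which is why energy looks perfectly continuous in the everyday world of taps and water: the individual droplets are far too small to notice.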

Bohr discovered that the energy of an electron inside an atom was ‘quantised’; it could possess only certain amounts of energy and not others. The same was true of other physical properties, as the microscopic realm was found to be lumpy and discontinuous. It was not some shrunken version of the large-scale world that we humans inhabit, where physical properties vary smoothly and continuously, and where going from A to C means passing through B. Quantum physics revealed that an electron in an atom can be in one place and then, as if by magic, reappear in another without ever being anywhere in between, by emitting or absorbing a quantum of energy.

By the early 1920s, it had long been apparent that the advance of quantum physics on an ad hoc, piecemeal basis had left it without solid foundations or a logical structure. Out of this state of confusion and crisis emerged a bold new theory, known as quantum mechanics, with Werner Heisenberg and Erwin Schrödinger, third and sixth from the right in the back row, leading the way. In 1927 Heisenberg made a discovery. It was so at odds with common sense that he initially struggled to grasp its significance. The uncertainty principle said that if you want to know the exact velocity of a particle, then you cannot know its exact location, and vice versa.
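The trade-off can be put into numbers. A back-of-the-envelope sketch, using modern constant values and a velocity uncertainty I have chosen purely for illustration, computes the minimum position blur Δx ≥ ħ/(2mΔv) for an electron:

```python
# Numeric sketch of Heisenberg's uncertainty principle,
# delta_x * delta_p >= hbar / 2, for an electron whose velocity is
# pinned down to within 1 m/s. Constants are modern rounded values.
hbar = 1.055e-34        # reduced Planck constant, joule-seconds
m_electron = 9.109e-31  # electron mass, kilograms

delta_v = 1.0                        # assumed velocity uncertainty, m/s
delta_p = m_electron * delta_v       # corresponding momentum uncertainty
delta_x_min = hbar / (2 * delta_p)   # smallest possible position uncertainty

print(f"Position uncertain by at least {delta_x_min:.2e} m")
```

Even with the velocity known that precisely, the electron’s position is smeared over tens of micrometres, vastly larger than an atom, which is the kind of result Heisenberg found so at odds with common sense.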

Bohr believed he knew how to interpret the equations of quantum mechanics; what the theory was saying about the nature of reality. Questions about cause and effect, or whether the moon exists when no one is looking at it, had been the preserve of philosophers since the time of Plato and Aristotle. However, after the emergence of quantum mechanics they were being discussed by the twentieth century’s greatest physicists.

The debate that began between Einstein and Bohr at the Solvay conference in 1927 raised issues that continue to preoccupy many physicists and philosophers to this day: what is the nature of reality, and what kind of description of reality should be regarded as meaningful?

‘No more profound intellectual debate has ever been conducted,’ claimed the scientist and novelist C. P. Snow. ‘It is a pity that the debate, because of its nature, can’t be common currency.’

When Einstein and Bohr first met in Berlin in 1920, each found an intellectual sparring partner who would, without bitterness or rancour, push and prod the other into refining and sharpening his thinking about the quantum. ‘It was a heroic time,’ recalled Robert Oppenheimer, who was a student in the 1920s. ‘It was a period of patient work in the laboratory, of crucial experiments and daring action, of many false starts and many untenable conjectures. It was a time of earnest correspondence and hurried conferences, of debate, criticism and brilliant mathematical improvisation. For those who participated it was a time of creation.’

Planck, Einstein, Bohr, Heisenberg, Schrödinger, Born, Pauli, de Broglie and Dirac, the leading lights of the quantum revolution, are all there in that picture.

What is milk?

Peter Atkins is Professor of Geography at Durham University. His main research interest is in food and drink, with particular reference to their materiality; the stuff in foodstuffs. His work ranges from arsenic poisoning in the groundwater of Bangladesh to a history of milk. His latest book, Liquid Materialities: a History of Milk, Science and the Law, was published in 2010.

What is milk? It may sound like a trivial question, or an inappropriate one for a serious science blog. Why should we take any interest at all in a substance that is a matter of everyday consumption? Put on the spot, most people would say that milk is a rather dull commodity and something they take for granted. The Spanish have a saying: blanco y en botella, leche. Literally: if it’s white and in a bottle, it’s milk. However, slipped colloquially into conversation it means ‘it’s obvious’. My purpose in this post is to show that a discussion of milk is far from obvious, and indeed is something that cannot be left to dairy science alone. We need to look beyond that to understand why milk is as it is today, and ultimately, what’s at stake is the quality of all of the food we consume.

The laboratory-based analysis of milk has its origins in the late eighteenth century. In the 1790s Parmentier and Deyeux were already estimating its constituents with simple experiments. They were followed in the early nineteenth century by other French, Swedish and German scientists. But milk is a complex emulsion of fat globules in water, and a fine dispersion or suspension of casein micelles, so at the time, given the limited techniques of organic chemistry and physics, it was very difficult to know what was in it. It was eventually realised that milk is a highly variable substance: its constituents vary between mammal species and even during a single milking, and its principal constituents (fat, protein and sugar) also differ from one breed of dairy cow to another.

So what? you might ask. Well, food in the nineteenth century was frequently adulterated, and milk was the most notorious example because its dense whiteness enabled the addition of small amounts of water without anyone noticing. The average pint in London in the 1870s, for instance, contained about 25% added water. Consumers were outraged by the unreliable quality. One simple method of analysis used by the milk trade was the lactometer, which measured the specific gravity of milk, but this had to be abandoned when it was realised that, by adding water and removing some of the butterfat, it was possible to simulate the physical properties of genuine milk.
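The arithmetic behind this fraud is worth seeing. The figures below are rough illustrative values (not from Atkins's book): whole milk has a specific gravity of about 1.032, water 1.000, and butterfat, being lighter than water, about 0.93. A minimal sketch, treating milk as a volume-weighted mixture:

```python
# Illustrative specific gravities only; real values vary by breed and season.
SG_MILK = 1.032    # typical genuine whole milk
SG_WATER = 1.000
SG_FAT = 0.93      # butterfat is lighter than water
FAT_FRACTION = 0.04  # assume 4% fat by volume

# The non-fat ("skim") portion must satisfy, by volume-weighted mixing:
#   FAT_FRACTION * SG_FAT + (1 - FAT_FRACTION) * sg_skim = SG_MILK
sg_skim = (SG_MILK - FAT_FRACTION * SG_FAT) / (1 - FAT_FRACTION)

# Removing the fat RAISES the reading above genuine milk...
assert sg_skim > SG_MILK

# ...so water can then be added until the lactometer reads "genuine" again:
#   f * sg_skim + (1 - f) * SG_WATER = SG_MILK
f = (SG_MILK - SG_WATER) / (sg_skim - SG_WATER)
water_added = 1 - f
print(f"skim SG = {sg_skim:.4f}, water concealed = {water_added:.0%}")
```

On these assumed numbers, skimming the fat lets roughly a tenth of the pint be replaced with water while the specific gravity still reads as genuine milk, which is exactly why the lactometer test was abandoned.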

Gravimetric and volumetric chemistry eventually made progress, and adulterators were brought to book under a series of Sale of Food and Drugs Acts that started in 1860. The irony was that many innocent farmers were prosecuted before anyone thought to establish a legal definition of the real thing. This came about in 1901 with the Sale of Milk Regulations. In effect, it claimed that science could determine nature's intentions. Natural cow's milk was presumed to contain at least 3.0% butterfat, for instance, and milk more watery than this was presumed to have been fraudulently manipulated.

Problem solved? Well, no, because what happened if cattle were fed on very watery grass or silage? The milk they produced would be as it came from the cow, nothing added and nothing taken away, yet it would still be of low quality, fat-wise. Legal challenges in the early twentieth century established that almost any milk coming from a healthy cow was acceptable, as long as it was not modified later.

From 1901 to 1976 this whole-milk idea remained the British consensus. Elsewhere, on the continent, a completely different approach prevailed. Countries such as the Netherlands had butter industries, so it was in their economic interest to regard some extraction of fat as normal. This led to fixed, legal limits of quality, and later to the standardization of the constituents of milk. Britain's entry into the European Community in the 1970s meant accepting some legal definitions of foods. From 1981, it was possible for the first time to buy 'semi-skimmed' milk. Then, in 1993, milk with a standardized composition had to be allowed with the beginning of the single market in milk. However, it has only been since the Drinking Milk Regulations of 2008 that milk could at last be labelled with various fat levels.

When you next go to the supermarket, have a look at the dairy shelves. You'll find an astonishing range of milk. In addition to flavoured or filtered or fortified milk, you will find milk with 0.1%, 1%, 2% and 4% fat, and the consumer in England and Wales (but not Scotland) can also choose between raw milk and heat-treated milks that have been pasteurized, sterilized or ultra-heat treated. There is also homogenized and organic cow's milk, not to mention goat's milk and soya milk.

I’m not saying that these new Euro definitions of quality are better or worse, but they are certainly different from the long history of milk in Britain. It is almost as if milk has had its own life story and we can now write its biography. It seems that most milk drinkers are oblivious to this story and are now content that it is technology that defines what is genuine and natural. We no longer feel any obligation for our diet to reflect the foibles and the cycles of nature. We are now sure that we can improve upon nature by producing a substance which has a substantial human imprint. Finally, milk still resists us by turning sour and by persisting in being an ideal medium for the spread of disease, but both of these problems are susceptible to industrial processing.
