A panel of advisers to the US National Children’s Study met at a crucial juncture today, as leaders of the study prepare to choose a sampling strategy for the multibillion-dollar effort to document influences on the health of 100,000 children from before birth to age 21.
Both the panel’s members and the observers who spoke at the day-long meeting in Bethesda, Maryland, argued nearly unanimously for what epidemiologists call a “national probability sample” — a scientifically chosen, geographically distributed group of children who reflect the diversity of the US population and from whom findings can therefore be generalized to all US children.
The study’s leaders at the National Institutes of Health (NIH), which oversees and funds the study, told Congress in a budget document in February that they would be dropping this approach in favour of a “well-described cohort followed longitudinally” that would not be representative of the entire US population.
Study director Steven Hirschfeld told participants at the outset of today’s meeting that he would be convening federal statisticians in a webcast meeting on 29 May, as well as meeting with the study’s principal investigators on Thursday, before developing a final sampling design in consultation with Francis Collins, the NIH director, and Alan Guttmacher, who directs the NIH child-health institute, where the study is housed.
Today’s meeting, Guttmacher added, “gives us extremely important thought and input about what the sampling strategy should look like.”
Included in that input was an 18-page paper circulated at the meeting, signed off by 31 principal investigators representing 28 of the study’s 40 pilot sites, which have been coming online since 2009. Two other principal investigators who attended the meeting said that they agreed with the document’s scientific approach, but had not signed it because of concerns unrelated to its science.
Entitled “A Cost-Effective and Feasible Design for the National Children’s Study: Recommendations from the Field”, the document argues strongly for a probability sampling approach and for retention of the 105 counties in 43 US states that have already been designated as study locations. It says that both can be achieved by recruiting from health-care providers’ offices — and all within budget. (The study, first authorized by Congress in 2000, is budgeted to cost about US$3 billion, but there have been concerns about cost overruns. To date, it has spent more than $750 million, the document notes.)
It was not only the principal investigators at today’s meeting who shied away from giving up on a national probability sample that would be generalizable to the US population.
“I’m greatly concerned about a convenience sample,” said Edward Sondik, an ex officio member of the advisory committee who is director of the National Center for Health Statistics, part of the US Centers for Disease Control and Prevention in Atlanta, Georgia. “About whether it can be generalized, and how — whether the science community would accept a convenience sample in a study of this magnitude.”
Joseph Konstan, a professor of computer science and engineering at the University of Minnesota in Minneapolis, succinctly described the conundrum before the study’s managers and advisers: they are looking, he said, for a study that is “cheap, representative, large and deep. I’m starting with the assumption that we can’t get more money, and so we are trading off among the other three.”
Asked whether cost concerns will force the study’s leaders to abandon a sample including all 105 designated US counties, regardless of what sample design they adopt, Hirschfeld said: “We hope that we’re guided by science and data and that the final determinant isn’t finances… With that principle, our response remains that we don’t know yet what locations will be in the sampling frame or not… What we have to convey to all the locations is: ‘Stay tuned.’ ”
Hirschfeld said at the meeting that a final design may be proposed as soon as the advisory committee’s next meeting, which is in July.
Correction: An earlier version of this blog incorrectly stated that a paper circulated at the meeting was signed off by principal investigators at 31 rather than 28 pilot sites.