At a Maryland meeting of agricultural scientists, practitioners, educators, and librarians last fall, Kay Dickersin of Johns Hopkins University related a cautionary tale from the field of medicine. In the early 1990s, a group of medical researchers set out to find every single published clinical trial on thrombolytic or “clot busting” drugs. Thirty-three such studies involving nearly 40,000 patients were performed between 1959 and 1988. And when the team painstakingly synthesized and analyzed the accumulated data, the results showed without question that thrombolytic therapy reduced the incidence of death after heart attack.
Yet when the analysis appeared in 1992 in the New England Journal of Medicine, it wasn’t the medication’s effectiveness that got people’s attention. What astonished everyone was how early the drug’s efficacy had been statistically proven. The p value for the treatment effect was less than 0.01 by 1973, after only eight trials and 2,432 patients. All the subsequent 25 trials did was narrow the 95% confidence interval around an already statistically significant finding.
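The cumulative pattern Dickersin described — a pooled estimate tightening with each added trial until the question is settled — can be sketched with a fixed-effect (inverse-variance) meta-analysis. The numbers below are illustrative, not the actual thrombolytic trial data:

```python
import math

# A minimal sketch of cumulative fixed-effect (inverse-variance) meta-analysis.
# The (log odds ratio, standard error) pairs are ILLUSTRATIVE, not the real
# thrombolytic trial results -- chosen only to show how the pooled estimate
# tightens as trials accumulate.
trials = [
    (-0.40, 0.45), (-0.25, 0.40), (-0.35, 0.30), (-0.30, 0.28),
    (-0.28, 0.25), (-0.32, 0.22), (-0.27, 0.20), (-0.30, 0.18),
]

def cumulative_meta(trials):
    """Re-pool the evidence after each new trial; return (estimate, lo, hi)."""
    results = []
    sum_w = sum_wy = 0.0
    for y, se in trials:
        w = 1.0 / se ** 2            # inverse-variance weight
        sum_w += w
        sum_wy += w * y
        pooled = sum_wy / sum_w      # pooled log odds ratio so far
        pooled_se = math.sqrt(1.0 / sum_w)
        results.append((pooled,
                        pooled - 1.96 * pooled_se,
                        pooled + 1.96 * pooled_se))
    return results

for i, (est, lo, hi) in enumerate(cumulative_meta(trials), start=1):
    status = "significant" if hi < 0 else "not yet significant"
    print(f"after trial {i}: OR = {math.exp(est):.2f} "
          f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f}), {status}")
```

A benefit of treatment corresponds to an odds ratio below 1 (log odds ratio below 0). Once the upper confidence limit drops below that line, later trials mainly narrow an interval around a finding that is already statistically significant — exactly the situation the 1992 analysis exposed.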
Doctors, in other words, probably could have stopped studying and started prescribing clot-busting drugs to all heart attack victims around 1975. “But did we? No,” said Dickersin. “We kept giving them consent forms saying, ‘We don’t know if this works. So would you please agree to be randomized to a placebo or to thrombolytic drug therapy so we can figure it out?’ ” The reason was simple, she continued: For 30 years, no one had paused to review the accumulated evidence.
The consequences taught the medical community a crucial lesson. “We simply must keep track of what we know,” Dickersin said. “We simply must, in a systematic way.”
Dickersin was invited to speak at the “Smarter Agriculture” workshop in Potomac, MD, last October because a small but growing group of agricultural researchers and practitioners feels the same way. As in medicine, agronomists and crop scientists struggle to keep up with the massive research literature. Data are stored in pieces everywhere, and how they’re collected often isn’t standardized or fully reported. As a result, experiments are sometimes needlessly repeated. Recommendations and on-farm practices are based on partial information. And when practices don’t work, it can be hard to tell why.
In the years after these same shortcomings were exposed in medicine, the global medical community established a deliberate process for bringing the best scientific evidence to medical practice. Called “evidence-based medicine,” the approach strives to round up and evaluate all high quality data on the efficacy of medical interventions and apply the synthesized findings to patient care. The tools it employs are systematic review and meta-analysis. And medicine is not alone. Ecology, sociology, education, and criminal justice are increasingly using the same methods to monitor the status of their fields and discover which practices are working and which aren’t.
Now agricultural scientists like Sylvie Brouder, a Purdue University agronomy professor and extension educator, are calling on their discipline to adopt a similar model of “evidence-based agriculture.” “The moment is now or perhaps it was yesterday,” says Brouder, an ASA Fellow and CSSA and SSSA member. “This represents a need, but also a tremendous opportunity.”
A Necessary, but Difficult Transformation
Of course, extension personnel have always relied on evidence to make recommendations, points out Brouder, who with Jeff Volenec and other colleagues at Purdue organized the Smarter Agriculture workshop as a first, tiny step toward the new model. What’s increasingly clear, however, is that “we can’t do it anymore in the same way because there are just too many data and knowledge fragments that need to be brought together,” she says. “We’re not going to be able to put new research to its best use if we don’t have this infrastructure.”
There also may be little choice. Dwindling funding for agricultural research makes it imperative to use every dataset to its fullest, especially when good data are “incredibly expensive to acquire,” Brouder says. The public is demanding to know how farm practices are developed in the first place, and whether they’re actually working to cut erosion, boost water quality, and meet other environmental goals. Perhaps most importantly, the Obama Administration mandated in February 2013 that the results of all federally funded scientific research be made freely accessible to the public within one year of publication.
“That’s a game changer,” says Volenec, an ASA and CSSA Fellow. “It had been talked about, but now we have to make all of our data publicly available and find a way to do that.”
He and Brouder both caution, though, that the transformation won’t happen overnight. “It’s going to take a lot of work to make this change,” Brouder says. Fortunately there are excellent models to follow, and among the best is the Cochrane Collaboration.
The nonprofit organization is named for Archie Cochrane, a British epidemiologist who began ruminating on the need for scientific evidence in medicine after serving as a medical officer during World War II. His landmark 1972 book on the topic, Effectiveness and Efficiency: Random Reflections on Health Services, is still influential today. But it was his graduate student, Iain Chalmers, who put Cochrane’s pioneering ideas into action.
Like Cochrane, Chalmers started questioning the basis of his medical training while working as an obstetrician in the 1970s in Gaza. “Some of the treatments I had been taught to give at medical school were actually harming, and sometimes killing, my patients,” the British medical researcher recalled in a 2006 interview with The Lancet. So, after completing his master’s degree with Cochrane, Chalmers resolved to bring together data from every randomized trial he could find on care in pregnancy, childbirth, and the neonatal period. In 1989, he completed that first systematic review with the help of friends, including Dickersin. He then founded the U.K. Cochrane Center in 1992 to continue synthesizing the research evidence in other medical fields.
Today the collaboration has grown to 14 centers worldwide (Dickersin directs the U.S. center), with more than 31,000 reviewers in 120 countries. And collectively, they’ve now produced some 5,600 systematic reviews. Each one is available at www.cochrane.org, not only in technical format for the academic community, but also as accessible summaries that doctors, clinics, policymakers, and consumers can use to make healthcare decisions and create guidelines.
It’s this combination of careful, scientific synthesis and user-friendliness that impresses ASA and SSSA Fellow Paul Fixen of the International Plant Nutrition Institute (IPNI) the most. “They’ve made a deliberate attempt to get data synthesized and summarized in a way that has meaning to the practitioner—whether in healthcare or, in our case, crop care,” he says. That doesn’t mean crop advisers aren’t using synthesized evidence now, Fixen quickly points out. The difference lies in the thoroughness and transparency of the process.
“We have done a pretty poor job in the past, I think, of connecting recommendations with the data they’re based on. So if we can make that more transparent, I think we gain a significant amount of credibility because the science and the recommendation and the practice are all connected.”
The Systematic Review
The first step in all of this, though, is the systematic review. Unlike the regular review articles that appear in many journals, a systematic review follows strict standards for gathering, evaluating, and combining all the studies on a topic—much like the rules scientists follow when they do empirical research. The search for papers is so meticulous and structured, for example, that it may take six months to find and sort through all the hits. Authors are also expected to develop and stick to explicit criteria for deciding which articles and data will be included in the review and which will not, a feature that helps guard against bias.
Some systematic reviews also include meta-analysis, where statistics are applied to data gathered during the review. But even without this quantitative step, the average Cochrane review takes about two years and three people to complete, Dickersin says.
It’s a daunting outlay of time and effort, but the North American fertilizer industry is betting the investment will be worth it. Last April, the industry launched its 4R Nutrient Stewardship Research Fund to support studies on the impacts of the nutrient management principles known as the “4Rs” (applying the right fertilizer, at the right time and rate, and in the right place). There’s still a “huge need” to demonstrate the effectiveness of 4R practices, says Fixen, who serves on the fund’s technical advisory committee. But the subject is also already awash in data, and that had the group concerned.
“The sentiment was that we simply don’t have the dollars to reinvent the wheel,” Fixen says. “[We said] we need to identify the knowledge gaps and then support the research necessary to fill them.”
So, in its first request for proposals (RFP) last October, the fund called specifically for systematic reviews and meta-analyses, which it will support to the tune of $300,000. In the meantime, a second RFP for $500,000 worth of new research projects has been issued. “But it’s very telling that the first one out of the chute is looking backward and reviewing what we already know,” Fixen says.
Similarly, Purdue professor and SSSA member Ron Turco wants his graduate students to undertake a meta-analysis before beginning their dissertation projects. Otherwise, he finds they aren’t really capable of identifying the critical knowledge gaps in the literature.
“They don’t know how to decide what’s important to pursue,” says Turco, who co-organized the Smarter Agriculture workshop last fall. “Meta-analysis allows them to tie a lot of pieces together [before] the start of a project, so they can launch in a better direction.”
Turco agrees with Fixen on another point, as well: Meta-analysis and systematic review are desperately needed to address issues of water quality. Huge amounts of time and money are spent on conservation practices to protect and improve it, and yet the same basic question remains: Are those practices actually working? “It’s a great question, except that it’s nearly impossible to answer,” Turco says. The reason, he believes, is the same one that stymied the medical community’s assessment of thrombolytic therapy for so long. Individual studies usually don’t collect enough samples or possess enough statistical power to show a significant effect on their own.
But if those sampling points were pooled and analyzed together, Turco thinks scientists might be shocked to see how many water quality questions have been resolved, or at least more fully resolved than people suspect. At the same time, he recognizes the difficulty of doing this when many established researchers like him were never taught the required skills. He’s now hoping that training for students in meta-analysis becomes a big part of the evidence-based agriculture push.
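Turco’s point about statistical power can be shown with back-of-envelope arithmetic. The effect size, noise level, and sample sizes below are hypothetical, but they capture why individual small studies miss what pooled data can detect:

```python
import math

# Hypothetical scenario: a conservation practice cuts stream nitrate by
# 0.5 mg/L on average, against site-to-site noise with sd 2.0 mg/L.
# (Both numbers are invented for illustration.)
effect, sd = 0.5, 2.0

def z_stat(n):
    """z statistic an average study of n samples would see for this effect."""
    return effect / (sd / math.sqrt(n))

# One small study cannot clear the 1.96 threshold for significance...
print(f"one 10-sample study:          z = {z_stat(10):.2f}")   # ~0.79
# ...but twelve such studies pooled together can.
print(f"twelve studies pooled (n=120): z = {z_stat(120):.2f}")  # ~2.74
```

Because the standard error shrinks with the square root of the sample size, pooling twelve 10-sample studies roughly triples the z statistic — turning a real but individually invisible effect into a statistically detectable one.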
“I think we’re looking at a generational change, a paradigm shift in how we do things,” he says. “I think the next generation of scientists will be much better at it.”
Reporting Requirements for Journal Articles
Brouder, too, harbors no illusions about the obstacles ahead—especially after attempting a systematic review herself this past summer. As someone who’s always been curious about how different disciplines develop policy and practice from science, Brouder was thrilled by an invitation to review the research underlying conservation practices being used on smallholder farms in developing countries. Then she got started and realized how “naïve” she’d been.
“I say ‘naïve,’ ” she says with a laugh, “because although I was intellectually interested, I had no idea of the magnitude of the task.”
Specifically, she and her colleague, Helena Gómez Macpherson of the Institute of Sustainable Agriculture (CSIC) in Spain, were asked to synthesize the evidence on the effects of zero tillage on crop yields in sub-Saharan Africa and South Asia. A second objective was to assess the quality of the research on these practices. So, Brouder began gathering all the review articles she could uncover on the topic, while Gómez Macpherson prepared to pool high quality data from the primary literature for a meta-analysis. But they both quickly ran into snags.
After studying a paper that outlined eight or so criteria for a good systematic review, for example, Brouder started checking the review articles she’d assembled to see which of these standards they met. “And lo-and-behold,” she says, “they didn’t meet any of them.” Reviewers failed to report the terms they’d used to search the primary literature. Many reviews didn’t even list the databases and bibliographies that were used. Moreover, little or no information existed on why authors had included certain papers and left out others—making Brouder acutely aware “of the professional bias we all bring [to review articles],” she says.
Meanwhile, Gómez Macpherson was finding most papers she’d collected contained such vague methods that she often couldn’t fully understand the controls, tell if the experimental design was sound, or assess how soil conditions or other factors influenced the results. Much of the data, therefore, couldn’t be included in the meta-analysis without first contacting authors for more information—something she and Brouder didn’t have time for under their deadline.
So, in the end, the pair scrapped the complete systematic review with meta-analysis, adding instead a checklist of minimum data reporting standards needed for a high quality paper in the conservation ag literature. That checklist (modeled, incidentally, after one in the Cochrane Handbook for systematic reviews) and the rest of their analysis were published online in February in Agriculture, Ecosystems, and Environment.
Brouder and Volenec now hope that other agricultural journals—especially those of the Societies—will adopt similar reporting requirements for the papers they publish. It’s not the main reason to pursue the evidence-based model by any means, Volenec says. “But as we begin to discuss what’s needed to do effective systematic reviews downstream, we’ll have to go back upstream and say, ‘Here are the minimum data sets and information about methodologies that must appear in our journals.’ ”
“A lot of the literature can’t be used right now [for synthesis] because it doesn’t contain all the relevant bits and pieces,” Brouder agrees.
Overcoming Bias against Publishing ‘Negative’ Results
But there’s something even bigger standing in the way of successful systematic reviews, according to Brouder: The widespread bias against publishing so-called “negative” results.
Everyone in science knows how experiments that fail to find a significant effect or end up confirming the null hypothesis frequently aren’t reported. Journals often don’t accept these studies, for one, nor do such studies win acclaim from tenure committees and academic departments. But when findings go unpublished, this also “means the data, which are real, and the results, which are real, aren’t there to contribute to the overall synthesis of what happens through time and space,” Brouder says. In medicine, this has produced scandals like the one in 2004, when people discovered that clinical trial data showing a link between use of selective serotonin reuptake inhibitor (SSRI) antidepressants and teen suicide had gone unpublished. But the perils of basing interventions on partial and poor-quality data can be just as great in agriculture.
Brouder and Gómez Macpherson were asked to do their systematic review, for instance, because of concerns that conservation practices were hurting yields on smallholder farms. Just as in North America and Europe, Brouder explains, farmers in the developing world are being encouraged to adopt zero tillage to protect the soil, conserve soil water, and meet other conservation aims. But if crop yields drop as a result, not only does this harm the profitability of farms that already make little or no profit, she says; it also sows “massive distrust” in the extension system that may take years to overcome.
And, of course, the stakes are high even on prosperous North American farms, adds Fixen. “You don’t base expensive, high-risk decisions about how to do things in the field on any one [piece of research],” he says. “It’s always a process of synthesis.” That’s why when he, Brouder, and the others describe the various benefits of systematic review and meta-analysis, they all return eventually to one point.
“This whole concept of evidence-based decision-making, if we can get it right and deliver it all the way out to the field, should have incredible appeal for the practitioner,” Fixen says.
Those are obviously two gigantic “ifs” with an overwhelming number of issues attached—issues whose resolution will undoubtedly involve serious debate, missteps, uncertainty, and frustration. Yet, the struggle alone is likely to have enormous value, Volenec and the others assert. What it might just do, in fact, is rejuvenate agricultural science.
“All I can say is that I learned a lot,” Brouder notes of her first attempt at systematic review. “It changed my perspective on how one does empirical studies, the profession of extension, and the skills I think I need to have to finish off the second half of my career.”
“Of all the things I’ve seen the Societies doing recently that I think could impact our organization, it’s this initiative,” agrees ASA Fellow and SSSA member Deanna Osmond, a North Carolina State University soil scientist who spoke at the Smarter Agriculture meeting last fall.
“The meeting in Maryland wasn’t just an interesting meeting,” she adds. “It made me excited in a way I haven’t been for years.”