‘Unreliable’ Articles, ‘Trial By Literature’ Revisited
New York Law Journal
This column revisits a challenging topic that cuts across the spectrum of complex litigation—the reliance upon and use of unreliable hearsay literature by expert testifiers. Often these are technical or scientific articles published in some journal with a claim that the published work product has been “peer reviewed.” In earlier articles, I exposed major problems with overstating the reliability of such out-of-court articles not authored by the testifier. With the increasing trend to “trial by literature,” it seemed high time to revisit the subject. Have things improved? Probably not. Indeed, the problems disclosed earlier seem only to have worsened.
In particular, there has been a global proliferation of journals whose quality-review practices function very differently from the classic model we used to know. Many so-called “open-access” journals charge the author a fee for the articles they accept. That dynamic creates potential conflicts of interest. Many of these journals publish articles without peer review. Others perform a so-called peer review that is laughable and porous. How do we know that? Because a Harvard science journalist recently conducted a sensational “sting” operation, a hoax in which he submitted a deliberately flawed science article to hundreds of journals. The results are shocking. More about that later. First, to set the stage, we offer some background on the problem and our earlier findings.
We need to be honest. There is a place and need for hearsay literature. Much of what we say or do is based on what we learn. Much of what we learn is based on what we read. Much of what we read is based on what others have read. Much of what those others have read is based on what still others have written. And so on. It becomes inevitable, therefore, that, sooner or later, much of expert testimony boils down to what experts have read or learned or confirmed from writings. There’s nothing wrong with that in and of itself.
If the expert enhances his or her expertise by reading scientific, technical or professional writings, or benefits others by researching and writing as an expert, society is normally better off for the effort. The “if” in the foregoing premise is important, however, especially when it comes to expert testimony in litigation. Society’s objectives in the courtroom are to search for the truth and do justice. If the writings experts rely on are trustworthy, accurate and professionally reliable, they have the potential to enhance the expert’s role, and therefore the jury’s, in the truth-finding process. If the writings are “junk,” however, and the expert relies on them or professes them to be the truth, the expert’s testimony is no better than the junk he or she is reciting.
Sometimes, the quality and trustworthiness of professional writings fall between the extremes of “reliability” and “junk,” into a vast gray area of “quasi-reliability” or “not-quite-reliable” or “not-quite-junk.” The articles may be published by journals with professional-sounding names or by institutions or entities recognized in the technical world, thereby creating an aura of trustworthiness that masks the diminished quality of the substantive content—existing somewhere along the scale in the gray zone (perhaps with enough slivers of accuracy thrown in to help disguise the “junky” portion). What happens when the expert relies on such less-than-reliable professional literature? What should be the consequences of such reliance?
In general, the justice system wants the expert to testify to give juries the benefit of his or her expertise, not to be a mere reader to jurors of out-of-court, hearsay materials. If the expert becomes a mere reader-out-loud of someone else’s thoughts or opinions, then the “someone else” is really doing the testifying, not the expert. That might not be so bad if we could guarantee that the out-of-court writing is genuine, accurate, trustworthy, reliable, relevant and “fits” the facts and issues in the case. But how can we know that? Ordinarily, we cannot cross-examine the writing, and the author of the technical or scientific or specialized hearsay is not in court to answer questions. Only a surrogate—the so-called trial “expert”—is. But he or she often knows only what was stated in the article. Beyond the confines of the actual text, the published findings and explicit writing, the trial expert usually is in the realm of “I don’t know” (if one is truthful) or perhaps speculation (if one indulges in belief or guesswork).
False Findings
Prior columns have reported on serious problems with the reliability of many scientific articles, even those published in vaunted science or technical journals.1 So, for example, the entire June 5, 2002, issue of the prestigious Journal of the American Medical Association (JAMA) was devoted to a soul-searching, critical analysis of major shortcomings in the articles’ research, methodologies, and even the peer review process. Important weaknesses often were not reported in the published articles. The published report often masked “the true diversity of opinion among contributors about the meaning of their research findings,” resulting in a de facto “hidden research paper” behind the published article. Results sometimes were selectively reported and the authors “drew unjustified conclusions.” One of JAMA’s analyzers said, “Many readers seem to assume that articles published in peer-reviewed journals are scientifically sound, despite much evidence to the contrary.”2 JAMA’s details were arresting.
Then a respected epidemiologist, John P.A. Ioannidis, published an article in PLoS Medicine, the journal of the Public Library of Science, dated Aug. 30, 2005, titled “Why Most Published Research Findings Are False.” An editorial in the same journal conceded that Dr. Ioannidis had argued “convincingly” and that his claim that most published conclusions are false “is probably correct.” These and still other sources disclosing such shortcomings are discussed in my prior articles cited in the endnotes to the instant column.
Why should courts and litigants be concerned about unreliable, junky literature masquerading as the testimony of even a qualified expert? Because reliability of expert testimony is a bedrock principle behind admissibility of the testimony in the first place. Even a qualified expert must give testimony that is both relevant and reliable. If, however, the literature the expert relies on is itself unreliable or partially junky, then his or her testimony can be no better. It might be articulate, it might be slick, it might sound good, but it is no better than the expert’s guess, speculation, conjecture or whim. The justice system demands more. The search for the truth does not depend on the expert’s façade or his or her parroting of something written in a published journal, especially where the article’s reliability itself is shaky or questionable.
Since we last reported on such qualitative shortcomings in our September 2011 column, have reliability concerns about science literature abated? Has the angst about the quality of the peer review process dissipated? Should courts and litigators still be wary, even suspicious, about the trustworthiness of articles upon which experts rely? Unfortunately, the concerns have not abated, and the answer to the last question is not only “yes” but “yes” even more emphatically! Here are some reasons why.
A relatively new development over the last decade is the proliferation of so-called “open-access” (OA) journals, hundreds of them, published even by industry giants such as Sage, Elsevier and Wolters Kluwer. Here is what John Bohannon, a science journalist at Harvard University, says about some of them in an Oct. 4, 2013, article in Science Magazine: “From humble and idealistic beginnings a decade ago, open-access scientific journals have mushroomed into a global industry, driven by author publication fees rather than traditional subscriptions. Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”3 Notably, many of the OA journals use a so-called “gold” open-access route that requires the author to pay a fee if the paper is published.4
After a number of experiences involving prospective authors and article reviewers raised suspicions about OA journals’ practices, the Science Magazine editorial staff contacted Bohannon. Intrigued, Bohannon looked into some of the journals’ websites and contacted their editors and reviewers. What he found was disturbing. He decided to submit a science paper of his own, under a fictitious name, to a journal of the Scientific & Academic Publishing Co. (SAP). This would be an experiment “to get the lay of the shadowy publishing landscape.” In short, this was to be a sting operation. And, to compare the one target to other publications, Bohannon would then have to “replicate the experiment across the entire open-access world.”
‘Sting’ Operation
Bohannon created a “credible but mundane” paper with such “grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable.” The hoax article described a simple test of whether cancer cells grow more slowly in a test tube when treated with increasing concentrations of a molecule. In a second “experiment,” Bohannon wrote that the cells were treated with increasing doses of radiation to simulate cancer radiotherapy. The data were the same across all versions of the paper, and so were the bogus conclusions: “the molecule is a powerful inhibitor of cancer growth, and it increases the sensitivity of cancer cells to radiotherapy.”
There were numerous “red flags” in the papers. The graph was inconsistent with, indeed the opposite of, the data. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s shortcomings immediately. The hoax paper was sent to 304 OA journals at a rate of about 10 a week. The sting article was accepted by 157 of the journals and rejected by 98. Of the 255 versions that went through the entire editing process to either acceptance or rejection, 60 percent did not undergo peer review. Of the 106 journals that did conduct peer review, some 70 percent accepted the paper. PLOS ONE, published by the Public Library of Science, was the only journal that called attention to the paper’s potential ethical problems, and it rejected the paper within two weeks.
Only 36 of the 304 submissions generated peer review comments recognizing any of the paper’s scientific problems. And 16 of those papers were accepted by the editors “despite the damning reviews.” One-third of the journals targeted in the sting operation were based in India, the largest single base; the United States was the next largest, with 29 acceptances and 26 rejections. The publishing powerhouses that profited from those activities, however, were in Europe and the United States. A major publication, the Journal of International Medical Research, sent an acceptance letter and an invoice for $3,100 without asking for any changes to the scientific content.5
Apart from the startling Bohannon experiment, many of the problems reported in my earlier articles seem to persist.6 In the biomedical area, for example, “published research findings are often modified or refuted by subsequent evidence.” There is increasing concern about a publication “bias toward positive results,” a competition to “rush findings into print,” and an overemphasis on publishing “conceptual breakthroughs” in high-impact journals. Misleading papers result in considerable expenditure of time, money and effort by researchers “following false trails.”7 Leaders at the U.S. National Institutes of Health are planning “interventions” to ensure the reproducibility of biomedical research. There is also the problem of what is not published. “There are few venues for researchers to publish negative data or papers that point out scientific flaws in previously published work.” And there is the difficulty of accessing unpublished data.8
As a practical matter, this means that testifying experts relying on published science papers often do not have complete information on the subject because the flaws and negative critiques come later and are largely unpublished. This reality hampers the challenging and cross-examining attorney in exposing the flaws and unreliability of the hearsay literature. Nor does the peer review process guarantee the trustworthiness of the article.
The Bohannon sting operation described above vividly shows that peer review is often not performed at all, or is done ineptly. Further, as Dr. Ioannidis has observed, peer review is not a guarantee of accuracy. The usual assignment of two qualified reviewers is a positive step, of course, but “journal reviewers don’t typically scrutinize raw data, re-run the statistical analyses, or look for evidence of fraud.” What they are reviewing, says Ioannidis, “are mostly advertisements of research rather than the research itself.”9
What does this state of affairs mean for the trial bench and bar? Well, experts ubiquitously rely on all kinds of hearsay claiming it is “professionally reliable.” But, as the famous song goes, “it ain’t necessarily so.” Counsel opposing or challenging the expert testimony have a major task: to uncover the indicia of unreliability of the hearsay literature, at least enough to put the onus or burden of proving its true reliability upon the testifying expert. Suspicions can abound when the expert has heavily relied on the article and parrots its conclusions. Similarly, where the article is the linchpin of the expert’s opinion, the testifier’s lack of knowledge about the details of peer review conducted for that article must be probed. Possible non-unanimity of the authors or lack of detailed knowledge about the raw data should be uncovered. If unpublished negative comments, flaws or critiques were rendered by other scientists, these must be disclosed.
Judges, too, have an important policing role to play. They, too, must be made aware of the vulnerabilities, if any, inherent in the hearsay article. “Trial by literature” tends to pay homage to the writing, but the article cannot be cross-examined. Its truthfulness, its bias, its frailties are not really known. This “article worship” presents dangers that threaten the reliability standard as never before. Litigators certainly have their work cut out for them. However, the sources cited here and in our earlier articles about practical realities in the publishing world, as opposed to fantasy, should be helpful in shaping incisive advocacy. Reliability is a quest worth the fight. Mere publication does not equal reliability.
Michael Hoenig is a member of Herzfeld & Rubin.
Endnotes
1. See M. Hoenig, “Testifying Experts and Scientific Articles: Reliability Concerns,” New York Law Journal, Sept. 16, 2011, p. 3 (citing prior articles on experts’ use of unreliable hearsay, scientific papers questioning the reliability of biomedical articles, and reporting serious shortcomings even in those that were peer reviewed); “Gatekeeping of Experts and Unreliable Literature,” NYLJ, Sept. 12, 2005, p. 3. The articles are also available on LEXIS.
2. See quotations, sources and cites in my articles listed at n. 1, supra.
3. John Bohannon, “Who’s Afraid of Peer Review?,” Science, Oct. 4, 2013, Vol. 342, No. 6154, pp. 60-65, http://www.sciencemag.org/content/342/6154/60.full, DOI: 10.1126/science.342.6154.60.
4. See Claire Shaw, “Hundreds of Open Access Journals Accept Fake Science Paper,” The Guardian, Oct. 4, 2013, http://www.theguardian.com/higher-education-network/2013/oct/04/open-access-journals-fake-paper.
5. See Bohannon and Shaw articles at notes 3 and 4, supra.
6. See, e.g., Sarah Fecht, “What Can We Do About Junk Science?,” Popular Mechanics, April 8, 2014, http://www.popularmechanics.com/science/health/what-can-we-do-about-junk-science-16674140?click=main_sr; Henry I. Miller and Bruce Chassy, “Scientists Smell a Rat In Fraudulent Genetic Engineering Study,” Forbes, Sept. 25, 2012 (Op/Ed), http://www.forbes.com/sites/henrymiller/2012/09/25/scientists-smell-a-rat-in-fraudulent-genetic-engineering-study/; Beate Wieseler et al., “Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data,” PLOS Medicine, Oct. 8, 2013, http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1001526; David H. Freedman, “Lies, Damned Lies, and Medical Science,” The Atlantic, Oct. 4, 2010, http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/; Francis S. Collins and Lawrence A. Tabak, “Policy: NIH Plans to Enhance Reproducibility,” Nature, Jan. 27, 2014, http://www.nature.com/news/policy-nih-plans-to-enhance-reproducibility-1.14586 (article by leaders of the U.S. National Institutes of Health; “checks and balances that once ensured scientific fidelity have been hobbled”; article outlines “interventions” planned by the NIH to ensure reproducibility of biomedical research).
7. Editorial, “Further Confirmation Needed,” Nature Biotechnology, Sept. 10, 2012, http://www.nature.com/nbt/journal/v30/n9/full/nbt.2335.html.
8. Francis S. Collins and Lawrence A. Tabak, “Policy: NIH Plans to Enhance Reproducibility,” Nature, Jan. 27, 2014, supra, n. 6.
9. Sarah Fecht, “What Can We Do About Junk Science?,” supra, n. 6.