
More on Experts and ‘Unreliable’ Articles

November 18, 2015

New York Law Journal 

Over the years, this column has featured much discussion of a growing trend I have called “Trial By Literature.”1 This phenomenon involves experts testifying about the content of published articles they did not author and about the results of research they did not perform. It is as if the hearsay article itself is “testifying.” The author or researcher is not. Usually, the author is not even in the courtroom and is unavailable to be cross-examined. Instead, the testifier-expert acts as a conduit for those article excerpts the testifier elects to use or to emphasize. This dynamic presents formidable problems for lawyers opposing or challenging the expert’s testimony. Judges, too, are challenged mightily, for they are “gatekeepers” of the reliability of expert testimony.

It is axiomatic that experts testifying on scientific and technical subjects must meet standards of evidentiary reliability.2 But what happens when the hearsay literature the expert-testifier wishes to use actually amounts to “junk science”? What if the findings, foundations, methodologies, conclusions or opinions in the article are suspected of being unreliable? Worse, what if they were fraudulent or flawed? The testifier may attempt to vouch for the literature (or the authors), but that would amount to mere belief. The article is hearsay for the testifier as well: he did not write it, nor did he perform the research. So how can he be the objective arbiter of the article’s reliability?

As my prior columns showed, these very practical challenges have been exacerbated by sensational revelations that even front-line, respected journals can have serious reliability issues. My September 2005 column reported that a noted epidemiologist concluded that most published biomedical research findings were false.3 The same column reported that the entire June 5, 2002, issue of the respected Journal of the American Medical Association (JAMA) was devoted to the question whether biomedical literature truly meets assumed standards of quality and trustworthiness. The investigation turned up numerous episodes of “appalling standards” of quality, despite peer review, which itself had flaws.

Similar assessments were reported in my September 2011 column,4 including a caution in an editorial by Trevor Ogden, chief editor of the Annals of Occupational Hygiene, a respected British journal. Ogden expressed frustration over the way publications have been used in lawsuits. Science papers are “all about contributing to an ongoing debate as to how we must interpret certain observable facts.” Thus, a single paper “can never reveal the absolute truth.” Each paper must carefully discuss its own pros and cons. Peer review is only a “coarse and fallible filter,” and some mistakes or shortcomings are likely.

Additional problems have appeared with the proliferation of so-called “open access” journals, hundreds of them. My May 2014 column reported on the “sting” operation by John Bohannon and Science magazine, which created a “hoax article” containing “grave errors” and submitted it to 304 open access journals. One hundred fifty-seven journals accepted the paper; only 98 rejected it. Some 60 percent of the submissions that went through the editing process did not undergo peer review. Only 36 of the 304 submissions generated peer review comments recognizing any of the paper’s scientific problems.5 My June 2014 column cited many additional sources on peer review’s frailties.6

Have these problems persisted? Lamentably, the answer seems to be “yes.” Under the title “A Reporter Published a Fake Study to Expose How Terrible Some Scientific Journals Are,” Joseph Stromberg reported that a reporter for the Ottawa Citizen wrote a “plagiarized, completely incoherent paper about soils, cancer treatment and Mars.” Eight scientific journals wanted to publish it. Tom Spears, the reporter, wrote the paper as part of a “sting” operation to expose so-called “predatory” science journals, that is, “online-only, for-profit operations” that “take advantage” of inexperienced researchers under pressure to publish their work in any outlet “that seems superficially legitimate.” Such journals do not conduct peer review. Spears built his sting article entirely from “unrelated phrases copied from legitimate existing research.” He then sent it to 18 journals. Eight quickly responded, offering to publish Spears’ work for fees ranging from $1,000 to $5,000.7

Earlier in the year, Stromberg carried out a “similar sting” involving a book publisher, a for-profit company that does not conduct peer review but publishes physical books of academic theses and dissertations. The publisher contacted Stromberg, offering to publish his undergraduate thesis for no fee; in return, it gained the “permanent rights” to his work along with the ability to sell copies of it for “exorbitant prices online.” The publisher, however, failed to notice that Stromberg “stuck in a totally irrelevant sentence towards the end, highlighting the fact that they publish without proofreading or editing.” This is not an isolated incident. Over the years, the number of predatory journals has “exploded.” Jeffrey Beall, a librarian at the University of Colorado, keeps an up-to-date list of them (http://scholarlyoa.com/publishers/) to help researchers avoid being taken in.

Fabricated Data

Rachel Feltman reported in The Washington Post that “two scientific journals accepted a study by Maggie Simpson and Edna Krabappel,” a “nonsense paper” from a made-up university with author names borrowed from “The Simpsons” TV show. The opening summary of the Simpson-themed bogus paper stated:

The Ethernet must work. In this paper, we confirm the improvement of e-Commerce. WEKAU, our new methodology for forward-error correction, is the solution to all of these challenges.

In August 2015, Benedict Carey wrote an article in the N.Y. Times, “Many Psychology Findings Not As Strong As Claimed, Study Says.” Carey reported that a “star social psychologist was caught fabricating data, leading to more than 50 retracted papers.” Further, a “painstaking…effort to reproduce 100 studies published in three leading psychology journals” has found that “more than half of the findings did not hold up when retested.”

The conclusions from the analysis were reported in the journal Science, confirming the “worst fears of scientists who have long worried that the field needed a strong correction.” The study found no evidence of fraud or that any study was “definitively false.” Rather, the analysis concluded that the evidence for most published findings “was not nearly as strong as originally claimed.” The report appears at a time when, says Carey, “the number of retractions of published papers is rising sharply in a wide variety of disciplines.”

Even the prestigious New England Journal of Medicine published a recent piece titled “Deception by Research Participants,”8 noting studies suggesting that “misconduct by research participants is a serious problem in clinical trials that provide financial compensation (e.g., for a participant’s time and inconvenience).” In one study, 25 percent of participants admitted to exaggerating symptoms in order to qualify for enrollment; 14 percent admitted to pretending that they had a health problem they did not have; and high percentages admitted failing to disclose important information such as enrollment in another study, health problems, prescription drug use and recreational drug use. Fabrication or falsification of information by participants “can undermine the integrity of a study by biasing the data.”

The authors say that results can be significantly affected even if a small percentage of participants pretend to have the disease. Since these persons have no genuine condition to improve, they are “destined to succeed” on the outcome measures regardless of which arm of the trial they are assigned to, so the study’s “statistical power” and “apparent effect size” can be substantially reduced. This can result in pharmaceutical companies inappropriately discontinuing the development of effective medications, preventing patients from receiving valuable new treatment options. Similarly, results related to safety could be affected when healthy participants falsify their medical history to qualify for a study. The fact that high percentages of deception can (and do) permeate clinical trials and, therefore, can skew the reports or findings that ensue demonstrates the potential for publication of unreliable studies, even by well-intentioned researchers.
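To see why, consider a simple hypothetical (the figures are mine, not the authors’). Suppose a drug genuinely helps 50 percent of true patients while the placebo “helps” 30 percent, a 20-point difference. If one-fifth of the enrollees are healthy pretenders who, having nothing to cure, report improvement 90 percent of the time in either arm, the observed response rates become (0.8 × 50) + (0.2 × 90) = 58 percent for the drug and (0.8 × 30) + (0.2 × 90) = 42 percent for the placebo. The apparent effect shrinks from 20 points to 16, and a correspondingly larger trial would be needed to detect it.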

Indeed, even the vaunted journal JAMA, on Oct. 13, 2015, published a “Notice of Retraction” by authors of an article published in the Feb. 6, 2013, issue. It seems that a “recent internal subanalysis” of the data revealed “anomalies” which triggered “an investigation and an admission of fabricated results” by Anna A. Ahimastos, who was both the first and corresponding author and was “responsible for data collection and integrity for the article.”

The retraction notice said that no other coauthors were involved in “this misrepresentation.” The writers of the retraction “recognize the seriousness of this issue and apologize unreservedly” to the editors, reviewers, and readers of JAMA. The notice says that “a system of good clinical practice was in place; however, clinical governance and audit procedures will be reviewed and strengthened to minimize the chance of possible recurrence of such behavior.” Other studies in which Dr. Ahimastos had oversight of data collection and integrity were being examined.9

In an Oct. 21, 2015, article called “Peer-Review Fraud—Hacking the Scientific Publication Process,” Dr. Charlotte D. Haug reported that in August the publisher Springer retracted 64 articles from 10 different subscription journals “after editorial checks spotted fake e-mail addresses, and subsequent internal investigations uncovered fabricated peer review reports.” These retractions came only months after BioMed Central, an open-access publisher also owned by Springer, retracted 43 articles for the same reason. Alison McCook, a writer for the blog Retraction Watch, said that the increasing number of retractions due to fabricated peer reviews was “officially becoming a trend.”

Dr. Haug’s article goes on to detail how it is possible to fake peer review. The writer seeking publication gives journals recommendations for peer reviewers of his manuscript, providing names and email addresses. But the addresses are ones the writer creates, so the requests to review go directly to him or his colleagues. “Not surprisingly, the editor would be sent favorable reviews—sometimes within hours after the reviewing requests had been sent out.” In one case, a South Korean researcher’s confession to the fraud led to the retraction of 28 articles from various journals. These frauds are made more feasible when publishers allow or encourage authors to suggest reviewers. Even though many editors dislike the practice, it is “frequently used.”

Dr. Haug concludes that the electronic manuscript-handling systems used by most journals are “as vulnerable to exploitation and hacking as other data systems.” Most electronic submission systems have “loopholes that can easily be hacked.” She suggests that “perverse incentive systems in scientific publishing” (mostly) reward authors for publishing many articles and (mostly) reward editors for publishing them rapidly. This means that “new ways of gaming the traditional publication models will be invented more quickly than new control measures can be put in place.”

‘Researchers’ Privilege’

Obviously, the sense of gloom suggested by the foregoing realities mandates that lawyers prepare well in order to challenge experts relying on unreliable hearsay literature. Judges, too, have to be willing to let the gatekeeping task unfold, with appropriate discovery directed to the testifier-expert and disclosure about the hearsay literature itself, and about the authors, their data, foundations and methodologies. These are not easy tasks, but the objectives of obtaining reliable evidence and the search for the truth compel the effort.

To help dispel some of the gloom, a new law review article has burst onto the scene, shedding much-needed light on practical steps lawyers may need to take, and judges may need to allow, when assessing the reliability of hearsay articles upon which experts attempt to rely. The article is called “Researchers’ Privilege: Full Disclosure.”10 The authors are Dr. Frank C. Woodside, III, a nationally known trial lawyer with a medical degree, and Michael J. Gray, an associate, both with a Cincinnati litigation firm. The authors have noted the alarming increase in the number of articles based on questionable methodology, studies containing improper statistical conclusions, and results that cannot be replicated. They have observed an “epidemic of faulty research” exacerbated recently by the spread of low-quality academic journals and “pay-to-publish” journals that will publish virtually anything for a fee.

But Woodside and Gray’s article means to go beyond mere reporting on the “crisis of reliability.” They intend to spur efforts to uncover flawed hearsay literature. They observe that peer review “does not work,” so it is vital not to let faulty research go undetected. Yet it is difficult, if not impossible, to evaluate published research findings “without access to the underlying information that researchers have in their possession.”

In considering the practicalities of getting that access, the authors ran into a construct called the “researchers’ privilege.” This principle is asserted to protect raw data and materials of a third-party researcher from disclosure. Some consider the privilege a subcategory of “academic privilege” or the “academic freedom privilege.” The authors found, however, that this so-called “privilege” enables flawed research “to remain hidden” rather than making it “more transparent and easier to evaluate.”11

Woodside and Gray observe that neither the common law nor any explicit federal or state statute protects research data. So, absent that, the public “has a right to a researcher’s data when such data is at issue in a lawsuit.” Indeed, say the authors, the time has come “to bury the researchers’ privilege once and for all.”12 The simple solution is: “courts should favor the disclosure of research data by third-party researchers. If some data needs to be protected, then courts can accomplish this by issuing confidentiality orders.”13

Conclusion

Unfortunately, the problem of flawed and unreliable scientific and technical literature persists. Peer review, when performed, does not solve the problem because of its many frailties. When experts rely upon or quote hearsay literature, they may be expressing “junk science.” The trustworthiness of the literature needs to be probed. The new law review article by Dr. Woodside and Mr. Gray, urging the demise of the so-called researchers’ privilege, is a helpful resource in the quest for experts’ reliability.

Endnotes

  1.  See M. Hoenig, “‘Unreliable’ Articles: More on Peer Review’s Frailties,” New York Law Journal, June 9, 2014, p. 3; “‘Unreliable’ Articles, Trial By Literature Revisited,” NYLJ, May 12, 2014, p. 3; “Testifying Experts and Scientific Articles: Reliability Concerns,” NYLJ, Sept. 16, 2011, p. 3 (citing prior articles on experts’ use of unreliable hearsay, scientific papers questioning the reliability of biomedical articles, and reporting shortcomings even in those that were peer-reviewed); “Gatekeeping of Experts and Unreliable Literature,” NYLJ, Sept. 12, 2005, p. 3.
  2.  See Daubert v. Merrell Dow Pharms., 509 U.S. 579 (1993); Federal Rules of Evidence 702, 703; see generally M. Hoenig, “Gatekeeping: Reliability of Expert Testimony Under Daubert (and Frye),” Chapter 14 in Vol. 2, “Preparing For and Trying The Civil Lawsuit” (N.Y. State Bar Ass’n; Editors-in-Chief: N.A. Goldberg & J.P. Freedenberg; 2d ed.).
  3.  Hoenig, “Gatekeeping of Experts and Unreliable Literature,” NYLJ, Sept. 12, 2005, p. 3 (citing John P.A. Ioannidis, “Why Most Published Research Findings Are False,” Vol. 2, Issue 8, Public Library of Science Medicine (Aug. 30, 2005), doi:10.1371/journal.pmed.0020124).
  4.  “Testifying Experts and Scientific Articles: Reliability Concerns,” NYLJ, Sept. 16, 2011, p. 3.
  5.  Hoenig, “‘Unreliable’ Articles, Trial By Literature Revisited,” NYLJ, May 12, 2014, p. 3; see John Bohannon, “Who’s Afraid of Peer Review?”, Science, Oct. 4, 2013, Vol. 342, No. 6154, pp. 60-65, DOI: 10.1126/science.342.6154.60, http://www.sciencemag.org/content/342/6154/60.full; see Claire Shaw, “Hundreds of Open Access Journals Accept Fake Science Paper,” The Guardian, Oct. 4, 2013, http://www.theguardian.com/higher-education-network/2013/oct/04/open-access-journals-fake-paper.
  6.   Hoenig, NYLJ, June 9, 2014, p. 3.
  7.  Spears’ sting paper was titled “Acidity and Aridity: Soil Inorganic Carbon Storage Exhibits Complex Relationship with Low-pH Soils and Myeloablation Followed by Autologous PBSC Infusion.” One journal told Spears it had the piece reviewed by a soil expert and was willing to publish it. Another journal described itself as “an International Research Online Journal publishing the double blind peer-reviewed research papers in all fields of multi-science.”
  8.  D.B. Resnik, D.J. McCann, “Deception By Research Participants,” 373 N. Eng. J. Med., No. 13, pp. 1192-1193 (Sept. 24, 2015).
  9.  “Notice of Retraction: Ahimastos AA, et al. Effect of Ramipril on Walking Times and Quality of Life Among Patients With Peripheral Artery Disease and Intermittent Claudication: A Randomized Controlled Trial, JAMA 2013; 309(5): 453-460,” in 314 JAMA, No. 14, 1520 (Oct. 13, 2015).
  10.  F.C. Woodside, III & M.J. Gray, “Researchers’ Privilege: Full Disclosure,” 32 W. Mich. U. Cooley L. Rev. 1 (2015).
  11.  Woodside & Gray, id. at 17-18.
  12.  Id. at 19.
  13.  Id. at 32-33.