For scientists, publication in Nature is a career high-water mark. To make its pages, work must be deemed exceptionally important, with potentially transformative impact on scientific understanding. In 2006, a study of Alzheimer’s disease by the lead author Sylvain Lesné met those criteria: It suggested a new culprit for the illness, a molecule called Aβ*56, which seemingly caused dementia symptoms in rats. The study has since accrued more than 2,300 citations in the scientific literature and inspired years of follow-up work. But an investigation of the original paper and many others by Lesné, described last week in Science, identified numerous red flags indicating the possibility of data fraud. (Nature has added a note to the paper, saying that the work is under investigation and that its results should be treated with caution.)
Some of Lesné’s colleagues in the field had been wary of his work for quite some time. The Science article notes that Dennis Selkoe, an Alzheimer’s disease researcher at Harvard, had looked for Aβ*56 in human tissues and reported in 2008 that he’d come up empty. A news item on the Lesné revelations, posted last Friday to a site called Alzforum, reported that many other scientists said that “they tried but were unable to replicate the findings,” and never published those results. “We have always been skeptical of this work,” the McGill University professor and Alzheimer’s disease researcher Gerhard Multhaup declared in a comment on that post. “I have long since written off Aβ*56 as an artifact,” said Dominic Walsh, the head of an Alzheimer’s disease research unit for the global biotech firm Biogen. “We were skeptical about the data from the beginning,” added Christian Haass, a scientist at the German Center for Neurodegenerative Diseases, in Munich.
After reaching out to Lesné, The Atlantic received an emailed statement from the University of Minnesota, where he is employed: “The University is aware that questions have arisen regarding certain images used in peer-reviewed research publications authored by University faculty Karen Ashe and Sylvain Lesné. The University will follow its processes to review the questions any claims have raised. At this time, we have no further information to provide.”
Not every unreplicable result is evidence of wrongdoing, of course. Real findings can be difficult to reproduce, and even with best practices, false positives sometimes occur. But Lesné’s work had also raised suspicions of misconduct. In 2013, an anonymous poster on PubPeer, a forum for discussing potential flaws in published papers, pointed to possible image manipulation in a study that had come out the year before. Late last year, the forum highlighted similar concerns in other Lesné papers. Yet neither of these posts would lead to any formal inquiry, nor would any of the murmurings described above. The formal process of reviewing Lesné’s suspect published work, let alone retracting it, has only just begun, and the research community may wait years before it’s finished. Is the scientific whisper network always this inert?
Science is an enterprise built on trust, and in general, scientists do not attribute to malice what could be equally well explained by ineptitude. Peer review is far from perfect, often failing utterly to do its job, and journals have a well-established bias toward publishing positive results. Errors in published works are legion, from errant inferences to inappropriate statistics. Voicing concerns over suspect results, however, is fraught with peril. Careers in academia are precarious, research communities can be small, and open criticism may garner enmity from colleagues who evaluate submitted papers and grant proposals. Scientists may even cite research they do not believe or trust, for the sake of appeasing publishers, funders, and potential reviewers. This might explain the large number of citations of Lesné’s work.
Researchers who have the audacity to go public with their concerns typically find that the reaction is anemic. Academic publications are the currency of scientific prestige, winning accolades for researchers and journals alike. The interests of a paper’s author are thus aligned, to some extent, with those of its publisher, and both may be reluctant to engage with criticism. Most suspect work is left to fester in the literature. When corrections do appear, the community can be slow to take note of them; even retracted papers can haunt science from beyond the grave, accumulating citations long after their flaws have been exposed.
Science may be self-correcting, but only in the long term. Meanwhile, the triumph of dubious results increases research waste, and entire careers may be spent on chasing phantoms. A 2021 analysis found that a dismally small proportion of the experiments described in cancer papers could be repeated. A 2009 analysis of multiple surveys in which scientists were asked about their own or others’ misbehavior found that a significant proportion of researchers—perhaps one-fourth or one-third—say they have observed colleagues engaging in at least one questionable research practice, such as ignoring an outlier without due cause. And when Elisabeth Bik, one of the investigators who has examined Lesné’s work, performed an audit of more than 20,000 papers from biomedical research journals, she and her colleagues found that 3.8 percent contained “problematic figures” bearing hallmarks of inappropriate image duplication or manipulation.
Poor practice and a degree of self-delusion explain much of this. Scientists are prone to pathological science, a form of motivated reasoning in which they stack the deck in favor of their pet hypotheses when analyzing or interpreting results. But simple bungling of data can end up looking quite a bit like fabrication; indeed, the demarcation line between the two is rather nebulous. The 2009 analysis concluded that about 2 percent of scientists will admit to having participated in outright research fraud.
The Lesné affair shows how these problems are accepted as the somber status quo, even when doubts persist. The “publish or perish” mantra of academia invites the worst possible outcomes: the dominance of false findings, spiraling research waste, and the alienation of the most diligent scientists. A culture of transparency, where honest errors are readily corrected and fraud impeded, would provide a lasting remedy, but it cannot take hold unless the perverse incentives of scientific success are reimagined. As it stands, there is nothing to be gained by questioning the work of others, but plenty of risk. Skepticism rarely leads to accountability, and whisper networks will not stem the tide of suspect research.
David Robert Grimes is an assistant professor at Trinity College Dublin, a fellow of the Committee for Skeptical Inquiry, and the author of Good Thinking: Why Flawed Logic Puts Us All at Risk and Why Critical Thinking Can Save the World.