Peer Review isn’t the TSA: Let’s Humanize It in the Name of Science

Public deification of some infallible abstraction called “science” does a disservice to real science.

What’s needed is not only more and better scientific studies, but also a renewed understanding of how knowledge is built.

From the headlines proclaiming a state of “crisis” in both social science research and scientific peer review, it might well seem that the lyrics from a Weird Al song have come to pass: “All you need to understand is everything you know is wrong!” Or, as the inimitable Mike Pesca put it on his podcast The Gist, “An interesting new study reveals that most studies aren’t interesting, or new, or particularly revealing.”

Something is certainly amiss in a system that has been widely assumed to ensure that published research findings across numerous scientific disciplines are accurate, honest, ethical, and sufficiently well designed to warrant seeing the light of day.

Even beyond the recent instances of breathtakingly brazen fraud, what underlies the “crisis” headlines seems to have less to do with bad actors, and more to do with what some are calling bad systems.

Something more than concern over systemic flaws is driving many of these calls for the reform—or even the abandonment—of peer review. The language of “crisis” is in part an acquiescence to a public misunderstanding of scientific fact as immutable. Science is not always satisfyingly decisive, nor is its accumulation of facts steadily linear. For scientists (and science journalists) forced to reckon with a public impatient for conclusive answers, humanities scholarship offers useful lessons on how to live and breathe in a space of overwhelmingly vast interpretive possibility.

The problem isn’t so much that science is in crisis as that our epistemological awareness is flabby. Yes, there are systemic problems with peer review, but the response to them reveals a public misunderstanding of what science does. Science is not usually a prizefight between theories, with each study a blow struck for one faction or another while a spectating public waits impatiently for one side to stay down, its validity thoroughly discredited by the victorious opposition.

The current hand-wringing over peer review is but a symptom of this underlying issue. Ivan Oransky and Adam Marcus pointed out this week that peer review does not actually do a particularly good job of any of the functions most people assume it does. They call it a “toothless”—if not an altogether useless—watchdog. A former head of the British Medical Journal goes even further, calling peer review a “sacred cow” badly in need of slaying. Others have responded to this supposed crisis by calling for a renewed emphasis on replicating studies. An over-emphasis on replication, though, carries its own pitfalls: making successful replication a condition of publication not only limits what gets published but sends the wrong signal, reinforcing what social psychologists call the illusion of exact replication. And journals are not alone in preferring certainty (or the appearance of it) to ambiguity; it is public perception that turns a necessarily imperfect system into a false dogma about the reliability of so-called “scientific fact.”

Oransky and Marcus are right about the process of peer review, inasmuch as it cannot do the work of the TSA. Its primary function is not to detect deliberate fraud—which is just as well, because a system relying so heavily on unpaid labor has nothing like the resources to combat willful deception. (It might be worth noting that, all things considered, its detection rate doesn’t seem to be much worse than the TSA’s, though that may not be high praise.) The comparison does raise the possibility of another similarity, though. The TSA’s critics have long pointed out that its primary function is to offer the public a sense of safety through an elaborate performance of “security theater.” Could it be that peer review has an element of theatrical reassurance as well?[1]

The truth is that total security may not be possible, and the truth about truth is that it is rarely as gratifyingly conclusive as many of us like to assume. Brenda Maddox, widow of longtime Nature editor John Maddox, recalls the time her husband was asked, “How much of what you print [in your magazine] is wrong?” His immediate reply: “All of it. That’s what science is about—new knowledge constantly arriving to correct the old.” Most scientists are keenly aware that almost all knowledge is provisional; public deification of some infallible abstraction called “science” does a disservice to real science. What’s needed is not only more and better scientific studies, but also a renewed understanding of how knowledge is built.

The problem of ambiguity is old news to humanists. As literary critic Terry Eagleton has written, even “[t]he past itself is alterable, since the future casts it in a new light. Whether John Milton belonged to a species which ended up destroying itself is up to us and our progeny. The future possibilities of Hamlet are part of the play’s meaning, even though they may never be realised. One of the finest English novels, Samuel Richardson’s 18th-century masterpiece Clarissa, became newly readable in the light of the 20th-century women’s movement.” In short, things change. The way we interpret things changes. What we call “facts” change. Knowledge is cumulative. And that’s okay.

It is understandable that scientists would want to avoid acknowledging the provisional nature of hard-won knowledge. For one thing, avowals of uncertainty within one’s own field can be risky, providing fodder for deniers of evolution and climate change and their ilk. But epistemological humility is not a trait to be used against scientists; it is the cornerstone of curiosity and progress. One benefit of a more serious emphasis on humanities education is that it could improve public scientific literacy and understanding.

So, what’s the solution to the current round of “crisis”? As Oransky and Marcus argue, evaluation of new findings should continue well beyond peer-reviewed publication, in venues like PubMed Commons and PubPeer. But we would also do well to realize how much of this “crisis in science” is actually a crisis of blind faith in science. Like taking off our shoes in the airport security line, idolatry of so-called “scientific fact” may feel oddly reassuring, but it neglects what scientists already know: certainty is usually conditional. The scientific method rarely closes the book on old questions; it merely writes a new page.



Footnotes

[1] It is interesting to observe that peer-reviewed science is a bulwark even the TSA itself relies on. In 2010, the TSA responded to criticism of a behavioral observation screening technique by promising more peer-reviewed investigation into its efficacy.