Academic science’s quality filter emanates authority but means little
By Olavo Amaral
The original article was published on Folha de S.Paulo.
You can bet that in any discussion of scientific data, sooner or later someone will invoke the “peer-reviewed article” argument, either to give credence to a statement or to discredit it, if such a review did not take place.
Peer review—approval by independent researchers before the publication of an article—has been considered a bastion of scientific research for decades (or more than a century, depending on the field), and for many, it defines what is considered “science” and what is not.
In an iconic image from the March for Science in Washington in 2017, a sign reading “In peer review we trust”, an allusion to “In God we trust”, was seen in front of the Capitol. The substitution, however, is equivalent to exchanging one dogmatic belief for another.
“Peer-reviewed”, after all, only means that some people—usually two or three—analyzed an article and saw no reason to deny its publication. As the process usually occurs behind closed doors, we do not know who these people are, what opinions they expressed, or what they bothered to verify.
Aside from this, reviewers are not usually trained for the task, are given no direction on what to review and are not paid or rewarded for their work, thus having little support or encouragement as they undertake a review. It is not surprising that the agreement between different reviewers is minimal and sometimes borders on random.
As if that were not enough, reviewers act only at the end of the scientific process, when problems in data collection are impossible to fix. Worse still, reviewers work based on the authors’ report and generally do not have access to the original data, which prevents them from detecting most errors and omissions that may occur throughout a study.
If none of this makes you suspect that something is wrong, imagine applying the same logic in other areas. If an airline told you that it delegates its quality control to two or three experts who examine a few pages of a report on the manufacturing of a plane that has already been built, would you board it?
The confidence of the scientific community in peer review is even more disconcerting given the scarce evidence of its impact on the scientific literature. Comparisons between preprints—articles posted before peer review—and their reviewed versions show that the differences in quality are small and that both the text and the main conclusions rarely change.
Regarding the filtering function, the failure of the system is even more striking. Nonsensical articles, with gross errors or preposterous conclusions, written as jokes, invariably end up being accepted somewhere. The problem is aggravated by the so-called “predatory journals”—journals that charge per publication and maximize their profits through a lack of rigor.
The COVID-19 pandemic is replete with examples of the weakness of the system. Supposedly peer-reviewed journals published far-fetched theories such as 5G technology being able to produce SARS-CoV-2. Meanwhile, journals with editors affiliated with Didier Raoult’s Institut Hospitalo-Universitaire Méditerranée Infection have become a biased showcase of studies advocating the use of hydroxychloroquine.
It would be easy to attribute the problem to low-quality journals, but the most notorious pandemic scandal hit The Lancet and the New England Journal of Medicine, the most respected medical journals in the world, which were forced to retract articles with data suspected of being fabricated by the company Surgisphere.
This is not surprising: although traditional journals tend to be more selective in accepting articles, there is nothing different about their review processes. In addition, the pressure to publish in these journals may encourage scientists to sugarcoat data to make their results more attractive. Thus, using “impact factor” as a quality criterion does not solve the problem: visibility and reliability are different things, after all.
In the Surgisphere situation, critics were quick to point out culprits, such as editors’ bias and reviewers’ haste. In truth, however, the review system itself is responsible, and that system, without access to the data or the process by which they were obtained, does not have the ability to identify well-devised frauds.
If peer review does not serve as a yardstick, what can be considered “scientifically supported”? The best answer, somewhat tautological, may be “scientific consensus”. However, identifying this is not always obvious. The positions of scientific institutions and societies are an approximation of it, but they have their political side—which, in cases such as that of Brazilian medical associations, often flirts with syndicalism—and are far from bias-free.
The truth is that we do not have efficient institutional ways of defining what reliable science is, representing a huge gap in public debate. This is evident in the headache of fact-checking agencies that deal with the dozens of articles for and against early treatment for COVID-19, a question too complicated to be summarized as “#fact” or “#fake”.
There is much to do, therefore, to develop a hallmark that goes beyond the “peer-reviewed” seal. This will only be achieved if we overcome the belief that two or three reviewers examining a PDF is enough to assess the quality of a complex process such as scientific research.
Examples of success abound: audits, certifications and standard procedures are part of the routine of airports, hospitals and public infrastructure, and it is unclear why they are so rare in academic institutions. Even projects such as Wikipedia have more elaborate and robust review and correction processes than the anemic and opaque peer review of scientific articles.
Without better controls, academic research will remain vulnerable to fraud, errors and biases, fueling quackery with the “scientifically supported” seal. This is simply the natural consequence of believing in a process in which no one sees what is being done. As in the children’s story, the emperor is naked under the invisible clothes of peer review, and sometimes it takes a child, or a pandemic, to force us to admit it.