Scientific publishing and the prevailing “publish or perish” paradigm within academia have long been considered integral to the advancement of knowledge and the progression of academic careers. Recently, however, both systems have come under increasing scrutiny for their shortcomings and their adverse effects on professional integrity, academic careers, and the broader scientific community, as seen in high-profile cases involving the Stanford University President and scientists at the Dana-Farber Cancer Institute, and in the growing number of retracted papers.
The dissemination of research is the very foundation of the advancement of civilization, yet the formal publication process has become a disservice to the research community. A major revamp is needed to address the issues plaguing it.
While the peer review process is, in theory, essential for maintaining quality standards, in practice it is often marred by biases, inconsistencies, sloppiness, and, most importantly, unreasonable delays that lead to unreliable and sometimes outdated outcomes that hinder scientific progress. Even as rapid developments in areas like generative AI are tracked in days and hours, the review of articles in those same or similar areas takes several months. Uploading preprints is a weak workaround because preprints do not carry the same quality guarantees as published articles.
A primary reason for these delays is the voluntary nature of reviews. Some journals expect reviewers to all but collaborate with the authors, revision after revision, to spruce up manuscripts. Some journals hold reviewers accountable by listing their names on the publication, yet do not pay them for this extraordinary service. Recently, an article that I had rejected twice for poor quality was sent to me for a third review because one or more other reviewers had suggested revisions instead of rejection.
The current scientific publishing model also lacks accessibility and transparency. Scientific literature is often locked behind paywalls, making it inaccessible to the general public and even to researchers at institutions without subscription access. This restricts the dissemination of knowledge and impedes scientific progress, as researchers cannot build upon existing findings without access to the full breadth of literature in their field, even in this fast-paced era of the Internet and AI.
The current open-access model, in which the author pays for publication, is no panacea either, because it restricts the publication channels available to authors with budget constraints. Ironically, authors must pay instead of being remunerated for contributing their intellectual property.
A better business model may be for publishers to monetize authors’ intellectual property rights, in addition to the subscription model, and use portions of the revenue to remunerate reviewers, and possibly authors as well.
The “publish or perish” paradigm in academia complicates these issues further. Scholars are under constant pressure to publish frequently in high-impact journals to secure funding, tenure, and career advancement.
This fosters a culture of hyper-competition and invites academic misconduct when researchers prioritize publication metrics over the rigor and integrity of their work. For instance, in a recently published book chapter, I discussed how some highly cited research papers claimed up to 100% accuracy at detecting misinformation, yet the problem of misinformation is nowhere near solved.
The business model of journal publishing needs to evolve to support open-access initiatives. The peer review process must be reformed to reward the reviewers and enhance speed, transparency, accountability, and fairness. Open peer reviews, where the review process and reviewers’ identities are disclosed, can certainly help mitigate biases and improve the quality of evaluations, but the increased responsibility on the reviewers must be suitably compensated.
I had to undergo training before serving as a judge at a high school science fair. Unfortunately, reviewers of highly impactful research receive no such preparation. Establishing clear guidelines and standards for peer review, along with providing training and support for reviewers, can go a long way toward ensuring consistency and integrity in the process.
Along with my students, I recently submitted a research paper on how language models, particularly those underlying generative AI, can be used for peer review of research papers. The paper has been under review since July 2023. To reduce turnaround time, technology must be used to support the review process, particularly in this era of generative AI, as we discussed in a publisher-sponsored webinar. There are ways to address or work around the biases and other issues that AI models come with.
The academic evaluation system must move away from overreliance on publication metrics and embrace a more utilitarian approach to assessing research impact and quality. A holistic approach could involve incorporating diverse metrics such as reproducibility, data sharing, outreach, and societal impact into academic evaluations, thereby incentivizing responsible and impactful research practices.
The current scientific publishing model and the “publish or perish” paradigm are overdue for a major overhaul to address their inherent flaws and promote a more robust, transparent, and equitable research ecosystem.
By evolving the business model for open-access publishing, reforming peer review practices, and adopting holistic evaluation metrics, we can foster a culture of scientific integrity, professionalism, and innovation that benefits researchers, institutions, and society as a whole. It’s time to reimagine the way we conduct, evaluate, and disseminate scientific research for the betterment of science and humanity, particularly in the age of Generative AI, which is expanding technological horizons by the day.