The past year witnessed engineers and scientists working together to develop and manufacture new diagnostics, therapeutics, and vaccines. These inventions were not demonstrated once or twice in proof-of-concept experiments, but thousands, even millions, of times in homes and hospitals globally. This work became the embodiment of impact as researchers posted findings on blogs, social media, and pre-print servers. However, this approach largely sidestepped the peer-review process. In an age where fiction is often presented as fact, the lack of peer review of technical advances and their limited distribution to the broader technical community are significant concerns. Ultimately, an over-reliance on pre-print servers, compounded by rapid distribution via social media platforms, limits scrutiny of the research’s rigor and risks premature acceptance of the findings.

However, not all research is published in peer-reviewed journals; first, it must satisfy a journal’s publication criteria. Nearly every journal states that it publishes “high impact and novel” research. But how are novelty and impact defined? At a recent COVID-19 Workshop at the Conference on Lasers and Electro-Optics (CLEO), panelists noted that, anecdotally, hundreds of manuscripts relevant to the global COVID-19 pandemic were submitted to journals and desk-rejected with the rationale that the science was not “novel.” It is therefore reasonable to ask: what is meant by “high impact” research?

Perhaps it is time to re-evaluate how we, both as contributing academics and as volunteer peer-reviewers, define impact and novelty as criteria for publishing. At a fundamental level, the concept of impact is subject to significant bias because the word is poorly defined. While complementary discussions have been percolating for over a decade regarding the merits of using h-indices and journal impact factors as evaluation criteria, similar discussions have not occurred on this critical topic. Are journals evaluating impact on a technical field or on society at large? Or the potential market cap of an eventual technology? And what is the timescale for impact: immediate or long-term? Given these ambiguities, should “impact” really be a criterion for publication in a scientific journal?

Defining impact

Whether in publishing or in proposal evaluation, it is well established that the peer-review process is flawed. Typically, however, we focus on the reviewers as individuals who are subject to bias. But what about the review system itself? In the 1600s and 1700s, publication in a scientific journal was an indication of the technical accuracy of a research finding, and it was the only way to share notable results and encourage debate in the field. In many cases, reviewers not only read, but replicated, results before approving manuscripts for publication. Since then, this standard has gradually eroded, in part due to the increasing cost of replicating scientific work.

In parallel, the number of journals and the pressure to publish in high impact factor journals to ensure career advancement have rapidly increased. Combined, these changes represent a tectonic shift in the role of scientific publishing, transforming the landscape from a means of disseminating research into a news outlet. With this transformation came an increased emphasis on newsworthiness, hidden under the guise of “impact.” As a result, in many journals, researchers and peer-reviewers are asked to shift their focus from evaluating scientific quality and accuracy to predicting potential impact when making recommendations for acceptance. This change has had a negative ripple effect throughout the scientific research community.

Because publication record influences funding, promotion, and award decisions, a researcher’s career trajectory is altered based on a reviewer’s impression of impact. Even with the current system’s reliance on three or more reviewers to balance interpretations of impact, negative comments are often given more weight. Given the “publish or perish” environment in academia, these decisions send a very clear message to younger scholars just launching their careers about which research is valued. As a result, early career researchers lean towards topics that are perceived as “high impact” or fashionable in order to secure a strong career as measured in citations and publications. While understandable, this choice comes at the cost of pursuing hypothesis-driven, basic science research or more applied development efforts that emphasize societal benefit and reproducibility.

This contradiction was recently brought to the forefront by the National Institutes of Health (NIH). Prompted by the inability of biomedical researchers to reproduce published work, the NIH implemented drastic changes in 2015 to enhance the rigor and reproducibility of funded work. As a result, all applicants must describe the rigor of the prior work that their new proposals will extend, and reviewers are asked to evaluate the scientific premise and scientific rigor when assessing overall impact. This push from funding agencies for increased reproducibility is directly countered by high impact factor journals, which tend to favor attention-grabbing “hero” measurements. We also need to find the right balance to incentivize entrepreneurship and commercialization of initial discoveries so that societal impact can be realized and not merely speculated.

Rewarding reproducibility

Academics are typically rewarded with high impact factor publications and journal covers for discovering something new, something that has never been done before. While being “the first” is a great achievement, reducing a discovery to practice and repeatedly executing it in a real-world environment is equally, if not more, challenging. In the current publishing environment, not only does this type of success go unrecognized, it is viewed negatively by traditional academic and publishing institutions. However, a 2018 meta-analysis across fields and journals revealed that higher impact factor journals not only had higher retraction rates but also less reliable results, in part due to poor scientific methods1. Replication of results with carefully designed studies therefore serves a critical role in the scientific process. Moreover, without the ability to reliably reproduce a finding, a discovery cannot be translated or used by society; thus, its “impact” is fundamentally limited.

As a result, there is a fundamental disconnect between how funding agencies and publishers define impact, yet both uniformly require impact for success. Due to this mismatch of expectations, junior investigators receiving such feedback might be unintentionally discouraged from pursuing the research and development efforts critically needed to validate and translate technologies. This is particularly common in interdisciplinary research, where adapting a novel technology that targets known biological alterations in disease for use in practice can be viewed as incremental and not impactful. The first step in changing this perception is recognizing that reproducibility is a fundamental criterion for impact.

Discussions about the role of publishers in overseeing the scientific rigor of submitted manuscripts have been percolating for some time, and several journals, including the Nature journals, have created submission checklists that require authors to indicate whether the included results were replicated. However, the details and statistics required to complete these forms are minimal, and the completed checklists are frequently not provided to the reviewers. Moreover, given the logistics of providing raw datasets to reviewers and the time required for independent analysis, even in journals that require checklists, the reproducibility and rigor of the findings may still be challenging to determine. We need to re-evaluate the proper mesh size of impact filtration to avoid inundating the system and breaking the “evidence pipeline”2.

In acknowledgment of the limitations of this qualitative descriptor, recent works have attempted to create predictive models of impact to help guide both reviewers and funding agencies. One proposed strategy is a machine learning approach that relies on retrospective analyses of past successes3. However, in science and engineering, surprising breakthroughs are the foundation of innovation. An approach firmly anchored in the past is therefore fundamentally limited in its ability to predict the unpredictable.

Conclusions

With new metrics relying on social media counts and sound bites from news outlets, we have inadvertently, over time, shifted the focus of the review system from rigorously reviewing the scientific evidence presented in submissions to asking reviewers, who themselves have limited bandwidth, to predict whether a publication might go viral. One alternative is to have researchers indicate the readiness of their contributions for translation: in other words, whether a submission should be viewed as an “initial discovery” or as a “discovery translation.” An example of this approach appeared in 2014, raising awareness of the need to mature proteomic approaches into targeted assays for biomedical use4. Recognizing the importance of technology development in this way would mirror ongoing changes in the political funding landscape, such as the proposed reorganization of the US National Science Foundation into the National Science and Technology Foundation5.

If we continue to reward sparkle over substance, the next generation of researchers may become a generation of scientific “content creators” and strive to be “influencers” at the cost of performing rigorous science or applying knowledge to create true societal impact.