Evaluating the impact of ideation and actualization of multidisciplinary research

Global events in the past year have brought renewed urgency to a long-standing debate on the definition and suitability of impact and novelty as criteria for publication in selective journals. Reflecting on this issue, Prof Andrea Armani and Prof Jerry Lee argue that rigor and reproducibility are, in fact, more crucial.

Historically, publication in scientific journals was the only way to share notable findings and encourage debate in the field. In many cases, reviewers not only read, but replicated, results before approving manuscripts for publication. Since that time, this standard has gradually eroded, in part due to the rising cost of replicating scientific work.
In parallel, the number of journals and the pressure to publish in high impact factor journals to ensure career advancement have rapidly increased. Combined, these changes represent a tectonic shift in the role of scientific publishing, transforming the landscape from a method of disseminating research into a news outlet. With this transformation came an increased emphasis on newsworthiness, hidden under the guise of "impact." As a result, in many journals, researchers and peer reviewers are asked to shift their focus from evaluating scientific quality and accuracy to predicting potential impact when making recommendations for acceptance. This change has had a negative ripple effect throughout the scientific research community.
Because publication record influences funding, promotion, and award decisions, a researcher's career trajectory is altered based on a reviewer's impression of impact. Even with the current system's reliance on three or more reviewers to balance interpretations of impact, negative comments are often given disproportionate weight. Given the "publish or perish" environment in academia, these decisions send a very clear message to younger scholars just launching their careers about what research is valued. As a result, early career researchers lean towards pursuing topics that are perceived as "high impact" or fashionable in order to secure a strong career as measured in citations and publications. While understandable, this choice comes at the cost of pursuing hypothesis-driven, basic science research or more applied development efforts that emphasize societal benefit and reproducibility.
This contradiction was recently brought to the forefront by the National Institutes of Health (NIH). Prompted by the inability of biomedical researchers to reproduce published work, the NIH implemented drastic changes in 2015 to enhance the rigor and reproducibility of funded work. As a result, all applicants must describe the rigor of the prior work that their new proposals will extend, and reviewers are asked to evaluate the scientific premise and scientific rigor when assessing overall impact. This push for increased reproducibility from funding agencies is being directly countered by the high impact factor journals, which tend to favor attention-grabbing "hero" measurements. We also need to find the right balance to incentivize entrepreneurship and commercialization of initial discoveries, so that societal impact can be realized and not just speculated.

Rewarding reproducibility
Academics are typically rewarded with high impact factor publications and with journal covers for discovering something new, something that has never been done before. While being "the first" is a great achievement, reducing a discovery to practice and repeatedly executing it in a real-world environment is equally, if not more, challenging. In the current publishing environment, not only is this type of success not similarly recognized, it is viewed negatively by traditional academic and publishing institutions. However, a 2018 meta-analysis across fields and journals revealed that higher impact factor journals not only had higher retraction rates but also lower reliability of results, in part due to poor scientific methods [1]. Therefore, replication of results with carefully designed studies serves a critical role in the scientific process. Moreover, without the ability to reliably reproduce a finding, a discovery cannot be translated or used by society; thus, its "impact" is fundamentally limited.
As a result, there is a fundamental disconnect between how funding agencies and publishers define impact, yet both uniformly require impact for success. Due to this mismatch of expectations, junior investigators receiving feedback might be unintentionally discouraged from pursuing the research development efforts that are critically needed to validate and translate technologies. This is particularly common in interdisciplinary research, where adapting a novel technology, originally created to probe known biological alterations in disease, for practical use can be viewed as incremental rather than impactful. The first step in changing this perception is recognizing that reproducibility is a fundamental criterion for impact.
Discussions about the role of publishers in overseeing the scientific rigor of submitted manuscripts have been percolating for some time, and several journals, including the Nature journals, have created submission checklists that require authors to indicate whether the included results were replicated. However, the details and statistics required to complete these forms are minimal, and the completed checklists are frequently not provided to the reviewers. Moreover, given the logistics of providing raw datasets to reviewers and the time required for independent analysis, even in journals that require checklists, the reproducibility and rigor of the findings may still be challenging to determine. We need to re-evaluate the proper mesh size of impact filtration to avoid inundating the system and breaking the "evidence pipeline" [2].
In acknowledgment of the limitations of impact as a qualitative descriptor, recent works have attempted to create predictive models of impact to help guide both reviewers and funding agencies. One proposed strategy is a machine learning approach that relies on retrospective analyses of past successes [3]. However, in science and engineering, surprising breakthroughs are the foundation of innovation. Therefore, an approach firmly anchored in the past is fundamentally limited in its ability to predict the unpredictable.

Conclusions
With new metrics relying on social media counts and sound bites from news outlets, we have inadvertently, over time, shifted the focus of the review system: instead of rigorously reviewing the scientific evidence presented in submissions, we ask reviewers, who themselves have limited bandwidth, to predict whether or not a publication might go viral. One alternative is to have researchers indicate the readiness of their contributions for translation; in other words, whether a submission should be viewed as an "initial discovery" or as a "discovery translation." This approach was demonstrated in 2014 to raise awareness of the maturation of proteomic approaches into targeted assays for biomedical use [4]. Such recognition of the importance of technology development would mirror ongoing changes in the political funding landscape, such as the reorganization of the US National Science Foundation into the National Science and Technology Foundation [5].
If we continue to reward sparkle over substance, the next generation of researchers may become a generation of scientific "content creators" and strive to be "influencers" at the cost of performing rigorous science or applying knowledge to create true societal impact.