To the Editor — Reproducibility and transparency are key issues in any scientific field, and nanobiotechnology and nanomedicine (nanobiomed) are no exception. However, a mandatory reporting checklist as a requirement for publication is a delicate matter, one that should be discussed and evaluated by the scientific community.
Here, we examined the current status of manuscripts in the field with regard to the minimum information reporting in bio–nano experimental literature (MIRIBEL) criteria to identify existing trends. To that end, we selected 100 manuscripts published in 2018 in multidisciplinary journals that impact the field of nanobiomed.
Each manuscript was evaluated and labelled according to two major categories: application-driven and technology-driven. Each major category was then subdivided into scientific fields: application-driven into oncology, infectious disease and cardiovascular disease; technology-driven into biomaterials, gene therapy and theranostics. It should be noted that these categories were defined after data mining — that is, once the manuscripts were selected, we labelled them according to the major scientific field that they cover (detailed in the Methods section). The selected manuscripts were also divided according to the research experience of the corresponding author, defined as the number of years since the senior author’s first publication as last author, which usually corresponds to their first independent research paper: early (less than 12 years), intermediate (between 12 and 20 years) and late (more than 20 years). These categories comprised 22, 49 and 29 papers, respectively. We then manually scored each manuscript against all of the individual parameters of the MIRIBEL checklist in its three categories: material physico-chemical characterization, biological characterization and experimental details. Scores were normalized to obtain, for each category separately, the percentage of parameters featured in each manuscript. We note that the score takes into account only those criteria that are relevant and feasible for the specific research.
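The normalization described above can be sketched as follows. All criterion names and values here are hypothetical; the point is that criteria judged not relevant or not feasible for a given study are excluded from the denominator before computing the percentage:

```python
def normalized_score(criteria):
    """criteria: dict mapping a checklist criterion to True (reported),
    False (not reported) or None (not relevant/feasible for this study,
    so excluded from the denominator)."""
    applicable = {k: v for k, v in criteria.items() if v is not None}
    if not applicable:
        return None  # no applicable criteria in this category
    return 100.0 * sum(applicable.values()) / len(applicable)

# Hypothetical physico-chemical checklist for one manuscript:
physchem = {
    "size": True,
    "shape": True,
    "zeta_potential": False,
    "drug_loading": None,  # not applicable (e.g. no drug cargo)
}
# 2 of 3 applicable criteria met
print(normalized_score(physchem))
```

The same function would be applied independently per MIRIBEL category (material, biological, experimental) to yield the per-manuscript percentages used in the analysis.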
By projecting these scores on a three-dimensional (3D) scatterplot, we show that most of the manuscripts scored above 50% in all categories, which might stem from the fact that they were selected from rigorous journals. However, only a handful of manuscripts reached the top 30% of the requirements in all three categories, and those that came close to this ‘bar’ are from oncology, gene therapy or theranostics (Fig. 1a). This suggests that fulfilling all criteria may be challenging and field dependent.
Interestingly, there is no clear bias toward the application-driven or technology-driven fields in terms of the composition of criteria (Fig. 1b). However, when examining the specific scientific fields, we see that within the application-driven category, oncology scored higher than cardiovascular-related studies, and within the technology-driven category, theranostics scored higher than gene-therapy-related studies. This might suggest that these fields are either more mature or more highly funded and, therefore, meet more rigorous criteria.
Last, we examined the distribution of scores according to research experience and found that, for the biological criteria, the ‘early’ group scored low compared with the ‘intermediate’ and ‘late’ groups (p = 0.056, Fig. 1c). Research experience can reflect the amount of funding received, but it also covers other parameters, such as the researcher’s collaborative network, their interdisciplinary capacity (in terms of personnel) and their long-term experience in exploiting a diversity of methodologies.
Together, these trends suggest that instead of requiring that all criteria be met, a reasonable percentage, based on the median of the specific field, should be achieved. In any case, the criteria should be implemented gradually to allow the community, as well as decision makers, to accommodate and make continuous adjustments to the criteria based on incoming information. Finally, it is important to ensure that in the process of making better science, we do not miss the next research breakthrough from a young faculty member who did not have the means or the necessary collaborations to meet all of the criteria.
Manuscript database generation and curation
The data pool was collected from the Web of Science Core Collection via a manual electronic search using the ISI Web of Knowledge advanced search tool. A search by topic was first carried out using “nanoparticle” as the search term within the Web of Science categories, restricted to a single year (2018), as we assumed that the field advances rapidly and the search criteria should be timely and relevant to the technological state of the art. After reviewing the database outputs, the search was extended to abstracts and further refined with the queries “nanoparticle AND efficacy”, “nanoparticle AND in vivo” and “nanoparticle AND in vitro” to narrow the research focus within the broad field of nanobiomed. In the few cases in which this multidisciplinary definition led to references that crossed field boundaries, a decision on where a paper belonged was made on the basis of our personal view of the dominant field. To remove possible bias related to different requirements for publication, original papers from high-impact journals were further extracted. In this context, the abstracts of the 572 papers that matched these search parameters were gathered and comparatively analysed. Several criteria were specified to select the 100 studies for inclusion in our analysis. The inclusion criteria were: published between January and December 2018; includes an abstract; is an original research paper within the nanobiomed field; and reports data within the three categories under discussion in the MIRIBEL paper, that is, material characterization, biological characterization and experimental details. The 100 papers were subsequently selected following cross-analysis and comparative evaluation for being simultaneously relevant to the three MIRIBEL categories and to the analysis of general research patterns.
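As a rough sketch, the inclusion criteria above can be expressed as a filter over search-export records. All field names and record contents here are assumptions for illustration, not the actual export format:

```python
from datetime import date

def include(record):
    """Apply the inclusion criteria to one hypothetical search record."""
    return (
        date(2018, 1, 1) <= record["published"] <= date(2018, 12, 31)
        and bool(record.get("abstract"))                 # has an abstract
        and record.get("type") == "original research"    # not a review etc.
        and all(record.get(c) for c in (                 # reports all three
            "material_characterization",                 # MIRIBEL categories
            "biological_characterization",
            "experimental_details",
        ))
    )

records = [
    {"published": date(2018, 6, 1), "abstract": "...",
     "type": "original research",
     "material_characterization": True,
     "biological_characterization": True,
     "experimental_details": True},
    {"published": date(2019, 2, 1), "abstract": "...",  # outside 2018 window
     "type": "original research",
     "material_characterization": True,
     "biological_characterization": True,
     "experimental_details": True},
]
selected = [r for r in records if include(r)]
print(len(selected))  # only the first record passes
```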
To determine whether there is a bias towards the application-driven or technology-driven category, we performed Welch’s two-sample t-test. To determine whether there is a significant difference between the average criteria scores in the three categories based on scientific field, we applied a one-way analysis of variance (ANOVA), followed by Tukey’s honest significant difference (Tukey HSD) test for multiple pairwise comparisons between group means. To determine whether there is a significant difference between the average criteria scores in the three categories based on research experience, we again applied a one-way ANOVA followed by Tukey HSD.
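For illustration, the underlying test statistics can be computed in pure Python on hypothetical scores; in practice the p-values and the Tukey HSD comparisons would come from a statistics package such as scipy.stats or statsmodels:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (no equal-variance assumption)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

def anova_f(*groups):
    """One-way ANOVA F statistic: between- vs within-group variance."""
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = len(groups) - 1, n - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical normalized scores (%) for two categories and a third group:
app = [72.0, 65.0, 80.0, 70.0]
tech = [68.0, 75.0, 71.0, 66.0]
early = [60.0, 64.0, 58.0]
print(round(welch_t(app, tech), 3))
print(round(anova_f(app, tech, early), 3))
```

A significance threshold would then be read from the t- and F-distributions with the appropriate degrees of freedom, which is what the library routines handle.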
Cite this article
Florindo, H.F., Madi, A. & Satchi-Fainaro, R. Challenges in the implementation of MIRIBEL criteria on nanobiomed manuscripts. Nat. Nanotechnol. 14, 627–628 (2019). https://doi.org/10.1038/s41565-019-0498-7