Main

Measuring performance and progress in any given field critically depends on the availability of meaningful outcome metrics. In domains such as athletics, this process is straightforward because the performance measurements (for example, the time it takes an athlete to run a given distance) exactly reflect the underlying interest (for example, which athlete runs a given distance the fastest?). In image analysis, the situation is much more complex. Depending on the underlying research question, vastly different aspects of an algorithm’s performance might be of interest (Fig. 1) and meaningful in determining its future practical applicability, particularly in clinical settings. If the performance of an image analysis algorithm is not measured using relevant validation metrics, no reliable statement can be made about how suitable the algorithm is for solving the proposed task, and the algorithm is unlikely to ever be used in the real world. Moreover, unsuitable algorithms could be wrongly regarded as the best-performing ones, sparking entirely futile resource investment and follow-up research while obscuring true scientific advancements. In determining new state-of-the-art methods and informing future directions, the use of validation metrics actively shapes the evolution of research. In summary, validation metrics are key for both measuring and informing scientific progress, as well as bridging the current chasm between image analysis research and its translation into practice.

Fig. 1: Examples of metric-related pitfalls in image analysis validation.

a, An example of medical image analysis. Voxel-based metrics are not appropriate for detection problems. Measuring the voxel-level performance of a prediction yields a near-perfect sensitivity. However, the sensitivity at the instance level reveals that lesions are actually missed by the algorithm. Green metric values indicate good performance, whereas red values indicate poor performance. Green check marks indicate desirable metric behavior; red crosses indicate undesirable behavior. b, An example of biological image analysis. The task of predicting fibrillarin in the dense fibrillar component of the nucleolus should be phrased as a segmentation task, for which segmentation metrics reveal the low quality of the prediction. Phrasing the task as image reconstruction instead and validating it using metrics such as the Pearson correlation coefficient yields misleadingly high metric scores12,35,36,37,38.

In image analysis, for some applications, it might be sufficient to draw a box around the structure of interest (for example, detecting individual mitotic cells or regions with apoptotic cell debris) and, optionally, to associate that region with a classification (for example, distinguishing a mitotic versus an interphase cell); however, other applications (for example, cell tracing for fluorescent signal quantification) could require the exact structure boundaries to be determined. The suitability of any validation metric thus hinges on the properties of the driving problem. As a result, a large number of metrics have been proposed in the image-processing field. In our previous work, we analyzed all biomedical-image-analysis competitions conducted within a period of about 15 years1. We found 97 metrics reported in the field of biomedicine alone, each with its own individual strengths, weaknesses and limitations and hence varying degrees of suitability for meaningfully measuring algorithm performance on a given research problem. The vast range of options makes tracking all related information impossible for any individual, and consequently renders the process of metric selection prone to error. Thus, it is not surprising that researchers often rely on flawed validation practices that have been traditionally used in the literature. To make matters worse, there is currently no comprehensive resource providing an overview of the relevant definitions, (mathematical) properties, limitations and pitfalls for a metric of interest. Although taking into account the individual properties and limitations of metrics is imperative for choosing suitable validation metrics, the required knowledge is largely inaccessible.

As a result, numerous flaws and pitfalls are prevalent in image analysis validation, and researchers are often unaware of them owing to a lack of knowledge of intricate metric properties and limitations. Accordingly, increasing evidence shows that metrics are often selected inadequately in image analysis (for example, refs. 2,3,4). In the absence of a central information resource, it is common for researchers to resort to popular validation metrics; however, these can be entirely unsuitable, for instance owing to a mismatch of the metric’s inherent mathematical properties with the underlying research question and specifications of the dataset at hand (see Fig. 1).

The present work addresses this important roadblock in image analysis research with a crowdsourcing-based approach involving both a Delphi process undertaken by a multidisciplinary consortium of experts and a social-media campaign. It represents a comprehensive collection, visualization and detailed discussion of pitfalls, drawbacks and limitations regarding validation metrics that are commonly used in image analysis. Our work provides researchers with a reliable, single point of access to this critical information. Owing to the enormous complexity of the matter, the metric properties and pitfalls are discussed in the specific context of classification problems, that is, image analysis problems that can be considered classification tasks at either the image, object or pixel level. Specifically, these encompass four problem categories: image-level classification, semantic segmentation, object detection and instance segmentation. Our contribution includes a dedicated profile for each metric (Supplementary Note 3), as well as the creation of a new common taxonomy that categorizes pitfalls in a domain-agnostic manner (Fig. 2). The taxonomy is depicted for individual metrics in the tables provided (Extended Data Tables 1–5) and enables researchers to quickly grasp whether a certain metric comes with pitfalls in a given use case.

Fig. 2: Overview of the taxonomy for metric-related pitfalls.

Pitfalls can be grouped into three main categories: P1, pitfalls related to the inadequate choice of the problem category; P2, pitfalls related to poor metric selection; and P3, pitfalls related to poor metric application. P2 and P3 are further split into subcategories. For all categories, pitfall sources are presented (turquoise), with references to corresponding illustrations of representative examples. The order in which the pitfall sources are presented does not correlate with importance.

Although our work grew out of image analysis research and practice in the field of biomedicine, which is highly complex and particularly high stakes owing to its direct impact on patient health, we believe the identified pitfalls are transferable to other application areas of imaging research. It should be noted that this work focuses on identifying, categorizing and illustrating metric pitfalls; the sister publication of this work gives specific recommendations on which metrics to use in certain circumstances5.

Information on metric pitfalls is largely inaccessible

Researchers and algorithm developers seeking to validate image analysis algorithms often face the problem of choosing adequate validation metrics while navigating a range of potential pitfalls. Common practice is often not the best option, as demonstrated by several recent publications1,2,3,4. Making an informed decision is notably complicated by the absence of any comprehensive databases or reviews covering the topic, and thus the lack of a central resource for reliable information on validation metrics.

The lack of accessibility is recognized as a major obstacle in image analysis validation1. To illustrate this, we searched the literature for information on commonly used validation metrics. The search was conducted on the platform Google Scholar. We used search strings that combined various notations of metric names, including synonyms and acronyms, with terms indicating problems, such as ‘pitfall’ or ‘limitation.’ The mean and median number of hits for the metrics discussed in this work were 159,329 and 22,100, respectively, and ranged from 49 for the centerline dice similarity coefficient (clDice) to 962,000 for sensitivity. Moreover, despite the valuable literature on individual relevant aspects (for example, refs. 3,4,6,7,8,9,10), we did not find a common point of entry for metric-related pitfalls in image analysis, whether in the form of a review paper or another credible source. We conclude that essential knowledge for making informed decisions and avoiding pitfalls related to the use of validation metrics is highly scattered and cannot be easily accessed by individuals.

Established practices are not always justified

To obtain initial insights into current common practice regarding validation metrics, we prospectively captured the designs of challenges organized by the IEEE Society of the International Symposium on Biomedical Imaging, the Medical Image Computing and Computer Assisted Interventions Society and the Medical Imaging with Deep Learning foundation. The organizers were asked to provide their rationale for the choice of metrics used in these competitions. We analyzed 138 competitions held between 2018 and 2022 and found that metrics were frequently (in 24% of the competitions) chosen on the basis of common practices. We also found, however, that common practices are often not well justified, and poor practices may even be passed on across generations.

One remarkable example is the widespread adoption of an incorrect naming and inconsistent mathematical formulation of a metric proposed for cell instance segmentation. The term ‘mean average precision (mAP)’ usually refers to one of the most common metrics in object detection (object-level classification)11,12. Here, precision denotes the positive predictive value (PPV), which is ‘averaged’ over varying thresholds on the predicted class scores of an object detection algorithm. The ‘mean’ average precision (AP) is then obtained by taking the mean over classes12,13. Despite the popularity of mAP, a widely known challenge on cell instance segmentation (https://www.kaggle.com/competitions/data-science-bowl-2018/overview/evaluation) in 2018 introduced a new ‘mean average precision’ term. The task addressed is the same as that of the original mAP, namely object detection, but all terms in the newly proposed metric (mean, average and precision) refer to entirely different concepts. For instance, the common definition of precision from the literature, TP/(TP + FP), was altered to TP/(TP + FP + FN), where TP, FP and FN refer to the cardinalities of the confusion matrix (that is, the true and false positives (TP and FP) and negatives (TN and FN)). The latter formula actually defines the intersection over union (IoU) metric. Despite this problem, the terminology was adopted by subsequent influential works14,15,16,17, indicating its widespread propagation and usage within the community.
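
To make the discrepancy concrete, the following minimal Python sketch (illustrative numbers of our own choosing) contrasts the standard precision with the formula introduced by the challenge, which coincides with the IoU computed from the confusion-matrix cardinalities:

    # Illustrative comparison: standard precision versus the altered
    # 'precision' of the 2018 challenge, which equals the IoU.
    tp, fp, fn = 80, 10, 30  # hypothetical confusion-matrix cardinalities

    precision_standard = tp / (tp + fp)        # PPV: TP / (TP + FP)
    precision_challenge = tp / (tp + fp + fn)  # TP / (TP + FP + FN), that is, the IoU

    print(f"standard precision (PPV): {precision_standard:.2f}")      # 0.89
    print(f"challenge 'precision' (IoU): {precision_challenge:.2f}")  # 0.67

The two quantities can differ considerably, so scores reported under the two definitions are not comparable.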

A multidisciplinary Delphi process reveals numerous pitfalls in biomedical-image-analysis validation

With the aim of creating a comprehensive, reliable collection and future point of access to metric definitions and limitations in biomedical image analysis, we formed an international multidisciplinary consortium comprising 62 experts in various fields related to biomedical image analysis. The consortium engaged in a multistage Delphi process18,19 to facilitate consensus building. The Delphi process involved multiple surveys, crafted by a coordinating team and completed by the remaining members of the consortium. On the basis of the survey results, the list of pitfalls was iteratively refined by collecting pitfall sources, specific feedback and suggestions on pitfalls, and final agreement on which pitfalls to include and how to illustrate them. Further pitfalls were crowdsourced through the publication of a dynamic preprint of this work12, as well as a social-media campaign, both of which asked the scientific community for contributions. This approach allowed us to integrate distributed, cross-domain knowledge on metric-related pitfalls within a single resource. In total, the process identified 37 distinct sources of pitfalls (Fig. 2). Notably, these pitfall sources (for example, class imbalances, uncertainties in the reference or poor image resolution) can occur irrespective of imaging modality or application. As a result, many pitfalls can be generalized across different problem categories in image processing (image-level classification, semantic segmentation, object detection and instance segmentation), as well as imaging modalities and domains. A detailed discussion of all pitfalls can be found in Supplementary Note 2.

A common taxonomy enables domain-agnostic categorization of pitfalls

One of our key objectives was to facilitate information retrieval and provide structure within this vast topic. Specifically, we wanted to enable researchers to quickly identify which metrics are affected by which types of pitfalls. To achieve this, we developed a comprehensive taxonomy that categorizes the different pitfalls in a semantic manner. The taxonomy was created in a domain-agnostic manner to reflect the generalization of pitfalls across different imaging domains and modalities. An overview of the taxonomy is presented in Figure 2, and the relations between the pitfall categories and individual metrics can be found in Extended Data Tables 1–5. We distinguish the following three main categories:

P1: pitfalls related to the inadequate choice of the problem category

A common pitfall occurs when metrics are applied to a problem category that they are not suited for, because they fail to fulfill crucial requirements of that problem category and hence do not reflect the domain interest (Fig. 1). For instance, popular voxel-based metrics, such as the Dice similarity coefficient (DSC) or sensitivity, are widely used in image analysis problems, although they do not fulfill the critical requirement of detecting all objects in a dataset (Fig. 3). In a cancer-monitoring application, they fail to measure disease progression at the instance level, that is, a potential increase in the number of lesions (Fig. 1), which can have serious consequences. For some problems, there may even be no matching problem category at all (Fig. SN 2.2), rendering common metrics inadequate. We present further examples of pitfalls in this category in Supplementary Note 2.1.
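
The following sketch illustrates this effect on synthetic data (mask sizes and values chosen purely for illustration): pixel-level sensitivity remains near-perfect even though two of three lesions are missed entirely.

    import numpy as np
    from scipy import ndimage  # used only to extract connected components (lesion instances)

    # Synthetic reference: one large lesion (400 pixels) and two small lesions (4 pixels each).
    reference = np.zeros((64, 64), dtype=bool)
    reference[10:30, 10:30] = True   # large lesion
    reference[50:52, 50:52] = True   # small lesion 1
    reference[50:52, 60:62] = True   # small lesion 2

    # Prediction captures only the large lesion.
    prediction = np.zeros_like(reference)
    prediction[10:30, 10:30] = True

    # Pixel-level sensitivity: TP / (TP + FN) over all pixels.
    tp = np.logical_and(reference, prediction).sum()
    fn = np.logical_and(reference, ~prediction).sum()
    pixel_sensitivity = tp / (tp + fn)

    # Instance-level recall: fraction of reference lesions hit by the prediction.
    labels, n_lesions = ndimage.label(reference)
    detected = sum(prediction[labels == i].any() for i in range(1, n_lesions + 1))
    instance_recall = detected / n_lesions

    print(f"pixel-level sensitivity: {pixel_sensitivity:.2f}")  # 0.98
    print(f"instance-level recall:   {instance_recall:.2f}")    # 0.33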

P2: pitfalls related to poor metric selection

Pitfalls in this category occur when a validation metric is selected without considering specific properties of the given research problem or method that make this metric unsuitable in the particular context. P2 can be further divided into the following four subcategories:

Fig. 3: P1: pitfalls related to the inadequate choice of the problem category.

The effect of using segmentation metrics for object-detection problems. The pixel-level DSC of a prediction recognizing every structure (Prediction 2) is lower than that of a prediction that only recognizes one of the three structures (Prediction 1).

P2.1: disregarding the domain interest

Several requirements stemming from the domain interest of the underlying research problem frequently clash with particular metric limitations. For example, if there is particular interest in the structure boundaries, it is important to know that overlap-based metrics, such as the DSC, do not take the correctness of an object’s boundaries into account, as shown in Figure 4a. Similar issues may arise if the structure volume (Fig. SN 2.4) or center(line) (Fig. SN 2.5) is of particular interest. Other domain-interest-related properties include an unequal severity of class confusions. This is important, for instance, in an ordinal grading use case, in which the severity of a disease is categorized by different scores. Predicting a low disease severity for a patient who actually has a severe disease should incur substantial penalties, a requirement not met by common classification metrics. An example is provided in Figure 4b. At the pixel level, this property relates to an unequal severity of over- versus undersegmentation. In applications such as radiotherapy, it may be highly relevant whether an algorithm tends to over- or undersegment the target structure. Common overlap-based metrics, however, do not represent over- and undersegmentation equally20. Further pitfalls may occur if confidence awareness (Fig. SN 2.6), comparability across datasets (Fig. SN 2.7) or a cost–benefit analysis (Fig. SN 2.9) is of particular importance, as illustrated in Supplementary Note 2.2.1.
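
A minimal sketch, with an illustrative cost matrix of our own choosing, shows how two predictions with identical accuracy can differ drastically in expected cost once confusing a severe case with a mild one is penalized appropriately:

    import numpy as np

    # Ordinal severity grades: 0 = mild, 1 = moderate, 2 = severe.
    reference = np.array([0, 1, 2])
    prediction_1 = np.array([0, 1, 0])  # grossly underestimates the severe case
    prediction_2 = np.array([0, 2, 2])  # confuses neighbouring grades only

    # Illustrative cost matrix costs[true, predicted]: predicting 'mild' for a
    # truly 'severe' patient carries a much larger penalty than other confusions.
    costs = np.array([[0, 1, 2],
                      [1, 0, 1],
                      [10, 1, 0]])

    for name, pred in [("Prediction 1", prediction_1), ("Prediction 2", prediction_2)]:
        accuracy = np.mean(pred == reference)
        expected_cost = np.mean(costs[reference, pred])
        print(f"{name}: accuracy = {accuracy:.2f}, expected cost = {expected_cost:.2f}")

    # Both predictions reach an accuracy of 0.67, but Prediction 1 incurs an
    # expected cost of 3.33 versus 0.33 for Prediction 2.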

Fig. 4: P2.1: disregard of the domain interest.

a, Importance of structure boundaries. The predictions of two algorithms (Predictions 1 and 2) capture the boundary of the given structure in substantially different ways, but lead to the same DSC owing to the metric’s boundary unawareness. This pitfall is also relevant for other overlap-based metrics such as clDice, pixel-level Fβ score and IoU, as well as localization criteria such as box/approx/mask IoU, center distance, mask IoU > 0, point inside mask/box/approx and intersection over reference. b, Unequal severity of class confusions. When predicting the severity of a disease for three individuals in an ordinal classification problem, Prediction 1 assumes a much lower severity for Patient 3 than is actually observed. This critical issue is overlooked by common metrics (here, accuracy), which register no difference relative to Prediction 2, even though Prediction 2 assesses the severity much better. Metrics with pre-defined weights (here, expected cost (EC)) correctly penalize Prediction 1 much more than Prediction 2. This pitfall is also relevant for other counting metrics, such as BA, Fβ score, positive likelihood ratio (LR+), Matthews correlation coefficient (MCC), net benefit (NB), negative predictive value (NPV), positive predictive value (PPV), sensitivity and specificity.

P2.2: disregard of the properties of the target structures

For problems that require local properties to be captured (object detection, semantic or instance segmentation), the properties of the target structures to be localized and/or segmented may have important implications for the choice of metrics. Here, we distinguish between size-related and shape- and topology-related pitfalls. Common metrics, for example, are sensitive to structure sizes, such that single-pixel differences can hugely impact the metric scores, as shown in Extended Data Figure 1a. Shape- and topology-related pitfalls can arise from the fact that common metrics disregard complex shapes (Extended Data Fig. 1b) or that bounding boxes do not capture the disconnectedness of structures (Fig. SN 2.14). A high variability of structure sizes (Fig. SN 2.11) and overlapping or touching structures (Fig. SN 2.13) can also influence metric values. We present further examples of P2.2 pitfalls in Supplementary Note 2.2.2.
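
A short worked example (structure sizes are illustrative) makes the size dependence of the DSC explicit: a single missed pixel barely affects the score for a large structure but noticeably degrades it for a small one.

    def dsc(intersection, size_reference, size_prediction):
        """Dice similarity coefficient from set cardinalities: 2|A∩B| / (|A| + |B|)."""
        return 2 * intersection / (size_reference + size_prediction)

    # One missed pixel in a 4-pixel structure versus in a 1,600-pixel structure.
    print(f"{dsc(3, 4, 3):.3f}")           # 0.857
    print(f"{dsc(1599, 1600, 1599):.4f}")  # 0.9997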

P2.3: disregard of the properties of the dataset

Several properties of the dataset, such as class imbalances (Fig. 5a), small sample size (Fig. 5b) or reference-annotation quality, can directly affect metric values. Common metrics such as balanced accuracy (BA), for instance, might yield a very high score for a model that predicts many FP samples in an imbalanced setting (see Fig. 5a). When only small test datasets are used, common calibration metrics (which are typically biased estimators) either underestimate or overestimate the true calibration error of a model (Fig. 5b)21. Metric values can also be affected by the quality of the reference annotations (Fig. SN 2.17); spatial outliers in the reference might have a huge impact on distance-based metrics, such as the Hausdorff distance (HD) (Fig. 5c). Additional pitfalls can arise from the occurrence of cases with an empty reference (Extended Data Fig. 2b), causing division-by-zero errors. We present further examples of P2.3 pitfalls in Supplementary Note 2.2.3.
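
The effect of a single spatial outlier on the HD versus the HD95 can be reproduced in a few lines (boundary point sets are synthetic; distances in pixels):

    import numpy as np
    from scipy.spatial.distance import cdist

    # Synthetic boundary point sets; the reference contains one erroneously annotated outlier.
    reference = np.array([[i, 0] for i in range(20)] + [[40, 40]], dtype=float)
    prediction = np.array([[i, 1] for i in range(20)], dtype=float)

    distances = cdist(reference, prediction)
    ref_to_pred = distances.min(axis=1)  # distance of each reference point to the prediction
    pred_to_ref = distances.min(axis=0)

    hd = max(ref_to_pred.max(), pred_to_ref.max())
    hd95 = max(np.percentile(ref_to_pred, 95), np.percentile(pred_to_ref, 95))

    print(f"HD:   {hd:.1f}")    # approximately 44, dominated by the single outlier
    print(f"HD95: {hd95:.1f}")  # 1.0, reflecting the true boundary distance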

Fig. 5: P2.3: disregard of the properties of the dataset.

a, High class imbalance. In the case of underrepresented classes, common metrics may yield misleading values. In the given example, accuracy and BA have a high score despite the large number of FP samples. The class imbalance is uncovered only by metrics considering predictive values (here, MCC). This pitfall is also relevant for other counting and multi-threshold metrics, such as AUROC, EC (depending on the chosen costs), LR+, NB, sensitivity, specificity and weighted Cohen’s kappa. b, Small test set size. The values of the expected calibration error (ECE) depend on the sample size. Even for a simulated perfectly calibrated model, the ECE will be substantially greater than zero for small sample sizes21. c, Imperfect reference standard. A single erroneously annotated pixel can lead to a large decrease in performance, especially in the case of the Hausdorff distance (HD) when applied to small structures. The HD 95th percentile (HD95), however, was designed to deal with spatial outliers. This pitfall is also relevant for localization criteria such as box/approx IoU and point inside box/approx.

P2.4: disregard of the properties of the algorithm output

Reference-based metrics compare the algorithm output with a reference annotation to compute a metric score. Thus, the content and format of the prediction are of high importance when considering metric choice. Overlapping predictions in segmentation problems, for instance, can yield misleading results. In Extended Data Figure 2a, the predicted instances overlap only slightly, failing to reflect that the reference instances actually overlap substantially. This is not detected by common metrics. Other examples are empty predictions, which can cause division-by-zero errors in metric calculations, as illustrated in Extended Data Figure 2b, and the lack of predicted class scores (Fig. SN 2.20). We present further examples of P2.4 pitfalls in Supplementary Note 2.2.4.
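
As a minimal illustration (assuming a simple binary-mask implementation of our own), an empty reference together with an empty prediction leaves the DSC undefined, and a naive implementation either crashes or silently returns a misleading value:

    import numpy as np

    def dsc(reference: np.ndarray, prediction: np.ndarray) -> float:
        """Dice similarity coefficient; returns NaN when both masks are empty."""
        intersection = np.logical_and(reference, prediction).sum()
        denominator = reference.sum() + prediction.sum()
        if denominator == 0:
            # Both masks are empty: the metric is undefined. A naive implementation
            # would raise a division-by-zero error or return 0, silently penalizing
            # a correct 'no structure present' prediction.
            return float("nan")
        return 2 * intersection / denominator

    empty = np.zeros((8, 8), dtype=bool)
    print(dsc(empty, empty))  # nan; such cases must be handled explicitly during aggregation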

P3: pitfalls related to poor metric application

Selected metrics need to be applied to an image or an entire dataset. This step is not straightforward and comes with several pitfalls. For instance, when aggregating metric values over several images or patients, a common mistake is ignoring the hierarchical data structure, such as data from several hospitals or varying numbers of images from each patient. We present three examples of P3 pitfalls in Figure 6; for more pitfalls in this category, please refer to Supplementary Note 2.3. P3 can be further divided into five subcategories that are described in the following paragraphs.

Fig. 6: P3: pitfalls related to poor metric application.

a, Non-standardized metric implementation. In the case of the AP metric and the construction of the PR curve, the strategy for treating identical scores (here, a confidence (Conf.) score of 0.80 is present twice) has a substantial impact on the metric scores. Microsoft COCO11 and CityScapes23 are used as examples. b, Non-independence of test cases. The number of images taken from Patient 1 is much higher than the numbers taken from Patients 2–5. Averaging over all DSC values results in a high overall score. Aggregating metric values per patient reveals much higher scores for Patient 1 than for the others, a difference that would have been hidden by simple aggregation. c, Uninformative visualization. A single box plot (left) does not give sufficient information about the raw metric value distribution. Adding the raw metric values as jittered dots on top (right) adds important information (here, on clusters). In the case of non-independent validation data, color- or shape-coding helps reveal data clusters.

P3.1: inadequate metric implementation

Metric implementation is, unfortunately, not standardized. As shown by ref. 22, different researchers typically employ different implementations of the same metric, which can yield substantial variation in the metric scores. Although some metrics can be implemented in a straightforward way, others require more advanced techniques and offer different possibilities. In the following, we provide some examples of inadequate metric implementation:

  • How identical confidence scores are handled in the computation of the AP metric might lead to substantial differences in metric scores. Microsoft Common Objects in Context (COCO)11, for instance, processes each prediction individually, whereas CityScapes23 processes all predictions with the same score in one joint step. Figure 6a provides an example in which two predictions have the same confidence score; the final metric scores differ depending on the chosen handling strategy for identical confidence scores (a simplified numerical sketch is given after this list). Similar issues can arise with other curve-based metrics, such as area under the receiver operating characteristic curve (AUROC), AP or free-response receiver operating characteristic scores (see, for example, ref. 24).

  • Metric implementation might be subject to discretization issues, such as the chosen discretization of continuous variables, which can cause differences in the metric scores, as illustrated in Figure SN 2.22.

  • For metrics assessing structure boundaries, such as the average symmetric surface distance (ASSD), the exact boundary extraction method is not standardized. Thus, for example, the boundary extraction method implemented by the Liver Tumor Segmentation challenge25 and that implemented by Google DeepMind (https://github.com/deepmind/surface-distance) could produce different metric scores for the ASSD. This is especially critical for metrics that are sensitive to small contour changes, such as the HD.

  • Suboptimal choices of hyperparameters can also lead to metric scores that do not reflect the domain interest. For example, the choice of a threshold on a localization criterion (Fig. SN 2.23) or the chosen hyperparameter for the Fβ score will heavily influence the subsequent metric scores26.
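
The first point above can be made concrete with a simplified sketch (all-point interpolation of the precision–recall curve; not the exact Microsoft COCO or CityScapes code):

    import numpy as np

    def average_precision(cum_tp, cum_fp, n_ref):
        """All-point interpolated AP from cumulative TP/FP counts along the ranked predictions."""
        cum_tp, cum_fp = np.asarray(cum_tp, float), np.asarray(cum_fp, float)
        recall = cum_tp / n_ref
        precision = cum_tp / (cum_tp + cum_fp)
        ap, previous_recall = 0.0, 0.0
        for r in recall:
            ap += (r - previous_recall) * precision[recall >= r].max()  # precision envelope
            previous_recall = r
        return ap

    # Three predictions with confidence scores [0.9, 0.8, 0.8] against two reference objects;
    # the 0.9 prediction is a TP, and the tied 0.8 pair contains one TP and one FP.

    # Strategy A: each prediction extends the curve individually (TP of the tie ranked first).
    ap_individual = average_precision(cum_tp=[1, 2, 2], cum_fp=[0, 0, 1], n_ref=2)

    # Strategy B: all predictions sharing a confidence score are processed in one joint step.
    ap_joint = average_precision(cum_tp=[1, 2], cum_fp=[0, 1], n_ref=2)

    print(f"AP, individual handling of ties: {ap_individual:.2f}")  # 1.00
    print(f"AP, joint handling of ties:      {ap_joint:.2f}")       # 0.83

The same set of predictions thus receives different AP scores depending solely on how ties are handled; with individual processing, even the arbitrary ordering within the tie changes the result.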

More P3.1 pitfalls can be found in Supplementary Note 2.3.1.

P3.2: inadequate metric aggregation

A common pitfall with respect to metric application is to simply aggregate metric values over the entire dataset and/or all classes. As detailed in Figure 6b and Supplementary Note 2.3.2, important information can be lost in this process, and metric results can be misleading. For example, the popular TorchMetrics framework calculates the DSC metric by default as a global average over all pixels in the dataset without considering their image or class of origin (https://torchmetrics.readthedocs.io/en/stable/classification/dice.html?highlight=dice). Such a calculation eliminates the possibility of interpreting the final metric score with respect to individual images and classes. As a consequence, errors in small structures might be suppressed by correctly segmented larger structures in other images (see, for example, Fig. SN 2.26). An adequate aggregation scheme is also crucial for handling hierarchical class structure (Fig. SN 2.27), missing values (Fig. SN 2.29) and potential biases (Fig. SN 2.28) of the algorithm. Further P3.2 pitfalls are shown in Supplementary Note 2.3.2.
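
The difference between the two aggregation schemes can be illustrated with plain NumPy (synthetic masks; this is not the TorchMetrics implementation itself):

    import numpy as np

    def dsc(reference, prediction):
        """Dice similarity coefficient over the given binary arrays."""
        intersection = np.logical_and(reference, prediction).sum()
        return 2 * intersection / (reference.sum() + prediction.sum())

    # Image 1: large structure (900 pixels), segmented almost perfectly.
    ref_1 = np.zeros((64, 64), dtype=bool)
    ref_1[10:40, 10:40] = True
    pred_1 = np.zeros_like(ref_1)
    pred_1[10:40, 11:40] = True

    # Image 2: small structure (9 pixels), missed entirely.
    ref_2 = np.zeros((64, 64), dtype=bool)
    ref_2[5:8, 5:8] = True
    pred_2 = np.zeros_like(ref_2)

    # Per-image aggregation exposes the failure on image 2.
    per_image = [dsc(ref_1, pred_1), dsc(ref_2, pred_2)]
    print(f"per-image DSC: {per_image[0]:.2f} and {per_image[1]:.2f}, mean: {np.mean(per_image):.2f}")
    # per-image DSC: 0.98 and 0.00, mean: 0.49

    # Pooling all pixels of the dataset into one global computation hides the failure.
    print(f"globally pooled DSC: {dsc(np.stack([ref_1, ref_2]), np.stack([pred_1, pred_2])):.2f}")
    # globally pooled DSC: 0.98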

P3.3: inadequate ranking scheme

Rankings are often created to compare algorithm performance. In this context, several pitfalls pertain to either metric relationships or ranking uncertainty. For example, to assess different properties of an algorithm, it is advisable to select multiple metrics and determine their values. However, the chosen metrics should assess complementary properties and should not be mathematically related. For example, the DSC and IoU are closely related, so using both in combination would not provide any additional information over using either individually (Fig. SN 2.30). In this context, unawareness of metric synonyms can be equally misleading. Metrics can have different names; for instance, sensitivity and recall refer to the same mathematical formula. Despite this seemingly trivial fact, an analysis of 138 biomedical-image-analysis challenges5 found 3 challenges that unknowingly used two versions of the same metric to calculate their rankings. Moreover, rankings themselves can be unstable (Fig. SN 2.31)1,27; rankings are highly sensitive to alterations of the metric aggregation operators, the underlying dataset or the general ranking method. Thus, if the robustness of rankings is disregarded, the winning algorithm might be identified by chance, rather than by true superiority.
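
The relationship between the two metrics can be stated explicitly as a short derivation (standard set notation; A denotes the predicted segmentation and B the reference). For any single prediction–reference pair, the DSC and the IoU are monotonic functions of each other and therefore convey the same ordering information:

    \mathrm{DSC} = \frac{2\,|A \cap B|}{|A| + |B|}, \qquad
    \mathrm{IoU} = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}
    \quad\Longrightarrow\quad
    \mathrm{IoU} = \frac{\mathrm{DSC}}{2 - \mathrm{DSC}}, \qquad
    \mathrm{DSC} = \frac{2\,\mathrm{IoU}}{1 + \mathrm{IoU}}.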

P3.4: inadequate metric reporting

A thorough reporting of metric values and aggregates is important in terms of both transparency and interpretability. However, several pitfalls should be avoided. Notably, different types of visualization may vary substantially in terms of interpretability, as shown in Figure 6c. For example, although a box plot provides basic information, it does not depict the distribution of metric values. This may conceal important information, such as specific images on which an algorithm performed poorly. Other pitfalls in this category relate to the non-determinism of algorithms, which introduces a natural variability to the results of a neural network, even with fixed seeds (Fig. SN 2.32). This issue is aggravated by inadequate reporting, for instance, reporting solely the results from the best run instead of proper cross-validation and reporting of the variability across different runs. Generally, shortcomings in reporting, such as providing no s.d. or confidence intervals in the presented results, are common. Concrete examples of P3.4 pitfalls can be found in Supplementary Note 2.3.4.
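
A minimal matplotlib sketch (synthetic DSC values with an artificial low-performing cluster) shows how adding jittered raw values to a box plot reveals structure that the box plot alone conceals:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    # Synthetic per-image DSC values with a small hidden cluster of poor cases
    # (for example, images from a different acquisition site).
    dsc_values = np.clip(np.concatenate([rng.normal(0.90, 0.03, 40),
                                         rng.normal(0.55, 0.05, 10)]), 0, 1)

    fig, axes = plt.subplots(1, 2, figsize=(7, 3), sharey=True)

    # Box plot alone: the poorly performing cluster is easy to miss.
    axes[0].boxplot(dsc_values)
    axes[0].set_title("box plot only")

    # Box plot with jittered raw values: the cluster becomes visible.
    axes[1].boxplot(dsc_values)
    axes[1].scatter(1 + rng.uniform(-0.08, 0.08, dsc_values.size), dsc_values, s=10, alpha=0.6)
    axes[1].set_title("box plot + raw values")

    axes[0].set_ylabel("DSC")
    plt.tight_layout()
    plt.show()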

P3.5: inadequate interpretation of metric values

Interpreting metric scores and aggregates is an important step in the analysis of algorithm performance. However, several pitfalls can arise during interpretation. In rankings, for example, minor differences in metric scores might not be relevant from an application perspective but might still yield better ranks (Fig. SN 2.36). Furthermore, some metrics do not have upper or lower bounds, or the theoretical bounds may not be achievable in practice, rendering interpretation difficult (Fig. SN 2.35). More information on interpretation-based pitfalls can be found in Supplementary Note 2.3.5.

An illustrated common access point to metric definitions and pitfalls

To underline the importance of a common access point to metric pitfalls, we conducted a search for individual metric-related pitfalls on the platforms Google Scholar and Google, with the aim of determining how many of the pitfalls that we identified could be located in existing resources. We were able to locate only a portion of the identified pitfalls in existing literature (68%) or online resources, such as blog posts (11%; 8% were found in both). Only 27% of the pitfalls that we located were presented visually.

Our work now provides this key resource in a highly structured and easily understandable form. Supplementary Note 2 contains a dedicated illustration for each of the pitfalls discussed, thus facilitating reader comprehension and making the information accessible to everyone regardless of their level of expertise. A further core contribution of our work is the metric profiles presented in Supplementary Note 3, which, for each metric, summarize the most important information deemed of particular relevance by the Metrics Reloaded consortium that produced this publication’s sister work5. The profiles provide the reader with a compact, at-a-glance overview of each metric and an explanation of the limitations and pitfalls identified in the Delphi process that we conducted.

Discussion

Flaws in the validation of algorithms for biomedical image analysis significantly impede the translation of methods into (clinical) practice and undermine the assessment of scientific progress in the field28. They are often caused by poor metric choices that disregard the specific properties and limitations of individual validation metrics. This work represents a comprehensive collection of pitfalls and limitations when using validation metrics in image-level classification, semantic segmentation, instance segmentation and object-detection tasks. Our work enables researchers to gain a deep understanding of and familiarity with both the overall topic and individual metrics by providing a common access point to key information that was previously largely scattered and inaccessible and that researchers can draw on when validating image analysis algorithms. Our work aims to disrupt the current common practice of choosing metrics on the basis of their popularity rather than their suitability to the underlying research problem. This practice, which, for instance, often manifests in the unreflected and inadequate use of the DSC, is concerningly prevalent, even among prestigious, high-quality biomedical-image-analysis competitions1,2,3,4,29,30,31,32. The educational aspect of our work is complemented by dedicated ‘metric profiles’ that detail the definitions and properties of all the discussed metrics. Our work pioneers the examination of artificial intelligence (AI) validation pitfalls in the biomedical domain, in which they are arguably more critical than in many other areas. Flaws in biomedical algorithm validation can directly affect patient well-being and safety.

We argue that shortcomings in current common practice are rooted in the low accessibility of information on the pitfalls and limitations of commonly used validation metrics. A literature search conducted from the point of view of a researcher seeking information on individual metrics confirmed both that the number of search results far exceeds what could be examined in a reasonable time and with realistic effort, and that a common point of entry to reliable metric information is lacking. Even when the specific pitfalls and related keywords uncovered by our consortium were known, only a fraction could be found in the literature, indicating the novelty and added value of our work.

Several constraints regarding our literature search must be noted. First, the remarkably high number of search results inevitably includes duplicates of papers (for example, the same work in a conference paper and on arXiv) as well as results that are out of scope (for example, refs. 33,34); in the cited examples, for instance, this was due to a metric acronym (AUC) also being an acronym for another entity (a trinucleotide) in a different domain, or the word ‘sensitivity’ being used in its common, non-metric meaning. Moreover, common words used to describe pitfalls, such as ‘problem’ or ‘issue’, are, by nature, present in many publications discussing any kind of research, rendering them unusable for a dedicated search; as a result, we may have missed publications that discuss pitfalls using only such generic terms. Similarly, when searching for specific pitfalls, many of the returned results containing the appropriate keywords did not actually refer to metrics or algorithm validation, but to other parts of a model or biomedical problem (for example, the need for stratification is commonly discussed with regard to the design of clinical studies, but not with regard to their validation). Character limits in the Google Scholar search bar further complicate or prevent the use of comprehensive search strings. Finally, it is both possible and probable that our literature search did not retrieve all publications and non-peer-reviewed online resources that mention a particular pitfall, because even extensive search strings might not cover the particular words used for a pitfall description.

None of these observations, however, detracts from our hypothesis. In fact, all of the above observations reinforce our finding that, for any individual researcher, retrieving information on metrics of interest is difficult, if not impossible. In many cases, finding information on pitfalls seems feasible only if the specific pitfall and its related keywords are known exactly, which, of course, is usually not the case. Overall accessibility of such essential information, therefore, currently leaves much to be desired.

Compiling this information through a multistage Delphi process allowed us to leverage distributed knowledge from experts across different biomedical imaging domains and thus ensure that the resulting illustrated collection of metric pitfalls and limitations is both comprehensive and of maximum practical relevance. We maintained a continuing connection of our work to practical applications by sharing the first results of this process as a dynamic preprint12 with dedicated calls for feedback, as well as by crowdsourcing further suggestions on social media.

Although their severity and practical consequences might differ between applications, we found that the pitfalls generalize across different imaging modalities and application domains. By categorizing them solely according to their underlying sources, we were able to create an overarching taxonomy that goes beyond domain-specific concerns and thus enjoys broad applicability. Given the large number of identified pitfalls, our taxonomy crucially establishes structure in the topic. Moreover, by relating types of pitfalls to the respective metrics that they apply to and illustrating them, it enables researchers to gain a deeper, systemic understanding of the causes of metric failure.

Our complementary Metrics Reloaded recommendation framework, which guides researchers towards the selection of appropriate validation metrics for specific tasks and is introduced in a sister publication to this work5, shares the same principle of domain independence. Its recommendations are based on the creation of a ‘problem fingerprint’ that abstracts from specific domain knowledge and, informed by the pitfalls discussed here, captures all properties relevant to metric selection for a specific biomedical problem. In this sister publication, we present recommendations to avoid the pitfalls described in this work. Notably, the finding that pitfalls generalize and can be categorized in a domain-independent manner opens up avenues for future expansion of our work to other fields of machine-learning-based imaging, such as general computer vision (see below), thus freeing it from its major constraint of exclusively focusing on biomedical problems.

It is worth mentioning that we examined only pitfalls related to the tasks of image-level classification, semantic segmentation, instance segmentation and object detection, which can all be considered classification tasks at different levels (image, object or pixel) and hence share similarities in their validation. Although including a wider range of biomedical problems not considered classification tasks, such as regression or registration, would have gone beyond the scope of the present work, we envision that future work will address this. Moreover, our work focused on pitfalls related to reference-based metrics. Exploration of pitfalls pertaining to non-reference-based metrics, such as metrics that assess speed, memory consumption or carbon footprint, could be a future direction. Finally, although we aspired to be as comprehensive as possible in our compilation, there could be more pitfalls to take into account that the consortium and the community have so far failed to recognize. Should this be the case, our dynamic Metrics Reloaded online platform, which is currently under development and will be updated continuously after release, will allow us to easily and transparently append missed pitfalls. This way, our work will remain a reliable point of access, reflecting the state of the art at any given moment in the future. In this context, we explicitly welcome feedback and further suggestions.

The expert consortium was primarily compiled to cover the required expertise from various fields, but it also comprises researchers from different countries, career stages, ages, roles and backgrounds (details can be found in the Supplementary Methods). The work mainly focused on biomedical applications. The pitfalls presented here are therefore of the highest relevance for biological and clinical use cases. Their clear generalization across different biomedical imaging domains, however, indicates broader generalizability to fields such as general computer vision. Future work could thus see a major expansion of our scope to AI validation well beyond biomedical research. Regardless of this possibility, we strongly believe that, by raising awareness of metric-related pitfalls, our work will kick off a necessary scientific debate. Specifically, we see its potential in inducing the scientific communities in other areas of AI research to follow suit and investigate pitfalls and common practices impairing progress in their specific domains.

In conclusion, our work presents a comprehensive and illustrated access point to information on validation metric properties and their pitfalls. We envision it to not only impact the quality of algorithm validation in biomedical imaging and ultimately catalyze faster translation into practice, but also to raise awareness on common issues and call into question flawed AI validation practice far beyond the boundaries of the field.