Introduction

During the last decade, the field of artificial intelligence (AI) and in particular machine learning (ML) has experienced unprecedented advances, largely due to breakthroughs in deep learning (DL)1,2,3,4,5 and increased computational power. Recently, the introduction of easy-to-use yet extremely capable models such as GPT-46 and Stable Diffusion7 has further expanded the technology to an even broader audience. The large-scale implementation of AI8 in fields such as manufacturing, agriculture and food, automated driving, smart cities and healthcare has since shifted the topic into the centre of attention not only of scholars and companies but also of the general public.

The introduction of novel and disruptive technologies is typically accompanied by an oscillating struggle between exploiting technological opportunities and mitigating risks. ML is proving to have great potential to improve many aspects of our lives9,10,11. However, the race for implementation and utilisation is currently outpacing comprehension of the technology. The complex, black-box character of AI applications has therefore largely steered the public conversation towards safety, security and privacy concerns12,13. A lack of confidence among the general population in the transparency of AI hinders its utilisation for societal benefit and economic growth. It can lead to a slowed adoption of innovations in crucial areas and discourage innovators from unlocking the technology’s full potential. Hence, the demand for regulation (e.g., EU AI Act14, US FDA considerations15) as well as the need for an improved understanding of AI is ever increasing. This is of particular importance in the field of healthcare due to its large impact on people’s lives. The number of ML solutions in medicine (research tools and commercial products) is steadily on the rise, in particular in the fields of radiology and cardiology16,17. Despite breakthroughs up to human-level performance9,18,19,20, ML-backed medical products are mainly used as diagnosis assistance systems17, leaving the final decision to human medical professionals. In particular, medical ML solutions are successfully solving the task of image segmentation21,22,23. Due to the unknown consequences of using AI for medical decision-making, meeting stringent regulatory requirements is of high importance in the approval process of new AI products for medical practice. Decision-making needs to be supported by reliable health data to generate consistent evidence. One of the drivers for evidence-based medicine approaches was the introduction of scientific standards in clinical practice24. Since then, data integrity (defined by the ALCOA-principles or ALCOA+25) has become an essential requirement of several guidelines, such as good clinical practice26, good laboratory practice27 or good manufacturing practice28. In the pharmaceutical industry, data integrity plays a similarly important role as a requirement for drug trials. While data integrity focuses on maintaining the accuracy and consistency of a dataset over its entire life cycle, data quality is concerned with the fitness of data for use.

To improve confidence in AI utilisation in general, the focus has been put on the development of so-called trustworthy AI, which aims at overcoming the black-box character and developing a better understanding. Several approaches and definitions for trustworthy AI have been discussed and published over the past years by researchers29,30,31,32,33, public entities34,35, corporations36, and organisations37,38. Depending on the area of interest, trustworthiness may include (but is far from limited to) topics such as ethics; societal and environmental well-being; security, safety, and privacy; robustness, interpretability and explainability; providing appropriate documentation for transparency and accountability29,30,31,32,33,34,35,36,37,38. In particular, the approach of achieving transparency through documentation has gained much attention in the form of reporting guidelines and best practices. While some initiatives cover the entire ML system and development pipeline (e.g., MINIMAR39, FactSheets40), others are concerned with documentation surrounding the model (e.g., Model Cards41), and still others concentrate on the documentation of datasets (e.g., Datasheets42, STANDING Together43,44, Dataset Nutrition Label45, Data Cards46, Healthsheet47, Data Statements for NLP48). These standardisation efforts are a crucial first step for developing a better understanding of ML systems as a whole and of the interdependence of their components (e.g., data and algorithm). However, these approaches cover only limited information on the content of datasets and their suitability for use in ML. Additionally, we note that reporting guidelines and best practices concerning the documentation of datasets are mostly written from the perspective of providers and creators of datasets42,45, with some explicitly trying to reduce information asymmetry between supplier and consumer40.

One of the most critical components of an AI application is the quality of its training data, since it has a fundamental impact on the resulting system. It lays the foundation and inherently provides limitations for the AI application. If the data used for training a model is bad, the resulting AI will be bad as well (‘garbage in, garbage out’49). Neural networks are prone to learning biases from training data and amplifying them at test time50, giving rise to a much-discussed aspect of AI behaviour: fairness51. Many remedies have been put forward to tackle discriminating and unfair algorithm behaviour52,53,54. Yet, one of the main causes of undesirable learned patterns lies in biased training data55,56. Thus, data quality plays a decisive role in the creation of trustworthy AI and assessing the quality of a dataset is of utmost importance to AI developers, as well as regulators and notified bodies.

The scientific investigation of data quality was initiated roughly 30 years ago. The term data quality was famously broken down into so-called data quality dimensions by Wang and Strong in 199657. These dimensions represent different characteristics of a dataset which together constitute the quality of the data. Throughout the years, general data quality frameworks have taken advantage of this approach and have produced refined lists of data quality dimensions for various fields of application and types of data. Naturally, this has produced different definitions and understandings. Within this systematic review, we transfer the existing research and knowledge about data quality to the topic of AI in medicine. In particular, we investigate the research question: Along which characteristics should data quality be evaluated when employing a dataset for trustworthy AI in medicine? The systematic comparison of previous studies on data quality combined with the perspective on modern ML enables us to develop a specialised data quality framework for medical training data: the METRIC-framework. It is intended for assessing the suitability of a fixed training dataset for a specific ML application, meaning that the model to be trained as well as the intended use case should drive the data quality evaluation. The METRIC-framework provides a comprehensive list of 15 awareness dimensions which developers of AI medical devices should be mindful of. Knowledge about the composition of medical training data with respect to the dimensions of the METRIC-framework should drastically improve comprehension of the behaviour of ML applications and lead to more trustworthy AI in medicine.

We note that data quality itself is a term used in different settings, with different meanings and varying scopes. For the purpose of this review, we focus on the actual content of a dataset instead of the surrounding technical infrastructure. We do so since the content is the part of a dataset which ML applications use to learn patterns and develop their characteristics. We thus exclude research on data quality considerations and frameworks within the topic of data governance and data management58,59. This concerns aspects such as data integration60, information quality management61, ETL processes in data warehouses62, or tools for data warehouses63,64 which do not affect the behavioural characteristics of AI systems. We also omit records discussing case studies of survey data quality65,66, as well as training strategies to cope with bad data67,68,69,70,71,72.

We further point out that the use of the term AI in current discussions is scientifically imprecise, since discussions within the healthcare sector almost exclusively revolve around the implementation of ML approaches, in particular DL approaches. Technically, the term AI spans a much wider range of technologies than just DL, which itself is part of the field of ML. Due to the complexity of DL applications and their proficiency in solving tasks deemed to require human intelligence, the terms are currently often used interchangeably in the literature. We follow the same vocabulary here (e.g., ‘trustworthy AI’, ‘AI in medicine’) but stress the limitation of our results to ML approaches.

Results

In order to answer the research question ‘Along which characteristics should data quality be evaluated when employing a dataset for trustworthy AI in medicine?’, we conducted an unregistered systematic review following the PRISMA guidelines73. Our predetermined search string contains variations of the following terms: (i) data quality, (ii) framework or dimensions and (iii) machine learning (see Methods for more details and the full search string). The initial search of the databases Web of Science, PubMed and ACM Digital Library was performed on the 12th of April 2024 and yielded 4633 unique results. After title and abstract screening, adding references of the remaining records (‘snowballing’) and full-text assessment, we found 120 records that matched our eligibility criteria (see Methods). This represents the literature corpus that serves as a foundation for answering the research question. The full workflow is illustrated in Fig. 1.

Fig. 1: PRISMA flow diagram.
figure 1

The flow diagram shows the number of records identified, included and excluded at the different stages of the systematic review. The eligibility criteria for inclusion and exclusion are presented in the bottom right hand side. From a total of 5408 identified studies (4633 from database search, 775 from snowballing), the resulting literature corpus on data quality for trustworthy AI in medicine includes 120 studies.

In Fig. 2, the papers from our literature corpus are displayed according to their publication year57,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192. The overarching topics contained in the corpus naturally divide the papers into three categories: general data (35 entries), big data (8 entries) and ML data (77 entries). This reflects the historic development of the research field of data quality during the last 30 years.

Fig. 2: Studies included in the literature corpus sorted by publication date.
figure 2

The 120 studies are divided into the three categories general data (35), big data (8) and ML data (77), which represent major changes in the perception of data quality. The studies' affiliation to either non-life science (76) or life science (44) related topics is indicated as well.

General data quality

The field first shifted into focus with digital and automatically mass-generated data during the 1980s and 1990s, which created a need for quality evaluation and control on a broad scale. While during the first 10 years landmark papers57,74 built the foundation for the field, the last 20 years have seen general data quality frameworks published more frequently75,76,77,78,79,80,81,82,83,84. The literature corpus additionally contains general data quality frameworks with high specificity to medical applications85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105, while frameworks with high specificity to non-medical topics193,194 were excluded.

The early data quality research in the 1980s and 1990s uncovered the lack of objective measures to assess data quality, which led to the introduction of task-dependent dimensions and the establishment of a data quality framework from the perspective of the data consumer57. Another fundamental challenge in the data quality field is efficient data storage while maintaining quality. This was first investigated with the introduction of a data quality framework from the perspective of the data handler74. Both approaches to data quality proved to be useful and were unified in one framework75. In the following years, the frameworks were further extended76,77, equipped with measures78,79 and refined80,81. Moreover, it became clear that specialised fields such as the medical domain require adapted frameworks.

With the overarching question of how to improve patient care and the rise of electronic health records (EHR) in the 1990s, the need for high data quality in the medical sector increased. Accordingly, one of the first data quality frameworks in healthcare was implemented by the Canadian Institute for Health Information85. The first comprehensive data quality framework specifically for EHR data in the literature corpus was established by conducting a survey of quality challenges in EHR86. It considers, among other characteristics, accuracy, completeness and particularly timeliness. However, accuracy is hard to quantify in the medical context as even the diagnoses of experienced practitioners sometimes do not coincide. Accordingly, the notion of concordance of differing data sources was introduced87. Yet, the data quality frameworks for EHR could only be transferred to other types of medical data to a certain extent. Thus, data quality frameworks for particular data types such as immunisation data, public health data, multi-centre healthcare data or similar were put forward88,89,90,91,92,93,94,95. The various frameworks still suffered from inconsistent terminology, and attempts were made to harmonise the definitions and assessment96,97,98,99,100,101,102,103. In particular, Kahn et al.97 proposed a framework with exact definitions, and recently Declerck et al.103 published a ‘review of reviews’ portraying the different terminologies and attempting to map them to a reference. While these developments have advanced the understanding of data quality in the context of medical applications, frameworks for EHR frequently focus on the data quality of individual patients86,87, neglecting data quality aspects for the overall population. In particular, representativeness is often not a factor86,87, although it is a crucial property for secondary use of data in clinical studies88 or when reusing medical data as training data for ML applications.

Big data quality

As the amount of data from varying sources grew, conventional databases reached their capacity and the field of big data emerged. Big data is generally concerned with handling huge unstructured data streams that need to be processed at a rapid pace, emphasising the need for extended data quality frameworks. This development is reflected by a small wave of papers published between 2015 and 2020106,107,108,109,110,111,112,113. For example, the weaker structure of the data encouraged the use of data quality frameworks that include the data schema as a data quality dimension106,107. Further, the increasing amount of data requires the computational efficiency of the surrounding database infrastructure to be a part of big data quality frameworks108,109,110. Computational efficiency is also a limiting factor when ML methods are applied to big data. While it is generally assumed that more data leads to better results, this has to be balanced with computational capabilities. Hence, a data quality framework was developed that bridges the gap between ML and big data111. We note that the ‘4 V’s’ (volume, velocity, veracity and variety) of big data195 implicitly suggest a framework for big data quality. However, the ‘4 V’s’ are in fact big data traits which can have an effect on data quality but are not considered data quality dimensions196. They therefore do not contribute to answering our research question and are not further discussed. This might change in the future when data from wearables or remote patient monitoring sensors become available for health management.

ML data quality

The performance and behaviour of DL applications heavily depend on the quality of the data used during training, as this is the foundation from which patterns are learned. The records of the literature corpus which discuss or empirically evaluate the effect of data quality on DL deal with a wide variety of data types and models. Many records investigate tabular data while utilising both simpler and more advanced architectures114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132. Recently, studies increasingly look at data quality in the context of sequential data (often time series)133,134,135,136,137,138,139, images119,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182, natural language120,121,145,182,183,184,185,186 or other complex types of data122,151,187,188,189,190,191,192. Some papers try to estimate data quality effects on ML models by using synthetic data122,123,189.

In contrast to the big data and general data quality literature from our corpus, the DL papers focus on the evaluation of one or very few specific data quality dimensions without (yet) considering broader theoretical data quality frameworks. The dimensions that are predominantly investigated are those which can easily be manipulated and lend themselves to application across a wide range of datasets irrespective of specific tasks. The most prominent dimension is amount of data123,124,125,126,129,146,147,148,149,150,151,152,153,154,155,156,157,158,159,183,184,185,186,187,188,189, which is empirically shown to benefit performance, albeit in a saturating manner. Another dominant topic is completeness, which the ML community almost exclusively refers to as missing data119,125,126,127,128,133,134,135,182. The effect that data errors have on the DL application is also frequently investigated. Specifically, this is done by separately looking at perturbed features (inputs of a NN)128,129,130,131,132,133,134,136,159,160,161,162,163,164,165,166,167,168,182 and noisy targets (predictions to be generated by a NN)131,132,133,154,155,156,157,158,159,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183. Many ML settings are classification tasks, which is reflected by the corpus often addressing label noise157,158,159,169,182,183. One record highlights the hefty weight that physicians’ annotations carry in medicine158. In order to evaluate the effect of data quality (features or targets) on ML applications, the training data is commonly manipulated. On the feature (input) side, e.g., images are distorted by adjusting contrast, whereas time series sequences are disturbed by swapping elements. On the target side, e.g., correct labels are randomly replaced by false ones.
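To make this type of controlled manipulation concrete, the following minimal sketch (in Python with NumPy; function names, noise rates and toy data are our own illustrative choices, not taken from the cited studies) perturbs image contrast on the feature side and randomly flips a fraction of class labels on the target side, so that the effect of each degradation on a trained model could subsequently be measured.

import numpy as np

rng = np.random.default_rng(0)

def adjust_contrast(images, factor):
    # Feature (input) perturbation: rescale pixel intensities around each image's mean
    mean = images.mean(axis=(1, 2), keepdims=True)
    return np.clip(mean + factor * (images - mean), 0.0, 1.0)

def flip_labels(labels, flip_rate, n_classes):
    # Target perturbation: replace a fraction of labels with a different, random class
    noisy = labels.copy()
    flip = rng.random(len(labels)) < flip_rate
    noisy[flip] = (noisy[flip] + rng.integers(1, n_classes, flip.sum())) % n_classes
    return noisy

images = rng.random((100, 28, 28))                                # toy grey-scale images in [0, 1]
labels = rng.integers(0, 10, 100)                                 # toy labels for 10 classes
degraded_images = adjust_contrast(images, factor=0.5)             # reduced contrast
noisy_labels = flip_labels(labels, flip_rate=0.2, n_classes=10)   # 20% label noise

Training identical models on the original and on the degraded data and comparing their test performance corresponds to the experimental pattern described above.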

When it comes to the concrete behaviour change of the DL algorithm, most of the DL papers in the literature corpus investigate the robustness of a model, i.e. the stable behaviour of a model when facing erroneous inputs or a limited amount of data. Only a few records investigate generalisability119,144,145 or distribution shift139,192, a model’s capability of coping with new, unseen data. Another noteworthy exception is Ovadia et al.145, who additionally study predictive uncertainty.

Overall, theoretical data quality frameworks enjoy little attention from the ML community due to the novelty of the ML research field. Papers often focus on a few specific data quality dimensions and tasks. Each task comes with its specific data type, necessitating different approaches to manipulate the data and measure these effects. The research dealing with the impact of manipulated data is heavily skewed towards robust behaviour in the sense of predictive performance. Other possibly affected aspects such as explainability or fairness are underrepresented and to some degree neglected, which is a potential shortcoming for safety-critical applications such as medical diagnosis predictions.

METRIC-framework for medical training data

The literature corpus has shown that while similar ideas exist for the assessment of data quality across fields and applications, the idiosyncrasy of each field or application can only be captured by specialised frameworks rather than by a one-model-fits-all framework. The evaluation of data quality plays a particularly important role in the field of ML because the behaviour of an ML system depends not only on the choice of algorithm but also, strongly, on its training data. At the same time, ML is implemented in various fields, each processing and requiring different types and qualities of data. We therefore propose a specialised data quality framework for evaluating the quality of medical training data: the METRIC-framework (Fig. 3), which is based on our literature corpus57,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192. We note that the METRIC-framework is specifically not designed to assess the data quality of a dataset in a vacuum. Rather, it was conceived for the situation where the purpose of the desired medical AI is known. Thus, the intention of the METRIC-framework is to assess the appropriateness of a dataset with respect to a specific use case. From now on, we refer to data quality for training (or test) data of medical ML applications only. We point out that our framework does not yet include a guideline on the assessment or measurement of data qualities but rather presents a set of awareness dimensions which play a central role in the evaluation of data quality.

Fig. 3: The METRIC-framework.
figure 3

This specialised framework for evaluating data quality of the content of medical training data includes a comprehensive set of awareness dimensions. The inner circle divides data quality into five clusters. These clusters contain a total of 15 data quality dimensions, which are shown on the outer circle. The subdimensions presented in grey on the border of the figure contribute to the superordinate dimension. Due to the shape of the graphic, we refer to it as wheel of data quality.

While examining the literature corpus, we found that terms describing data quality appear under varying definitions, or often with no definition at all. While standardisation efforts exist for the terminology in the context of evaluating data quality83,197,198, they are often not employed or did not yet exist when older papers were published, making comparisons difficult. Therefore, as a first step, we extracted all mentioned data quality dimensions from the literature corpus together with their definitions (if present) and added them to a list. This yielded 461 different terms with 991 mentions across all papers. Second, we hierarchically clustered the terms with respect to their intended meaning and according to their dependencies into clusters, dimensions and subdimensions (see Methods for more details on data extraction). We thus obtained 38 relevant dimensions and subdimensions, which are displayed on the outer circle of Fig. 3. In Tables 1–6, we provide a complete list of definitions for all 38 relevant dimensions and subdimensions, as well as their hierarchy, practical examples and references with respect to the literature corpus. We adopted definitions from a recent data quality glossary197 if they existed there and met our understanding of the dimension in the given context of medical training data. If necessary, we included definitions given by Wang et al.57 in a second iteration. If neither of these two sources suggested an appropriate definition, we captured the meaning of the desired term on the basis of the literature corpus and thus determined its definition in the context of medical training data.

Table 1 Measurement Process96,105 cluster: definitions, examples and references
Table 2 Timeliness cluster: definitions, examples and references
Table 3 Representativeness74,85,89,90,102,103,104,105,106,121 cluster: definitions, examples and references
Table 4 Informativeness80 cluster: definitions, examples and references
Table 5 Consistency76,82,83,84,89,90,98,101,103,104,105,106,120 cluster: definitions, examples and references
Table 6 Data Management cluster: definitions and references

The METRIC-framework encompasses three levels of detail: clusters, which pool similar dimensions; dimensions, which are individual characteristics of data quality; and subdimensions, which split larger dimensions into more detailed attributes (compare Fig. 3 from inside to outside). Besides the terms contained in the METRIC-framework, we found several frequently mentioned dataset properties which we, for our purpose, want to separate from the METRIC-framework. We summarise these additional properties under a separate cluster called data management (Fig. 4). The attributes included in this cluster ensure that a dataset is well documented and both legally and effectively usable. In particular, it includes the properties documentation, security and privacy, as well as the well-established FAIR-Principles199. Appropriate documentation of datasets is the topic of multiple initiatives42,43,44,45,46,47 that give guidance for the data creator and handler. The METRIC-framework, on the other hand, is targeted towards AI developers. It evaluates the suitability of the content of the data for a specific ML task, which is greatly facilitated by appropriate documentation but does not depend on it. Similarly, the FAIR-principles199, requiring data to be findable, accessible, interoperable and reusable, are vital for evaluating datasets for general purposes but are not included in the METRIC-framework since the question of fit for a specific purpose can only be asked when a dataset has already been successfully obtained. Security is another important aspect of data management: Who can access and edit the data? Can it be manipulated? Again, such questions concern the handling of the data, not the evaluation of its content. Finally, privacy (data privacy and patient privacy) is a delicate and heavily discussed topic in the context of healthcare. However, we separate these issues from the METRIC-framework since they concern data collection, creation and handling. We note that aspects such as anonymisation or pseudonymisation may impact the quality of the content of a dataset by, e.g., removing information167. However, the METRIC-framework is designed to evaluate the resulting dataset with respect to its usefulness for a specific task, not the quality of the modifications. Hence, while these properties play a central role in the creation, handling, management and acquisition of data, the METRIC-framework is targeted at the content of a dataset since that is the part the ML algorithm learns from. Therefore, we see the data management cluster as a prerequisite for data quality assessment by the METRIC-framework, which itself divides the concept of data quality for the content of a dataset into five clusters: measurement process, timeliness, representativeness, informativeness and consistency. A summary of the characteristics and key aspects of all five clusters is given in Table 7.

Fig. 4: METRIC-Framework in relation to data management.
figure 4

The cluster data management is concerned with the effective usage of the dataset. It includes basic requirements for the dataset but does not address data quality issues regarding its content. Therefore, it can be seen as a prerequisite for assessment using the METRIC-framework. Figuratively speaking, the data management cluster serves as a stable foundation for the wheel of data quality.

Table 7 Key characteristics of each of the five clusters of the METRIC-framework

Measurement process

The cluster measurement process captures factors that influence uncertainty during the data acquisition process. Two of the dimensions within this cluster differentiate between technical errors originating from devices during measurement (see device error) and errors induced by humans during, e.g., data handling, feature selection or data labelling (see human-induced error). For the dimension device error, we distinguish between the subdimension accuracy, the systematic deviation from the ground truth (also called bias), and the subdimension precision, the variance of the data around a mean value (also called noise). In practice, a ground truth for medical data is most often not attainable, making accuracy evaluation impossible. In that case, the level and structure of noise in the training data should be compared to the expected noise in the data after AI deployment. If the training data only contains low noise but the AI is utilised in clinical practice on data with much higher noise levels, the performance of the AI application might not be sufficient since the model did not face suitable error characteristics during training. Therefore, lower-noise data is not necessarily better, and adding noise to the training data might in some instances even improve performance200,201,202. The errors belonging to the dimension human-induced error are of a fundamentally different nature and need to be treated accordingly. This type of error includes human carelessness and outliers in the dataset due to (unintentional) human mistakes. The final subdimension, noisy labels, is one of the most relevant topics in current ML research157,159,169,182. Since supervised learning paradigms are prevalent in the medical domain, proper feature selection and reliable labelling are indispensable. However, human decision making can be highly irrational and subjective, especially in the medical context203,204,205, representing one of various sources of labelling noise206. Among expert annotators there is often considerable variability206,207. Even in the most common (non-medical) datasets of ML (e.g., MNIST208, CIFAR-100209, Fashion-MNIST210) there is a significant percentage of wrong labels211,212. In contrast to the precision of instruments, noise in human judgements is demanding to assess and requires so-called noise audits to identify different factors, like pattern noise and occasion noise in the medical decision process213. Such intra- and inter-observer variability has always been a highly important topic in many medical disciplines, e.g., in radiology, where guidelines, training and consensus reading approaches are used to reduce noise214.
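As a hypothetical illustration of the distinction between accuracy (bias) and precision (noise), and of quantifying inter-observer variability, the following Python sketch simulates repeated measurements against a known reference value and computes Cohen's kappa for two simulated annotators; all numbers are invented for demonstration purposes.

import numpy as np

rng = np.random.default_rng(1)

# Device error: repeated measurements of a phantom with a known reference value
reference = 120.0                                     # e.g., a phantom value in arbitrary units
measurements = reference + 5.0 + rng.normal(0.0, 3.0, size=200)
bias = measurements.mean() - reference                # accuracy: systematic deviation from ground truth
noise = measurements.std(ddof=1)                      # precision: spread of the measurements

# Human-induced error: agreement between two annotators on the same cases (Cohen's kappa)
def cohen_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    classes = np.unique(np.concatenate([a, b]))
    p_observed = (a == b).mean()
    p_expected = sum((a == c).mean() * (b == c).mean() for c in classes)
    return (p_observed - p_expected) / (1.0 - p_expected)

annotator_1 = rng.integers(0, 2, 100)                                          # binary labels
annotator_2 = np.where(rng.random(100) < 0.85, annotator_1, 1 - annotator_1)   # ~15% disagreement
print(f"bias={bias:.2f}, noise={noise:.2f}, kappa={cohen_kappa(annotator_1, annotator_2):.2f}")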

Another issue that frequently occurs in the data acquisition process and which plays an important role in ML is the absence of data values for unknown reasons. We follow the ML vocabulary by capturing this quality issue with the dimension completeness, while noting that outside of ML contexts, this term is commonly used to describe representativeness, coverage or variety. Most prominently, Wang et al.57 define completeness as ‘breadth, depth, and scope of information’. This definition has been picked up by other researchers as well100,106,126. In ML, however, completeness is usually measured by the ratio of missing to total values. Apart from the mostly quantitative dimensions within the cluster, the dimension source credibility is concerned with mostly qualitative characteristics. On the one hand, it includes the question of whether the measured data can be trusted based on the expertise of people involved in data measurement, processing and handling. On the other hand, the subdimension traceability evaluates whether changes from original data to its current state are documented. Being aware of modifications such as the exclusion of outliers, automated image processing in medical imaging or data normalisation, and of the utilised algorithms, is necessary for understanding the composition of the data. Finally, the subdimension data poisoning considers whether the data was intentionally corrupted (e.g., adversarial attacks) to cause distorted outcomes. The entire cluster measurement process is crucial for data quality evaluation in the medical field since errors may propagate through the ML model and lead to false diagnoses or incorrect treatment of patients.
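A minimal sketch of the ML notion of completeness as the ratio of missing to total values, using pandas on a small hypothetical EHR extract (column names and values are purely illustrative):

import numpy as np
import pandas as pd

# Hypothetical EHR extract with missing values
df = pd.DataFrame({
    "age": [54, 61, np.nan, 47],
    "systolic_bp": [130, np.nan, np.nan, 118],
    "diagnosis": ["I10", "E11", "I10", None],
})

missing_per_column = df.isna().mean()                  # fraction of missing values per feature
overall_completeness = 1.0 - df.isna().values.mean()   # completeness of the whole table
print(missing_per_column, overall_completeness, sep="\n")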

We note that special consideration has to be given to the field of medical imaging within the measurement process cluster because many imaging devices are not classical measurement devices. For instance, in current radiological practice, decisions are still based mainly on visual inspection of images, and ratings of diseases and therapy effects are often given in qualitative terms such as ‘enlarged’, ‘smaller’ or ‘enhanced’. This places a lot of importance on the qualitative subdimensions source credibility and expertise with respect to quality assessment in such use cases. However, over the last two decades significant efforts have been made to establish quantitative imaging biomarkers in order to transform scanners more into measurement devices that quantify biophysical parameters, like flow, perfusion, diffusion or elasticity. Such quantitative imaging approaches reduce the operator dependency and enable more quantitative evaluation in the dimension device error. Worldwide alliances such as the Quantitative Imaging Biomarkers Alliance (QIBA), launched in 2007 by the Radiological Society of North America215 and now replaced by the Quantitative Imaging Committee (QUIC), the Quantitative Imaging Network (QIN) of the National Cancer Institute in the US216 or the European Imaging Biomarkers Alliance (EIBALL) of the European Society of Radiology217 are committed to making this transformation.

Timeliness

Since medical knowledge and understanding are subject to constant development, it is important to investigate the cluster timeliness, which indicates whether the point in time at which the dataset is used, relative to the points in time at which it was created and updated, is appropriate for the task at hand. Indications for diagnoses based on medical data may have changed since a dataset was created and labelled, and changes in coding systems (such as the transition from ICD-9 to ICD-10 or ICD-9-CM to ICD-10-CM) may affect mortality and injury statistics218,219. The age of the data dictates whether such investigations are necessary. In such cases, the labels or standards utilised would then have to be appropriately updated to satisfy the subdimension currency. Furthermore, knowledge about the subdimension age might provide information about the precision and accuracy of the measurement as it gives insight into the technology used during data acquisition.
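A simple illustration of how age and currency could be checked in practice is sketched below; the ten-year threshold and the required coding system are hypothetical, task-dependent choices rather than recommendations.

from datetime import date

# Hypothetical records with acquisition date and coding system
records = [
    {"id": 1, "acquired": date(2012, 3, 1), "coding": "ICD-9"},
    {"id": 2, "acquired": date(2021, 7, 15), "coding": "ICD-10"},
]

max_age_years = 10            # illustrative, task-dependent threshold for the subdimension age
required_coding = "ICD-10"    # illustrative currency requirement

for r in records:
    age_years = (date.today() - r["acquired"]).days / 365.25
    stale = age_years > max_age_years or r["coding"] != required_coding
    print(r["id"], f"age={age_years:.1f}y", "needs review/update" if stale else "current")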

Representativeness

Another central cluster, especially for medical applications, is representativeness. Its dimensions are concerned with the extent to which the dataset represents the targeted population (such as patients) for which the application is intended. Whether the population of the dataset covers a sufficient range in terms of age, sex, race or other background information is the topic of the subdimension variety in demographics contained within the dimension variety. This dimension also contains the subdimension variety of data sources, concerned with questions such as: Does the data originate from a single site? Were the measurements done with devices from the same or different manufacturers? Appropriately investigating such questions can provide a strong indication for the applicability and generalisability of the ML application in different environments220,221,222,223. The dimension depth of data is one of the main topics of the ML papers in our literature corpus. Apart from the subdimension dataset size already discussed in the previous section, this dimension also includes the subdimension granularity, which considers whether the level of detail (e.g., the resolution of image data) is sufficient for the application, as well as the subdimension coverage, which investigates whether sub-populations (e.g., specific age groups) are still diverse by themselves (e.g., still contain all possible diagnoses in case of classification applications). Finally, the highly discussed dimension target class balance reflects the technical requirements of ML140,141,144,150,159. An algorithm must learn patterns for specific classes from the training data. However, strong imbalances in the class ratio could be caused by, e.g., rare diseases. In order to still be able to properly learn corresponding patterns, it may be helpful to deliberately overrepresent rare classes in the dataset instead of matching their real-world distribution224,225.
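The following sketch illustrates checking target class balance and deliberately oversampling a rare class to a chosen target ratio; the toy dataset, the 5% prevalence and the 30% target ratio are purely illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

labels = np.array([0] * 950 + [1] * 50)       # rare disease present in 5% of cases
features = rng.normal(size=(1000, 8))         # toy feature matrix

counts = np.bincount(labels)
print("class distribution:", counts / counts.sum())

# Random oversampling of the rare class up to a chosen target ratio
target_ratio = 0.3
rare_idx = np.flatnonzero(labels == 1)
n_extra = int((target_ratio * len(labels) - len(rare_idx)) / (1 - target_ratio))
extra = rng.choice(rare_idx, size=n_extra, replace=True)
features_balanced = np.concatenate([features, features[extra]])
labels_balanced = np.concatenate([labels, labels[extra]])
print("after oversampling:", np.bincount(labels_balanced) / len(labels_balanced))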

Informativeness

The cluster informativeness considers the connection between the data and the information it provides and whether the data does so in a clear, compact and beneficial way. First, the dimension understandability considers whether the information in the data is easily comprehended. Second, the dimension redundancy investigates whether such information is concisely communicated (see subdimension conciseness) or whether redundant information, such as duplicate records, is present (see subdimension uniqueness). The dimension informative missingness answers the question of whether the patterns of missing values provide additional information. Che et al.135 find an informative pattern in the case of the MIMIC-III critical care dataset226, which displays a correlation between missing rates of variables and ICD9-diagnosis labels. Missingness patterns are categorised by the literature into either not missing at random (NMAR), missing at random (MAR) or missing completely at random (MCAR)227,228. Finally, feature importance is concerned with the overall relevance of the features for the task at hand and, moreover, with the value each feature provides for the performance of an ML application, since the quantity of data has to be balanced with computational capability. Valuable features might in many cases be as important as dataset size229, which is a frequently discussed topic in the data-centric AI community230.
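As a hypothetical example of inspecting uniqueness and informative missingness, the sketch below computes the duplicate rate of a toy dataset and compares the missing rate of a laboratory value between diagnosis groups; the simulated missingness mechanism is an assumption made purely for illustration.

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Toy dataset in which a lab value is missing more often for one diagnosis group
n = 1000
diagnosis = rng.integers(0, 2, n)
lab_value = rng.normal(size=n)
missing = rng.random(n) < np.where(diagnosis == 1, 0.4, 0.1)   # informative missingness
lab_value[missing] = np.nan
df = pd.DataFrame({"diagnosis": diagnosis, "lab_value": lab_value})

# Uniqueness: fraction of exact duplicate records
duplicate_rate = df.duplicated().mean()

# Informative missingness: does the missing rate differ between diagnosis groups?
missing_rate_by_class = df["lab_value"].isna().groupby(df["diagnosis"]).mean()
print(f"duplicate rate: {duplicate_rate:.3f}")
print(missing_rate_by_class)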

Consistency

The dimensions belonging to the cluster consistency illuminate the topic of consistent data presentation from three perspectives. Rule-based consistency summarises subdimensions concerned with format (syntactic consistency), which includes the fundamental and well-discussed topic of data schema106, and with the conformity to standards and laws (compliance). These subdimensions ensure that the dataset is easily processable on the one hand and comparable and legally correct on the other. Logical consistency evaluates whether the content of the dataset is free of contradictions, both within the dataset (e.g., a patient without kidneys that is diagnosed with kidney stones) and in relation to real-world knowledge (e.g., a 200-year-old patient). The last dimension of the cluster, distribution consistency, concerns the distributions of relevant subsets of the total dataset and their statistical properties. While the subdimension homogeneity evaluates whether subsets have similar or different statistical properties at the same point in time (e.g., can data from different hospitals be identified by statistics?), the subdimension distribution drift deals with varying distributions at different time points. This subdimension can be neglected if the dataset is not continuously changing over time, but distribution drift is sometimes overlooked due to a lack of model surveillance. It is therefore a prominent research topic145, and this risk of going unnoticed further underlines the importance of distribution drift for medical applications93.
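A minimal sketch of how logical consistency rules and the homogeneity of site-specific subsets could be checked is given below; the rules, the simulated site offset and the use of a two-sample Kolmogorov–Smirnov test are illustrative choices, not prescriptions of the framework.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

# Logical consistency: simple rule-based checks on hypothetical patient records
records = [
    {"age": 63, "has_kidneys": True, "diagnosis": "kidney stones"},
    {"age": 200, "has_kidneys": True, "diagnosis": "hypertension"},
    {"age": 55, "has_kidneys": False, "diagnosis": "kidney stones"},
]
for r in records:
    violations = []
    if not (0 <= r["age"] <= 120):
        violations.append("implausible age")
    if r["diagnosis"] == "kidney stones" and not r["has_kidneys"]:
        violations.append("diagnosis contradicts anatomy")
    print(r, violations)

# Homogeneity: can two sites be distinguished by the distribution of a measurement?
site_a = rng.normal(loc=120, scale=10, size=500)
site_b = rng.normal(loc=128, scale=10, size=500)   # systematic offset between sites/scanners
statistic, p_value = ks_2samp(site_a, site_b)
print(f"KS statistic={statistic:.3f}, p={p_value:.3g}")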

Discussion

The METRIC-framework (Fig. 3) represents a comprehensive system of data quality dimensions for evaluating the content of medical training data with respect to an intended ML task. We stress again that these dimensions should for now be regarded as awareness dimensions. They provide a guideline along which developers should familiarise themselves with their data. Knowledge about these characteristics is helpful for recognising the reasons for the behaviour of an AI system. Understanding this connection enables developers to improve data acquisition and selection, which may help in reducing biases, increasing robustness and facilitating interpretability, and thus has the potential to drastically improve the AI’s trustworthiness.

With training data being the basis for almost all medical AI applications, the assessment of its quality is gaining more and more attention. However, we note that providing a division of the term data quality into data quality dimensions is only the first step on the way to overall data quality assessment. The next step will be to equip each data quality dimension with quantitative or qualitative measures to describe its state. The result of this measure then has to be evaluated with respect to the question: Is the state of the dimension appropriate for the desired AI algorithm and its application? These three steps (choosing a measure, obtaining a result, evaluating its appropriateness for the desired task) can be applied to each dimension and subdimension. Appropriately combining the individual outcomes can potentially serve as a basis for a measure of the overall data quality in future work.
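The three steps could, for example, be organised programmatically as sketched below; the structure, the names and the completeness threshold are illustrative assumptions and not part of the METRIC-framework itself.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DimensionAssessment:
    name: str
    measure: Callable[[Any], float]            # step 1: the chosen (quantitative) measure
    is_appropriate: Callable[[float], bool]    # step 3: use-case dependent evaluation

def assess(dataset, assessments):
    report = {}
    for a in assessments:
        value = a.measure(dataset)             # step 2: obtain the result
        report[a.name] = (value, a.is_appropriate(value))
    return report

# Toy dataset represented as a list of records with possible None values
toy_data = [{"age": 54, "bp": 130}, {"age": None, "bp": 118}, {"age": 61, "bp": None}]

def completeness(data):
    values = [v for rec in data for v in rec.values()]
    return 1.0 - sum(v is None for v in values) / len(values)

assessments = [DimensionAssessment("completeness", completeness, lambda v: v >= 0.9)]
print(assess(toy_data, assessments))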

So far, the dimensions in the METRIC-framework are not ranked in any way. However, it is clear that some of them are more important than others. Therefore, some dimensions deserve more attention in the assessment process or might even be a criterion for exclusion of a dataset for a certain task. These dimensions should be among the first to be assessed in practice. On the other hand, some dimensions are much more difficult to measure and evaluate than others. This can be due to their qualitative nature, the complexity of the statistical measure, the degree of use-case dependence or the expert knowledge that is needed for the assessment, to name a few factors. These considerations are of central interest for the development of a complete data quality assessment and examination process.

In Fig. 5, we provide insights that should be taken into consideration when practically assessing data quality. We classify each of the 15 awareness dimensions along two different properties. On the one hand, we estimate whether a dimension requires mostly quantitative or qualitative measures. We observe that about half of the dimensions require mostly quantitative measures while a fifth necessitate more manual inspection by qualitative measures (see left-hand side of Fig. 5). Being able to choose quantitative measures typically implies more objectivity and enables automation, two desirable properties for quality assessment. Dimensions categorised as mostly qualitatively measurable or requiring a mixture of quantitative and qualitative input will typically require specific domain knowledge from the medical field. Such domain knowledge can be difficult to obtain and expensive.

Fig. 5: Categorisation of the METRIC-framework.
figure 5

Categorisation of dimensions along the properties quantitative vs. qualitative measure (left) and use case dependence for evaluating data quality (right). The affiliation to a category is colour-coded. The colour scale is presented in the inner circle.

On the other hand, we consider whether the state of a dimension or the evaluation of its appropriateness level is use case dependent (see right-hand side of Fig. 5). This is of interest to developers as use case dependent dimensions require not only additional knowledge, work and time during quality assessment but also during quality improvement of data. Our findings suggest a division of the wheel of data quality after categorising all 15 dimensions. The clusters representativeness and timeliness as well as the dimensions device error and feature importance belong to the group of use case dependent dimensions. Whether a dataset is representative of the targeted population can only be evaluated with knowledge of the use case. Similarly, the importance of features changes between applications. Whether the age and currency of the data (see dimension timeliness) are appropriate can also differ depending on the task. For instance, the coding standard the data should conform to depends on the application. The newest standards are not necessarily the best if in practice these standards are not implemented (see section on Timeliness). Similarly, reducing noise levels in the data is not necessarily better for all applications. It rather depends on the expected noise levels of the application (see section on Measurement process for more detail).

For an overall assessment of the quality of the dataset, we estimate that on average the dimensions of the representativeness cluster together with the dimensions feature importance, distribution consistency and human-induced error are crucial factors. Ignoring a single one of these dimensions potentially has a proportionally larger effect on the AI application than ignoring other dimensions. This might also depend on the type of ML problem. Actual quantification of the effect of data quality dimensions on ML applications is part of ongoing and future research. Nevertheless, for now we recommend prioritising these six dimensions if it is possible to dedicate time to evaluating or improving a dataset. With the exception of the dimension feature importance, all of the crucial dimensions are simultaneously measured mostly quantitatively, making them primary candidates for software tools designed for improving the quality of datasets.

The importance of data quality for medical ML products is undisputed and gaining more and more attention with ongoing discussions about fairness and trustworthiness. Future regulation and certification guidelines will likely not only cover ML algorithms but also require evaluating the quality of the datasets used for their training and testing. Such inclusion of data quality in regulation requires systematic assessment of medical datasets. The METRIC-framework may serve as a basis for such a systematic assessment of training datasets, for establishing reference datasets, and for designing test datasets during the approval of medical ML products. This has the potential to accelerate the process of bringing new ML products into medical practice.

Methods

Literature review

In order to answer the research question ‘Along which characteristics should data quality be evaluated when employing a dataset for trustworthy AI in medicine?’, we conducted a systematic review following the PRISMA guidelines73. The goal of such a review is to objectively collect the knowledge of a chosen research area by summarising, condensing and expanding the ideas to further its progress. PRISMA reviews commonly follow four main steps: (i) Searching suitable databases with carefully formulated search strings and extracting matching papers; (ii) screening titles and abstracts to include or exclude papers based on predetermined criteria; (iii) extending the literature list by screening titles and abstracts of all referenced papers from the included papers (called ‘snowballing’); (iv) screening the full text of all still included papers with respect to the eligibility criteria to build the final literature corpus.

Search strategy

Our research question aims at combining the knowledge from the field of general data quality frameworks with insights about the effects that the quality of training data has on ML applications in medicine. This should ultimately lead to a novel framework for data quality in the context of medical training data. Therefore, we built a search string that on the one hand targeted papers about data quality frameworks by combining variations of ‘data quality’ with variations of the terms ‘framework’ and ‘dimensions’. On the other hand, we attempted to collect papers about the connection between the quality of training data and the behaviour of a DL application by again combining variations of the term ‘data quality’, but this time with variations of ‘machine learning’, including ‘artificial intelligence’ and ‘deep learning’ (see Search query). We then performed the database search on one general and two thematically suitable online databases: Web of Science, PubMed and ACM Digital Library. We are aware that the choice of databases skews all interpretations to some degree, which is mitigated to some extent by snowballing. All retrieved results were concatenated and duplicates removed, yielding 4633 records.

Search query

The following search string in pseudo-code (visualised in Fig. 6) was executed on the 12th of April 2024 on Web of Science, PubMed and ACM Digital Library:

Fig. 6: Search string visualisation.
figure 6

Visualisation of the keywords and logical connections that formed the search string. Each box can be translated to parentheses in the search string. Keywords inside each box are connected with each other by a logical OR.

(("data quality" OR "data-quality"   OR "data qualities" OR "quality of data"   OR "quality of the data" OR "qualities of data"   OR "qualities of the data" OR "quality of training data"   OR "quality of the training data" OR "quality of ML data"   OR "data bias" OR "data biases"   OR "bias in the data" OR "biases in the data"   OR "data problem" OR "data problems"   OR "problem in the data" OR "problem with the data"   OR "problems with the data" OR "data error"   OR "data errors" OR "error in the data"  )  AND  ("dimension" OR "dimensions"   OR "AI" OR "artificial intelligence"   OR "ML" OR "machine learning"   OR "deep learning"   OR "neural network" OR "neural networks"  ) ) OR ("data quality framework" OR "data quality frameworks"  OR "framework of data quality" OR "framework for data quality" )

The chosen databases supported exact (instead of fuzzy) searches, expressed by quotation marks around keywords. The search was applied to the title and abstract fields of all records of the databases.

Eligibility criteria

Table 8 lists the eligibility criteria that were applied during the various screening steps. Papers were included if they either provided broad-scale data quality frameworks, whether general purpose or specific to a medical application, or if they discussed or quantified the effects of at least one training data quality dimension on DL behaviour. In contrast, papers were excluded if they (i) discussed frameworks specific to non-medical fields, (ii) only considered single or few data quality dimensions without reference to ML, or (iii) focused on the quality of data management or on surveys. No limits were imposed with respect to publication date or publisher source (i.e. peer-reviewed or not), while non-English records and inaccessible records were omitted.

Table 8 Eligibility criteria applied to the screening and full-text assessment processes

We note that in order to be as precise and logical as possible during the practical screening and eligibility checks, we implemented the following eligibility criteria: (I1) Inclusion: No exclusion criteria apply; (I2) Inclusion: Study measures effect of data on DL; (E1) Exclusion: Focus of study is not data quality; (E2) Exclusion: Focus of study is not on general theoretical data quality framework; (E3) Exclusion: Study has high specificity to non-medical field; (E4) Exclusion: Focus of study is quality of data management or surveys. The logic we applied during screening and eligibility check is: If any exclusion criterion applies, the study is excluded, unless an inclusion criterion applies at the same time.

Literature review process

Titles and abstracts from the records of the database search were screened with respect to the eligibility criteria. This was done by two authors independently to mitigate biases. In case of disagreement, consensus was achieved by discussion. If necessary, a third author was consulted to arrive at the final decision. This step reduced the number of records to 165. The snowballing step expands the scope of the literature corpus to make it more independent of the initially chosen databases and search string, which is important to reduce bias. For the process of snowballing, we considered all references from the 165 papers included so far, which resulted in adding 775 records to the literature list. Analogously, title and abstract screening was performed on these new entries with the same criteria and workflow as before, leaving 135 additional papers from snowballing. As a final step, all 300 remaining papers were evaluated on the full text with respect to the eligibility criteria. In the end, 120 entries passed all screening steps. For each retrieved record, the decision whether to include or exclude was documented along with the corresponding eligibility criterion. Each record which had passed the screening was eligible for extracting data quality terms.

Data extraction strategy

In order to introduce a comprehensive data quality framework, the 120 selected records were each read by two authors and all terms that were deemed relevant to describe data quality were extracted. See Table 9 for details on the extracted vocabulary from each record. We discarded terms if (i) their scope was limited to a specialised data source and not transferable to a general framework, (ii) the term referred to the quality of database infrastructure, or (iii) no definition was given and it was impossible to grasp the intended meaning from the context. The accepted terms were copied into an Excel sheet, which served as a starting template for the METRIC-framework. We clustered related concepts into groups according to the terms’ definition or intended meaning. From these small and detailed groups we formed the so-called subdimensions, ensuring that each subdimension is mentioned by at least three references in the literature corpus; otherwise the level of detail was deemed too great, leading to further grouping.

Table 9 List of all publications in the literature corpus with the originally mentioned data quality dimensions mapped to the corresponding dimension or subdimension of the METRIC-framework

It seems that with 461 extracted terms, we are beyond the saturation point for finding new data quality dimensions. From a certain point on, more synonyms do not uncover new concepts. From a bias assessment point of view, the literature that investigates effects of data quality on ML could be skewed towards investigating and reporting dimensions with bigger effects. The risk of missing out on vocabulary due to this is mitigated by the inclusion of broad theoretical frameworks in our literature corpus.

Thorough discussion among all authors about the underlying concepts and definitions of the subdimensions resulted in hierarchically grouping these into dimensions and the dimensions into clusters. In parallel to this grouping, all authors reached consensus on definitions for dimensions and subdimensions of the METRIC-framework. The definitions were adopted from a recent data quality glossary197 if they existed there and met our understanding of the vocabulary in the given context of medical training data. If necessary, we included definitions given by Wang et al.57 in a second iteration. If neither of these two sources suggested an appropriate definition, we captured the meaning of the desired term on the basis of the literature corpus and thus determined its definition in the context of medical training data (see Tables 1–6).

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.