Abstract
Fairness in machine learning (ML) emerges as a critical concern as AI systems increasingly influence diverse aspects of society, from healthcare decisions to legal judgments. Many studies show evidence of unfair ML outcomes. However, the current body of literature lacks a statistically validated approach for evaluating the fairness of a deployed ML algorithm against a dataset. This research introduces a novel evaluation approach based on k-fold cross-validation and statistical t-tests to assess the fairness of ML algorithms. The approach was applied across five benchmark datasets using six classical ML algorithms. Considering four definitions of fair ML drawn from the current literature, our analysis showed that the same dataset can generate a fair outcome for one ML algorithm but an unfair result for another. This observation reveals complex, context-dependent fairness issues in ML, complicated further by the varied operational mechanisms of the underlying ML models. Our proposed approach enables researchers to check whether deploying any ML algorithm against a protected attribute within a dataset is fair. We also discuss the broader implications of the proposed approach, highlighting notable variability in its fairness outcomes. Our discussion underscores the need for adaptable fairness definitions and the exploration of methods to enhance the fairness of ensemble approaches, aiming to advance fair ML practices and ensure equitable AI deployment across societal sectors.
Introduction
With the continuous advancement of computational efficiency, artificial intelligence (AI) systems and their applications across a wide range of domains have gained significant acceptance and importance in our everyday lives1,2,3,4. These sophisticated AI-based systems are frequently employed in sensitive environments, contributing to substantial and life-changing decisions. Hence, ensuring that these systems do not show any preferential or prejudicial behaviour towards certain groups or populations is crucial; otherwise, they will be vulnerable to making biased or unfair decisions. Researchers are becoming more aware of the bias inherent in such AI-based systems and the resulting unfair decisions from their real-world commercial applications in various contexts, including hiring5,6 and pretrial detention and release decisions7. Therefore, knowing whether a machine learning (ML) algorithm could generate unfair or biased results before using those results for decision-making is critical. This study aims to develop an approach to evaluate the fairness of a deployed ML algorithm for a given dataset. Although AI encompasses a spectrum of technologies, from rule-based systems to ML algorithms, our focus in this article narrows to ML, a subset of AI where algorithms improve performance through data exposure.
Bias in AI-based systems can arise from various sources and manifest in different forms, each affecting machine learning fairness. Measurement or reporting bias, for example, may occur when systems such as facial recognition technologies are trained on non-representative datasets, leading to higher misidentification rates in underrepresented groups. Representation bias involves data that does not reflect all demographics, such as gender bias in job recommendation systems influenced by historical hiring data8. Sampling bias, such as training creditworthiness predictors solely on urban populations, leads to inaccurate assessments of rural individuals8. Aggregation bias might obscure specific needs within groups, as seen when medical data across ages fail to address elderly-specific health issues8. Linking bias introduces errors by incorporating irrelevant data, such as social media activity in credit scoring, while omitted variable bias involves missing crucial variables, like informal education paths in job screening processes, leading to unfair outcomes8.
Recent efforts to address biases in ML have led to many methodologies and frameworks to enhance reproducibility and fairness in research outcomes. Notable among these are initiatives such as the reproducibility challenge hosted by Princeton University9 and noteworthy contributions to research by others [e.g.,10,11]. Further enriching this landscape, foundational reviews12,13 delve into the challenges and solutions surrounding fairness in ML. Zhang and Sun14 explore innovations in unsupervised learning methods for time series clustering. At the same time, advancements in data processing within health informatics and communication networks are detailed by Ahmed et al.15 and Lakhan et al.16, showcasing how federated learning strategies are applied in complex data environments. These studies collectively provide a comprehensive overview of current challenges and technological developments that influence the field.
Measurement bias originates in how users choose, employ and measure particular features17. An example of this bias has been observed in a software tool used in the courts of the United States for predicting the chance of reoffending. This tool, named the Correctional Offender Management Profiling for Alternative Sanctions, considered ‘prior arrests’ and ‘family or friend arrests’ as proxy variables to quantify the level of ‘riskiness’ or ‘crime’ in the future. Such a consideration can be viewed as a mismeasured proxy since police surveil minority communities more often, resulting in higher arrest rates in those communities. However, given how these rates are assessed and controlled, one should not conclude that people from minority groups are more dangerous simply because they have higher arrest rates17. Representation bias arises if we select a non-representative sample from a population during the data collection phase. Such samples might generate high accuracy on the training data but reveal poor performance when adopted for real-world applications18. Sampling bias is similar to aggregation bias and arises due to the non-random sampling of subgroups within the population. Aggregation bias can be seen in clinical aid tools where a false conclusion may be drawn about an individual based on observations of the entire population. For example, the HbA1c level is widely used to monitor and diagnose diabetes19. However, its value differs significantly and in a complex way across various ethnic groups and genders. Therefore, a model that does not consider these two factors will not be a good fit for all ethnic and gender groups in the population. Linking bias arises when a model uses network attributes about individuals for prediction. A network attribute sometimes does not truly represent the activities and involvement of the underlying node or individual within the network. Wilson et al.20 show that users' interactions, in terms of method of interaction and time, differ significantly from their social link patterns. Such a bias rooted in a network can result from many factors, such as network sampling21,22, which can make a notable change in the underlying network measures. Omitted variable bias occurs when the model does not consider a variable essential for the prediction23,24,25. A specific instance occurred when a model was developed to predict the sales volume of a suburban restaurant with relatively high accuracy. Unexpectedly, the model later showed poor predictive performance, although the values of the model attributes remained almost unchanged. A further investigation revealed that a new restaurant with competitive prices had opened in the same area, which was the main reason for the poorer prediction performance; the original model did not consider this feature.
An ML algorithm can exhibit various fairness levels when applied to different datasets26. A dataset could reveal disparate fairness levels against different ML algorithms. Even for the same dataset, one of its protected attributes (e.g., gender) might significantly lack fairness, while another (e.g., ethnicity) produces a fair outcome. Researchers have followed different approaches to identifying fairness issues. Zhang et al.27 proposed an explorative approach that can discover potential biases, provide the possible underlying reasons for their presence and mitigate the most important one. D’Amour et al.28 used simulation to explore the long-term behaviour of deployed ML-based decision systems. Researchers have also suggested descriptive and prescriptive approaches, such as in29,30,31,32, for addressing ML fairness issues. Nonetheless, there is currently no proposed method capable of statistically establishing the existence of unfairness in supervised ML algorithms. Unlike existing approaches, which primarily explore potential biases or simulate long-term behaviours without establishing statistical proof of fairness or unfairness, our methodology integrates robust statistical testing within a cross-validation framework. Such a methodological approach allows for detecting and quantifying the degree of fairness in a statistically significant manner, considering various protected attributes such as gender and ethnicity. This capability to provide concrete statistical evidence of fairness sets our method apart, underscoring its novelty in a landscape where descriptive and prescriptive approaches have been prevalent but insufficient in statistically validating the fairness of supervised ML algorithms.
Given the shortcomings in current methodologies for evaluating fairness in AI systems, particularly the lack of statistical proof of fairness, we developed the following research objectives:
- Can we develop a robust statistical testing methodology integrated within a cross-validation framework to detect, quantify, and address biases in ML algorithms?
- Can this methodology be empirically validated across various datasets to ensure it effectively tests and demonstrates fairness in ML algorithms, considering different protected attributes such as gender and ethnicity?
- How does the proposed methodology compare to existing approaches primarily focusing on identifying potential biases or simulating long-term behaviours without providing concrete statistical evidence of fairness?
To address these objectives, this research introduces a simple yet original and inventive approach that detects the existence of unfairness in a statistically significant manner by integrating robust statistical testing within a cross-validation setup.
Definition of fair machine learning
Fairness in ML, rooted in the philosophical and psychological discussions of equity and justice, lacks a universal definition within its domain. ML fairness is concerned with ensuring equitable treatment across all individuals, particularly in decision-making contexts that affect groups based on legally protected characteristics, such as gender, ethnicity, and socioeconomic status. To clarify, we differentiate three primary types of fairness: individual fairness, which ensures similar treatment for similar individuals; group fairness, which aims for proportional outcomes across different demographic groups to prevent systemic discrimination; and subgroup fairness (or intersectional fairness), which extends protections to intersecting group identities (e.g., Black women or disabled veterans), ensuring that combined characteristics do not lead to compounded biases8. This categorisation helps articulate the specific applications and implications of fairness, which is crucial for implementing sensitive and just ML practices in diverse real-world applications8. All fairness definitions in ML rely on simple or compound metrics associated with the confusion matrix, as illustrated in Fig. 1. The fairness definitions commonly employed in the algorithmic perspective within the ML context are outlined below:
Definition 1 (Equalised Odds)
For a given dataset, the deployment of a supervised ML algorithm will be fair if, for the protected and unprotected groups (e.g., male and female), it shows equal values for true positive rate (TPR) and false positive rate (FPR)33. TPR is the proportion of actual positive instances correctly identified by a classification model out of the total number of actual positive instances. FPR is the proportion of negative cases incorrectly identified as positive out of the total negative instances.
Definition 2 (Equal Opportunity)
For a given dataset, deploying a supervised ML algorithm will be fair if, for the protected and unprotected groups, it shows equal values for TPR33.
Definition 3 (Treatment Equality)
For a given dataset, deploying a supervised ML algorithm will be fair if, for the protected and unprotected groups, it shows equal values for the ratio between false negatives and false positives34.
Definition 4 (Comprehensive)
Of the two metrics associated with the first fairness definition (TPR and FPR), only the first (TPR) is used in the second definition. The third definition also relies on a single metric: the ratio between false negatives and false positives. This study therefore considered a fourth definition by aggregating all conditions from these three definitions. According to this definition, which offers a comprehensive way to define fairness, employing a supervised ML algorithm on a given dataset will generate a fair outcome if, for the protected and unprotected groups, it shows equal values for (a) TPR, (b) FPR and (c) the ratio between false negatives and false positives.
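For reference, the three metrics underpinning these definitions follow directly from the confusion matrix, and the four fairness conditions can be summarised as equalities between the protected (subscript p) and unprotected (subscript u) groups. The notation below is ours and simply restates the definitions above:

\[
\mathrm{TPR} = \frac{TP}{TP + FN}, \qquad \mathrm{FPR} = \frac{FP}{FP + TN}, \qquad r = \frac{FN}{FP}.
\]

\[
\begin{aligned}
&\text{Equalised Odds:} && \mathrm{TPR}_p = \mathrm{TPR}_u \ \text{and}\ \mathrm{FPR}_p = \mathrm{FPR}_u\\
&\text{Equal Opportunity:} && \mathrm{TPR}_p = \mathrm{TPR}_u\\
&\text{Treatment Equality:} && r_p = r_u\\
&\text{Comprehensive:} && \mathrm{TPR}_p = \mathrm{TPR}_u,\ \mathrm{FPR}_p = \mathrm{FPR}_u\ \text{and}\ r_p = r_u
\end{aligned}
\]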
Proposed fairness evaluation approach
The protected and unprotected groups could show a very high or low difference for the three metrics (TPR, FPR and the ratio between false negatives and false positives) used to define fairness in ML. However, it is impossible to establish a statistically significant difference using only one value instance. We need multiple instances of this value difference to explore whether there is a statistically significant difference between protected and unprotected groups for any of these three metrics.
Our proposed approach is, therefore, designed to generate multiple instances of these three key metrics, enhancing the robustness and reliability of our fairness assessments. To achieve this, we utilise an increased k value in the k-fold cross-validation process during the training phase when implementing a supervised ML algorithm on a specific dataset. In the Discussion section, we outline the criteria for selecting the optimal k value for k-fold cross-validation in our proposed methodology. By setting a higher k value, we ensure that each of the three metrics is calculated multiple times during the validation stage, providing a comprehensive view of the model performance across different subsets of the data. K-fold cross-validation is a well-established technique for evaluating the performance of a predictive model35: the data are first partitioned into k equally sized folds. During each iteration of the validation process, (k-1) of these folds are used to train the model, while the remaining fold is used as a validation set to test the model performance. This cycle is repeated k times, with each fold serving as the validation set once, ensuring that all data points are used for training and validation (Fig. 2). Ultimately, the strength of k-fold cross-validation, coupled with our approach to selecting k, positions our methodology as a robust tool for developing and validating ML models, particularly in applications where fairness and unbiased performance are paramount.
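To make the per-fold metric collection concrete, the following minimal Python sketch (assuming scikit-learn and NumPy; the function name and structure are illustrative, not the authors' released code) computes TPR, FPR and the FN/FP ratio for each validation fold of a single subgroup:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix


def per_fold_metrics(model, X, y, k=20, seed=0):
    """Fit the model k times and return per-fold TPR, FPR and FN/FP ratio.

    X and y are NumPy arrays holding the features and binary labels of one
    subgroup (e.g., only the female records) of the dataset.
    """
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    tpr, fpr, ratio = [], [], []
    for train_idx, val_idx in skf.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        y_pred = model.predict(X[val_idx])
        tn, fp, fn, tp = confusion_matrix(y[val_idx], y_pred, labels=[0, 1]).ravel()
        tpr.append(tp / (tp + fn) if (tp + fn) else np.nan)  # true positive rate
        fpr.append(fp / (fp + tn) if (fp + tn) else np.nan)  # false positive rate
        ratio.append(fn / fp if fp else np.nan)              # FN/FP ratio
    return np.array(tpr), np.array(fpr), np.array(ratio)
```

Each call yields k values per metric for one subgroup; these are the inputs to the statistical comparison described next.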
Once we have multiple instances of these three metrics, we can employ statistical tests to check whether there is a statistically significant difference between the protected and unprotected groups. If the underlying attribute has only two groups, we can apply the independent-sample t-test; otherwise, we use one-way analysis of variance (ANOVA) or the Kruskal–Wallis test36. A t-test is a statistical test used to determine if there is a significant difference between the means of two groups for a given attribute37. One-way ANOVA is an extension of the t-test used to compare the means of three or more groups. The key idea behind one-way ANOVA is to partition the total variability observed in the data into two components: the variability between group means and the variability within each group. If the former is significantly larger, it suggests a significant difference among group means36. The Kruskal–Wallis test is a non-parametric test used to compare the median values of three or more groups; when the assumption of normality or equal variance is not met, it is used as an alternative to ANOVA36. A p-value below a certain threshold for these tests (t-test, ANOVA or Kruskal–Wallis) indicates a statistically significant difference in treatment or outcomes between groups. However, a p-value above this threshold does not confirm the absence of meaningful differences or imply fairness. These tests cannot distinguish between two test outcomes with p-values of 0.06 and 0.46 when 0.05 is considered the significance level, which is a potential limitation of any statistical test. The contextual sensitivity of the underlying data should inform the selection of the p-value threshold (e.g., 0.05, 0.01 or 0.001) for determining statistical significance.
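The choice among the t-test, one-way ANOVA and the Kruskal–Wallis test can be scripted directly. The sketch below uses scipy.stats; the helper name, dictionary input and the boolean flag for switching to the non-parametric test are our illustrative assumptions:

```python
from scipy import stats


def compare_groups(metric_values_by_group, alpha=0.05, parametric=True):
    """Test whether per-fold metric values differ significantly across groups.

    metric_values_by_group maps each group (e.g., 'male', 'female') to its
    per-fold values for one metric (TPR, FPR or the FN/FP ratio).
    """
    samples = list(metric_values_by_group.values())
    if len(samples) == 2:
        _, p_value = stats.ttest_ind(*samples)    # independent-sample t-test
    elif parametric:
        _, p_value = stats.f_oneway(*samples)     # one-way ANOVA for 3+ groups
    else:
        _, p_value = stats.kruskal(*samples)      # Kruskal-Wallis fallback
    return p_value, p_value < alpha               # True -> significant difference
```

A returned flag of True for any of the three metrics would indicate a statistically significant difference between groups at the chosen threshold.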
To illustrate thoroughly how the suggested approach operates, consider a dataset containing 12 attributes, one of which is race, having two potential values (white and black). The target attribute is a binary variable indicating the approval or denial of a home loan application. We created an AI system employing the support vector machine (SVM) algorithm. This system can determine the approval or rejection of a loan application based on the provided values for these 12 attributes. Suppose we investigate whether the SVM-based AI system produces a fair outcome against the categorical race attribute. We first need to split the data into two subsets: one for white people and the other for black people. Then, for each subgroup, we train the system through the k-fold cross-validation with \(k=20\). This training approach will eventually create 20 values for each of the three metrics (TPR, FPR and the ratio between false negatives and false positives) for each group. Since we have only two subgroups (white and black), we must apply the independent-sample t-test to investigate any statistically significant difference between the two groups for any of these three metrics. A statistically significant result indicates that the developed SVM-based AI system produces unfair outcomes for the race attribute of the given dataset. Figure 3 outlines the steps for this example.
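The example above can be wired together as follows. This sketch reuses the per_fold_metrics and compare_groups helpers from the earlier sketches, assumes the data sit in a pandas DataFrame with a 'race' column and a binary 'approved' target, and uses scikit-learn's SVC with default settings; the file and column names are hypothetical:

```python
import pandas as pd
from sklearn.svm import SVC

# Hypothetical loan dataset: 12 attributes including 'race', binary target 'approved'.
df = pd.read_csv("home_loans.csv")

per_group = {}
for group in ("white", "black"):
    sub = df[df["race"] == group]
    X = sub.drop(columns=["race", "approved"]).to_numpy()
    y = sub["approved"].to_numpy()
    # 20 values of TPR, FPR and the FN/FP ratio for this subgroup
    per_group[group] = per_fold_metrics(SVC(), X, y, k=20)

for i, metric in enumerate(("TPR", "FPR", "FN/FP ratio")):
    p_value, unfair = compare_groups({g: per_group[g][i] for g in per_group})
    print(f"{metric}: p = {p_value:.4f}, significant difference = {unfair}")
```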
Application of the proposed fairness evaluation approach
This study considered five open-access datasets, four from Kaggle and one from the UCI Machine Learning Repository, to demonstrate an application of the proposed fairness evaluation approach. Kaggle is a platform that offers robust tools and resources for the data science and AI community, including over 300,000 open-access datasets38. All four Kaggle datasets are from a disease prediction context and aim to make a binary prediction. Gender is the protected feature for the first three datasets; race is the protected feature for the fourth. The remaining dataset is from the UCI Machine Learning Repository, which compiles more than 650 open-access datasets, providing the ML community with ample resources for empirical exploration39. Race is the protected feature for this dataset. Table 1 details these five datasets. We share the corresponding code on GitHub (https://github.com/haohuilu/fairml/).
We considered six classical ML algorithms to illustrate the application of the proposed fairness evaluation approach: SVM, logistic regression (LR), decision tree (DT), random forest (RF), k-nearest neighbour (KNN) and artificial neural network (ANN). Our proposed approach for assessing fairness can determine whether deploying one or more of these six ML algorithms against any of the five datasets will yield a biased or unfair outcome for the underlying protected attribute (i.e., gender for the first three datasets or race for the last two). We applied the default hyperparameter settings provided by the Scikit-learn library45,46, which minimises the potential for bias that can be introduced through extensive hyperparameter tuning. The supplementary material includes a comprehensive description of all parameter settings and configurations used for the classifiers. Further, we have compiled a comprehensive tabular summary of prominent ML algorithms to enhance the theoretical perspective of our analysis, detailing their respective approaches to quantifying fairness (see Table 2). The table covers various algorithms and the methodologies used to assess fairness across different demographic groups. This study chose a k value of 20 for the first three and fifth datasets. For the fourth one, it is ten since one of the target subgroups has a small number of instances.
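As an illustration of the default-parameter setup, the six classifiers could be instantiated as follows. This is a sketch assuming scikit-learn; the use of MLPClassifier as the ANN and these particular estimator classes reflects common practice rather than the exact configuration listed in the supplementary material:

```python
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# All estimators keep their scikit-learn default hyperparameters.
classifiers = {
    "SVM": SVC(),
    "LR": LogisticRegression(),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "KNN": KNeighborsClassifier(),
    "ANN": MLPClassifier(),
}
```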
Based on the four definitions (as described in Sect. “Definition of Fair Machine Learning”), Table 3 presents the corresponding fairness outcome for the six ML algorithms against the five datasets. The first definition (Equalised Odds) involves two measures (TPR and FPR); hence, it requires two independent-sample t-tests. The second (Equal Opportunity) and third (Treatment Equality) definitions require one t-test each since each considers only one measure for comparison. TPR is the only measure for the second definition, and it is also present in the first definition. The last definition (Comprehensive) needs three t-tests, aggregating all measures from the first three definitions. Supplementary Fig. 1 shows the plots for the three metrics (TPR, FPR, and the ratio between false negatives and false positives) used in these four fairness definitions against the five research datasets. DT and RF reveal a fair outcome against all four fairness definitions for datasets D1, D2 and D5; DT further shows a similar result for dataset D3. ANN shows the same result for datasets D1 and D2, and KNN for datasets D1 and D5.
Among the selected ML algorithms, SVM is the only one that demonstrates unfair outcomes against one or more fairness definitions for all five datasets, as indicated in the second column of Table 3a–e. Notably, SVM exhibits unfair outcomes concerning all four fairness definitions for datasets D4 and D5, for which race is the protected attribute. After SVM, LR shows unfair results most often: it displays at least one unfair result for datasets D1–D4, with datasets D1 and D2 showing an unfair result across all four fairness definitions. Among the datasets, D4 revealed an unfair outcome most often (20 times out of 24), followed by dataset D3 (seven times).
Although our proposed approach is demonstrated primarily for binary classification tasks and single protected variables, it can be applied to more complex scenarios, such as classification tasks with more than two categories and/or multiple protected variables with two or more groups. When dealing with multi-class classification, the only change is in the dimensionality of the confusion matrix, which alters the calculation of metrics such as TPR, FPR, FP, and FN (see the sketch below). However, if we have more than one protected variable (say two), each with two groups, we will have four sets of values (2 × 2) for each of the three metrics (TPR, FPR, and the ratio between false negatives and false positives). Similarly, if we have three protected variables with two groups each, there will be six sets of values (3 × 2) for these three metrics. In such cases, ANOVA or a similar statistical test should be used for statistical comparison instead of the t-test to accommodate the increased complexity.
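For the multi-class case, per-class rates can be recovered from the larger confusion matrix in a one-vs-rest fashion. The sketch below (function name illustrative, assuming scikit-learn and NumPy) shows one way to do this; the resulting per-fold values would then be compared across groups with ANOVA or the Kruskal–Wallis test rather than the t-test, as noted above:

```python
import numpy as np
from sklearn.metrics import confusion_matrix


def one_vs_rest_rates(y_true, y_pred, labels):
    """Per-class TPR, FPR and FN/FP ratio from a multi-class confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    rates = {}
    for i, label in enumerate(labels):
        tp = cm[i, i]
        fn = cm[i, :].sum() - tp   # actual class i predicted as another class
        fp = cm[:, i].sum() - tp   # other classes predicted as class i
        tn = cm.sum() - tp - fn - fp
        rates[label] = {
            "TPR": tp / (tp + fn) if (tp + fn) else np.nan,
            "FPR": fp / (fp + tn) if (fp + tn) else np.nan,
            "FN/FP": fn / fp if fp else np.nan,
        }
    return rates
```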
Discussion
Based on the k-fold cross-validation and the statistical t-test, our study tackles a pertinent research issue within the domain of fair ML by introducing a classical fairness evaluation methodology. We implement the proposed approach on five benchmark datasets, with gender as the protected attribute in three and race as the protected attribute in the remaining two. This study observed variability in fairness outcomes across four different fairness definitions and six ML algorithms for the same dataset.
The selected k value in the k-fold cross-validation within the proposed fairness evaluation approach may vary or be reduced based on factors such as dataset size and subclass statistics. If one of the subgroups based on the underlying protected variable is small (e.g., under 100), a higher k value will leave only a few data instances for the validation phase. For D5, we considered k = 10, while for the other four datasets, it is 20. The ultimate goal of any ML algorithm is to develop a model that will perform well on both the training data and new, unseen data. In this regard, selecting an appropriate k value is crucial. A high value (k = n) can lead to a higher bias since the underlying model has been trained on almost the entire dataset in each fold, with only one instance left for validation47. In ML, bias is the difference between the average prediction of the model and the correct value it is trying to predict. However, the variance, defined as the change in model performance when using different portions of the training data, is likely to be lower with a high k value because the model is trained on an extensive set of data in each fold47. A lower value (k = 2 or 3) could also lead to a higher bias as the model may not capture the underlying patterns within the data effectively. The variance tends to be higher since the model is trained on smaller subsets, making it more sensitive to variations in the training data. Hence, the bias-variance trade-off is significant in real-world ML application contexts. The selected k value of the k-fold cross-validation is a fundamental factor in achieving this trade-off48. Our illustration of the proposed fairness evaluation approach used k values of 10 and 20. This study did not consider the bias-variance trade-off in selecting these values; such consideration could have an impact on the findings reported in Table 3.
The results from the statistical test underscore a pivotal debate in the fairness of ML algorithms: different ML models may exhibit varying degrees of fairness when applied to the same dataset. This variation emphasises the intricate relationship between model architecture and fairness outcomes. For instance, in D1, methods such as DT, RF, KNN and ANN demonstrate fairness, as indicated by "–" (meaning p > 0.05) in Table 3a. However, LR exhibits unfairness across all four definitions, whereas SVM only shows unfairness for Definition 3 (Treatment Equality), illustrating the variability in how different models align with or deviate from fairness criteria. This discrepancy can be attributed to how each model processes the underlying data and the sensitivity of each model to the protected attributes. This scenario opens up a complex landscape where the inherent characteristics of a model significantly influence its fairness, suggesting that no one-size-fits-all solution exists for achieving fair ML.
The debate over varying fairness outcomes for different ML algorithms for the same dataset leads to a broader discussion on defining fairness in ML. Fairness is not a monolithic concept but rather multifaceted, with various definitions applicable depending on the context. The variance in fairness outcomes across different ML models for the same dataset accentuates the challenge of adopting a universal fairness definition. For instance, one definition of fairness may prioritise equal outcomes (Definition 1, Equalised Odds) across groups, while another might emphasise equal opportunities (Definition 2, Equal Opportunity) to achieve these outcomes. This complexity is further compounded by the technical characteristics and assumptions embedded within each ML model, which may align or conflict with a specific fairness definition. One of the key strengths of our proposed methodology is its adaptability to various fairness definitions beyond the four we have explored in this study. It is designed to accommodate future fairness metrics that may differ from the three (TPR, FPR, and the ratio between false negatives and false positives) we currently utilise. This flexibility ensures that our approach remains applicable and relevant as new definitions and metrics emerge.
Further complicating the discussion is the consideration of ensemble approaches, which combine multiple ML models. The variance in individual model fairness outcomes raises the question of how ensemble methods, integrating potentially fair and unfair models according to specific definitions, impact overall fairness. Ensemble methods, such as RF, are designed to improve prediction accuracy by aggregating the predictions of several models49. However, the fairness of these aggregated predictions remains an open question, especially if the ensemble includes a mix of models that individually exhibit both fair and unfair outcomes. This highlights a crucial area for further research: understanding and mitigating potential biases introduced through ensemble methods, which might not only inherit but also amplify the biases of their constituent models.
How a machine learning model works can significantly contribute to its fairness. The architectural intricacies of a model, including how it learns from data and makes predictions, can inherently influence its fairness outcomes. More transparent and interpretable models like DT may offer more precise insights into how fairness is achieved or compromised. In contrast, more complex models, such as deep learning models like ANN, may obscure the mechanisms leading to fair or unfair outcomes. This understanding is pivotal in devising strategies to enhance fairness, such as feature selection, modifying the learning algorithm, or applying post-processing fairness corrections.
The discussion on ML fairness is incomplete without acknowledging that the definition of fairness is context-dependent. Different applications and domains may require different fairness considerations, reflecting varied societal norms and ethical considerations. For example, fairness in a healthcare application might focus on equal predictive accuracy across different racial groups. In contrast, fairness might prioritise equal loan approval rates across genders in a financial application. This diversity necessitates a flexible approach to defining and achieving fairness tailored to the specific needs and impacts of the underlying ML applications.
The fairness of ML algorithms is a multifaceted issue influenced by the choice of model, the definition of fairness, and the application context. This research highlights limitations, including the need to validate fairness assessments against real-world evidence to bolster their robustness. Additionally, the choice of k in k-fold cross-validation emerges as a critical factor influencing t-test results and fairness interpretations. Despite its power in statistical comparison and wide usage in various contexts50, the t-test may not be able to detect small but potentially meaningful differences, especially in scenarios where sample sizes are limited or effect sizes are small. Another possible limitation is that the variability observed in the three metrics could stem from factors other than the algorithmic implementation techniques. This includes, among others, the algorithmic sensitivity to specific subsets of data, the inherent instability of the algorithm, non-determinism in training due to unseeded pseudo-random number generators, and the effects of parallelisation or GPU usage. Future research should focus on these implementation issues and develop more flexible fairness definitions that accommodate diverse ML applications. In addition, we aim to incorporate explainable artificial intelligence techniques to enhance the transparency and understandability of ML-based models, aligning with best practices such as those detailed in the literature51. This inclusion will improve model interpretability and ensure our methodologies adhere to the ethical standards and guidelines established in the field of responsible AI. Moreover, exploring methods to enhance the fairness of ensemble approaches and investigating the impact of model architecture on fairness outcomes is imperative. This endeavour will contribute to advancing fair ML practices and ensure that ML applications enhance equity and justice across all sectors of society.
Conclusion
In conclusion, this research introduces a novel approach to assess fairness in ML algorithms, demonstrating its application across five diverse benchmark datasets. The findings underscore the complexity of achieving fairness, evidenced by the variability in fairness outcomes across different ML models and fairness definitions. Key insights include the lack of a universally fair ML model, the contextual nature of fairness, the challenges posed by ensemble methods, and the impact of model architecture on fairness outcomes. The research highlights the importance of adopting a nuanced perspective on ML fairness that is sensitive to model selection, fairness definitions, and application contexts. Limitations such as the need for further validation with real-world datasets and the influence of parameter selection on fairness assessments suggest areas for future research. These include developing adaptive fairness definitions to suit varied applications, addressing issues related to algorithmic implementation techniques, enhancing fairness in ensemble methods, and exploring the relationship between model architecture and fairness. Addressing these areas will advance the field towards more equitable ML practices, ensuring that AI technologies contribute positively to societal needs across all domains.
Data availability
The datasets analysed during the current study are open-access and available in the Kaggle and UCI Machine Learning repositories.
References
Helm, J. M. et al. Machine learning and artificial intelligence: Definitions, applications, and future directions. Curr. Rev. Musculoskelet. Med. 13, 69–76 (2020).
Lu, H. & Uddin, S. A parameterised model for link prediction using node centrality and similarity measure based on graph embedding. Neurocomputing 593, 127820 (2024).
Uddin, S., Yan, S. & Lu, H. Machine learning and deep learning in project analytics: Methods, applications and research trends. Prod. Plan. Control https://doi.org/10.1080/09537287.2024.2320790 (2024).
Uddin, S. et al. Comorbidity and multimorbidity prediction of major chronic diseases using machine learning and network analytics. Expert Syst. Appl. 205, 117761 (2022).
Bogen, M. and Rieke, A., Help wanted: An examination of hiring algorithms, equity, and bias, in Analysis & Policy Observatory. 2018. p. 1–73.
Cohen, L., Lipton, Z.C., and Mansour, Y. Efficient candidate screening under multiple tests and implications for fairness. in 1st Symposium on Foundations of Responsible Computing. 2019. Dagstuhl Publishing, Germany.
Angwin, J., Larson, J., Mattu, S. & Kirchner, L. Machine bias. In Ethics of data and analytics (eds Angwin, J. et al.) (Auerbach Publications, 2022).
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K. & Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 54(6), 1–35 (2021).
Kapoor, S. & Narayanan, A. Leakage and the reproducibility crisis in machine-learning-based science. Patterns 4(9), 100804 (2023).
Wijata, A. M. & Nalepa, J. Unbiased validation of the algorithms for automatic needle localization in ultrasound-guided breast biopsies. In 2022 IEEE International Conference on Image Processing (ICIP) (eds Wijata, A. M. & Nalepa, J.) (IEEE, 2022).
Nalepa, J., Myller, M. & Kawulok, M. Validating hyperspectral image segmentation. IEEE Geosci. Remote Sens. Lett. 16(8), 1264–1268 (2019).
Pessach, D. & Shmueli, E. A review on fairness in machine learning. ACM Comput. Surv. (CSUR) 55(3), 1–44 (2022).
Caton, S. & Haas, C. Fairness in machine learning: A survey. ACM Comput. Surv. 56(7), 1–38 (2024).
Zhang, N. & Sun, S. Multiview unsupervised shapelet learning for multivariate time series clustering. IEEE Trans. Pattern Anal. Mach. Intell. 45(4), 4981–4996 (2022).
Ahmed, S., Groenli, T.-M., Lakhan, A., Chen, Y. & Liang, G. A reinforcement federated learning based strategy for urinary disease dataset processing. Comput. Biol. Med. 163, 107210 (2023).
Lakhan, A. et al. Federated learning enables intelligent reflecting surface in fog-cloud enabled cellular network. PeerJ Comput. Sci. 7, e758 (2021).
Suresh, H. and Guttag, J., A framework for understanding sources of harm throughout the machine learning life cycle, in Equity and access in algorithms, mechanisms, and optimization. 2021. p. 1–9 (ACM).
Shahbazi, N., Lin, Y., Asudeh, A. & Jagadish, H. Representation bias in data: A survey on identification and resolution techniques. ACM Comput. Surv. https://doi.org/10.1145/3588433 (2023).
Sherwani, S. I., Khan, H. A., Ekhzaimy, A., Masood, A. & Sakharkar, M. K. Significance of HbA1c test in diagnosis and prognosis of diabetic patients. Biomark. Insights 11, BMI.S38440 (2016).
Wilson, C., Boe, B., Sala, A., Puttaswamy, K.P., and Zhao, B.Y. User interactions in social networks and their implications. in Proceedings of the 4th ACM European conference on Computer systems. (ACM) (2009).
González-Bailón, S., Wang, N., Rivero, A., Borge-Holthoefer, J. & Moreno, Y. Assessing the bias in samples of large online networks. Soc. Netw. 38, 16–27 (2014).
Morstatter, F., Pfeffer, J., Liu, H., and Carley, K. Is the sample good enough? Comparing data from Twitter's streaming API with Twitter's firehose. in Proceedings of the international AAAI conference on web and social media. (MIT Press) (2013).
Clarke, K. A. The phantom menace: Omitted variable bias in econometric research. Confl. Manag. Peace Sci. 22(4), 341–352 (2005).
Mustard, D. B. Reexamining criminal behavior: The importance of omitted variable bias. Rev. Econ. Stat. 85(1), 205–211 (2003).
Riegg, S. K. Causal inference and omitted variable bias in financial aid research: Assessing solutions. Rev. High. Educ. 31(3), 329–354 (2008).
Friedler, S.A., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E.P., and Roth, D., A comparative study of fairness-enhancing interventions in machine learning, in Proceedings of the conference on fairness, accountability, and transparency p. 329–338 (ACM) (2019).
Zhang, H., Shahbazi, N., Chu, X., and Asudeh, A., FairRover: explorative model building for fair and responsible machine learning, in Proceedings of the Fifth Workshop on Data Management for End-To-End Machine Learning p. 1–10 (ACM) (2021).
D'Amour, A., Srinivasan, H., Atwood, J., Baljekar, P., Sculley, D., and Halpern, Y., Fairness is not static: deeper understanding of long term fairness via simulation studies, in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. p. 525–534 (ACM) (2020).
Srivastava, M., Heidari, H., and Krause, A., Mathematical notions vs. human perception of fairness: A descriptive approach to fairness for machine learning, in Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining. p. 2459–2468 (ACM) (2019).
Ghani, R., Rodolfa, K.T., Saleiro, P., and Jesus, S., Addressing bias and fairness in machine learning: A practical guide and hands-on tutorial, in Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. p. 5779–5780 (ACM) (2023).
Deng, W.H., Nagireddy, M., Lee, M.S.A., Singh, J., Wu, Z.S., Holstein, K., and Zhu, H., Exploring how machine learning practitioners (try to) use fairness toolkits, in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022. p. 473–484 (ACM) (2022).
Dolata, M., Feuerriegel, S. & Schwabe, G. A sociotechnical view of algorithmic fairness. Inf. Syst. J. 32(4), 754–818 (2022).
Hardt, M., Price, E., and Srebro, N., Equality of opportunity in supervised learning, in Advances in neural information processing systems. p. 3315–3323 (ACM) (2016).
Berk, R., Heidari, H., Jabbari, S., Kearns, M. & Roth, A. Fairness in criminal justice risk assessments: The state of the art. Sociol. Methods Res. 50(1), 3–44 (2021).
Browne, M. W. Cross-validation methods. J. Math. Psychol. 44(1), 108–132 (2000).
Field, A. Discovering statistics using SPSS (Sage Publications Ltd., 2013).
Privitera, G. J. Statistics for the Behavioral Sciences (Sage Publications, 2023).
Kaggle. 2023; Available from: https://www.kaggle.com/
Kelly, M., Longjohn, R., and Nottingham, K. The UCI Machine Learning Repository. 2023; Available from: https://archive.ics.uci.edu
Mustafa, M. Diabetes prediction dataset (Source: Kaggle). 2023; Available from: https://www.kaggle.com/datasets/iammustafatz/diabetes-prediction-dataset/data
Svetlana, U. Cardiovascular disease dataset (Source: Kaggle). 2019; Available from: https://www.kaggle.com/datasets/sulianova/cardiovascular-disease-dataset
Pytlak, K. Key Indicators of Heart Disease. 2024; Available from: https://www.kaggle.com/datasets/kamilpytlak/personal-key-indicators-of-heart-disease/data
Islam, F. Starter: Diabetes 130 US hospitals (Source: Kaggle). 2024; Available from: https://www.kaggle.com/code/fakhrul77/starter-diabetes-130-us-hospitals-for-4e0c2549-f
Tasci, E., Zhuge, Y., Kaur, H., Camphausen, K. & Krauze, A. V. Hierarchical voting-based feature selection and ensemble learning model scheme for glioma grading with clinical and molecular characteristics. Int. J. Mol. Sci. 23(22), 14155 (2022).
Pedregosa, F. et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
Pedregosa, F. et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
Wong, T.-T. & Yeh, P.-Y. Reliable accuracy estimates from k-fold cross validation. IEEE Trans. Knowl. Data Eng. 32(8), 1586–1594 (2019).
Belkin, M., Hsu, D., Ma, S. & Mandal, S. Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proc. Natl. Acad. Sci. 116(32), 15849–15854 (2019).
Breiman, L. Random forests. Mach. Learn. 45(1), 5–32 (2001).
Kim, T. K. T test as a parametric statistic. Korean J. Anesthesiol. 68(6), 540 (2015).
Hryniewska, W. et al. Checklist for responsible deep learning modeling of medical images based on COVID-19 detection studies. Patt. Recognit. 118, 108035 (2021).
Author information
Contributions
SU: Study Conception, Study Design, Data Analysis, Writing, and Reviewer response. HL: Data Analysis, Writing, and Reviewer response. AR: Writing and Reviewer response. JG: Writing and Reviewer response.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Uddin, S., Lu, H., Rahman, A. et al. A novel approach for assessing fairness in deployed machine learning algorithms. Sci Rep 14, 17753 (2024). https://doi.org/10.1038/s41598-024-68651-w