New York, March 27, 2024 – ACM, the Association for Computing Machinery, and IMS, the Institute of Mathematical Statistics, have announced the publication [.]
Machine learning algorithms are widely used for decision making in societally high-stakes settings, from child welfare and criminal justice to healthcare and consumer lending. Recent history has illuminated numerous examples where these algorithms proved unreliable or inequitable. This talk will show how causal inference enables us to evaluate such algorithms’ performance and equity implications more reliably. The first part of the talk demonstrates that standard evaluation procedures fail to address missing data and, as a result, often produce invalid assessments of algorithmic performance. A new evaluation framework is proposed that addresses missing data by using counterfactual techniques to estimate unknown outcomes. Using this framework, we propose counterfactual analogues of common predictive performance and algorithmic fairness metrics tailored to decision-making settings. We provide double machine learning-style estimators for these metrics that achieve
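The missing-data problem the abstract describes can be made concrete with a minimal sketch. In many deployment settings the outcome is only observed for cases where a historical decision was made, so averaging over labeled cases gives a biased performance estimate; a doubly robust (AIPW-style) correction of the kind used in double machine learning can recover the population quantity. The simulation below is illustrative only: the data-generating process, the propensity function, and the use of true (rather than fitted, cross-fitted) nuisance functions are all assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: x is a covariate (e.g. a risk score); the historical
# decision d determines whether the outcome y is observed, and d depends
# on x, so the labeled subsample is not representative of the population.
x = rng.uniform(0.0, 1.0, n)
e = 0.2 + 0.6 * x                  # true propensity P(D = 1 | x)
d = rng.binomial(1, e)
mu = 2.0 * x                       # true outcome regression E[Y | x]
y = mu + rng.normal(0.0, 1.0, n)   # outcome; observed only when d == 1

# Naive evaluation: average the outcome over labeled cases only.
# Biased here, because high-x cases are over-represented among d == 1.
naive = y[d == 1].mean()

# Doubly robust (AIPW) estimate of E[Y] over everyone, combining an
# outcome model and a propensity model. For illustration we plug in the
# true nuisance functions; in practice both would be fit with flexible
# ML and cross-fitting, as in double machine learning.
mu_hat, e_hat = mu, e
aipw = np.mean(mu_hat + d * (y - mu_hat) / e_hat)

# True population mean: E[Y] = E[2x] = 1.0. The naive estimate
# concentrates near 1.2 under this design, while AIPW is near 1.0.
print(f"naive = {naive:.3f}, aipw = {aipw:.3f}")
```

The same correction template applies to other evaluation targets (error rates, group-wise fairness metrics) by replacing the outcome with the relevant loss or indicator.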
Information on individuals' mobility (where they go, as measured by their smartphones) has been used widely in devising and evaluating responses to COVID-19, including how to target public health resources. Yet little attention has been paid to how reliable these data are and what sorts of demographic bias they possess. A new study tested the reliability and bias of widely used mobility data, finding that older and non-White voters are less likely to be captured by these data. Allocating public health resources based on such information could therefore disproportionately harm high-risk elderly and minority groups.
The study, by researchers at Carnegie Mellon University (CMU) and Stanford University, appears in the