• Alexandra Chouldechova

  • Estella Loomis McCandless Assistant Professor of Statistics and Public Policy
  • Heinz College, Carnegie Mellon University
  • Office: Hamburg Hall 2224
  • Email: achould(at)cmu.edu
  • Phone: 412-268-4414
  • CV · Google Scholar

Brief Academic Bio

Dr. Alexandra Chouldechova is the Estella Loomis McCandless Assistant Professor of Statistics and Public Policy at Carnegie Mellon University's Heinz College of Information Systems and Public Policy. Her research investigates questions of algorithmic fairness and accountability in data-driven decision-making systems, with a domain focus on criminal justice and human services. Her work has been supported by organizations including the Hillman Foundation, the MacArthur Foundation, and the NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon. She is a member of the executive committee of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), and previously served as a program committee co-chair for the conference.

Dr. Chouldechova is a 2020 Research Fellow with the Partnership on AI, where she is working to understand the factors that drive racial bias in algorithmic risk assessment tools developed for use in pre-trial, parole, and sentencing contexts. She is also a member of the Pittsburgh Task Force on Public Algorithms.

Dr. Chouldechova received her PhD in Statistics from Stanford University and an H.B.Sc. in Mathematical Statistics from the University of Toronto.

Research

My research investigates problems related to fairness in predictive modeling. Much of my work concerns the study of risk assessment tools in domains such as criminal justice, child welfare, health care, and financial services.

My work in this area to date has addressed questions falling into three broad categories:

  • Quantitatively characterizing different notions of algorithmic bias in risk assessment tools, and studying their relationships, their implications, and their estimation when outcome data are missing or mislabeled (a small illustrative sketch follows this list).
  • Developing and critically examining fair learning algorithms.
  • Examining questions of fairness in human-in-the-loop systems: how the introduction of algorithmic tools influences decision-making, and how various factors shape user and community perspectives on algorithmic systems, uptake of those systems, and the quality of the resulting decisions.
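
As a concrete illustration of how these notions of bias can come apart, here is a minimal Python sketch (the data and variable names are invented for this example; this is not code from any deployed tool) that computes three group-conditional metrics for a binary risk classifier:

```python
# Toy illustration of group-conditional fairness metrics for a binary
# risk classifier. All data below are invented for this sketch.
from collections import defaultdict

# (group, predicted_high_risk, observed_outcome) triples -- hypothetical.
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 1, 1), ("B", 1, 1), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]

def group_metrics(records):
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, pred, actual in records:
        key = ("tp" if actual else "fp") if pred else ("fn" if actual else "tn")
        counts[group][key] += 1
    return {
        g: {
            # False positive rate: P(flagged high risk | outcome did not occur)
            "FPR": c["fp"] / (c["fp"] + c["tn"]),
            # False negative rate: P(flagged low risk | outcome occurred)
            "FNR": c["fn"] / (c["fn"] + c["tp"]),
            # Positive predictive value: P(outcome occurred | flagged high risk)
            "PPV": c["tp"] / (c["tp"] + c["fp"]),
        }
        for g, c in counts.items()
    }

for g, m in sorted(group_metrics(records).items()):
    print(g, {k: round(v, 2) for k, v in m.items()})
```

In this toy data the two groups have identical positive predictive values but different false positive and false negative rates. When base rates differ across groups, these criteria cannot in general all be equalized at once, which is the tension examined in the Big Data (2017) paper listed below.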

I'm also interested in applied statistics and statistical methodology, particularly in large-scale multiple testing and high-dimensional data analysis. My focus in this area is on non-standard testing setups where the hypotheses being tested are data-driven or structured (e.g., spatially or sequentially).
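
For concreteness, here is a short sketch of the classical Benjamini-Hochberg step-up procedure for false discovery rate control, the baseline that work on structured or data-driven hypotheses builds on (the p-values are invented for illustration):

```python
# Benjamini-Hochberg step-up procedure for false discovery rate control.
# The p-values below are made up for illustration.

def benjamini_hochberg(pvals, alpha=0.10):
    """Return indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by p-value
    # Find the largest rank k (1-indexed) with p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * alpha:
            k_max = rank
    # Reject every hypothesis whose p-value ranks among the k_max smallest.
    return sorted(order[:k_max])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.50, 0.90]
print(benjamini_hochberg(pvals))  # -> [0, 1, 2, 3, 4, 5]
```

The structured settings mentioned above call for variants of this baseline; for example, the sequential selection procedures in the JRSS-B (2016) paper below adapt FDR control to hypotheses tested in a data-determined order.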

Publications and Preprints:

  • Counterfactual Predictions under Runtime Confounding
    with Amanda Coston and Edward Kennedy
    To appear in Proceedings of the Conference on Neural Information Processing Systems (NeurIPS 2020)
    (arXiv)
  • Hospital Injury Encounters of Children Identified by a Predictive Risk Model for Screening Child Maltreatment Referrals: Evidence from the Allegheny Family Screening Tool
    with Diana Benavides Prado, Emily Putnam-Hornstein, Rhema Vaithianathan and Rachel Berger
    To appear in JAMA Pediatrics (2020)
  • A case for humans-in-the-loop: decisions in the presence of erroneous algorithmic scores
    with Maria De-Arteaga and Riccardo Fogliato
    In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI 2020)
    (arXiv)
  • Counterfactual risk assessment, evaluation, and fairness
    with Amanda Coston, Alan Mishler and Edward Kennedy
    In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT 2020)
    (arXiv)
  • Fairness evaluation in the presence of biased noisy labels
    with Riccardo Fogliato and Max G'Sell
    In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS 2020)
    (arXiv)
  • A Snapshot of the Frontiers of Fairness in Machine Learning: A Report from Philadelphia
    with Aaron Roth
    Communications of the ACM (2020)
    (CACM)
  • What's in a name? Reducing bias in bios without access to protected attributes
    with Alexei Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai
    In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019)
    * Best thematic paper award
    (arXiv)
  • Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting
    with Maria De-Arteaga, Alexei Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Sahin Geyik, Krishnaram Kenthapadi and Adam Kalai
    In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAT* 2019)
    (ACM DL)
  • Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-Making in Child Welfare Services
    with Anna Brown, Emily Putnam-Hornstein, Andrew Tobin and Rhema Vaithianathan
    In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI 2019)
    * Honorable mention (top 5% of submissions)
    (ACM DL)
  • Does mitigating ML's impact disparity require treatment disparity?
    with Zachary Lipton and Julian McAuley
    In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS 2018)
    (arXiv)
  • Learning under selective labels in the presence of expert consistency
    with Maria De-Arteaga and Artur Dubrawski
    Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2018)
    (arXiv)
  • A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions
    with Diana Benavides Prado, Oleksandr Fialko, Emily Putnam-Hornstein and Rhema Vaithianathan
    In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* 2018)
    * Best technical & interdisciplinary paper
    (PMLR)
  • Fairer and more accurate, but for whom?
    with Max G'Sell
    Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017)
    (arXiv)
  • Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
    Journal version: Big Data, Special issue on Social and Technical Trade-Offs (2017)
    Workshop version: Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2016)
    (Journal: arXiv) (Workshop: arXiv)
  • Generalized additive model selection
    with Trevor Hastie
    Technical report
    (arXiv)
  • Safety and outcomes of mobile ECMO using a bicaval dual-stage venous catheter
    with Hussein D Kanji, Chris Harvey, Ephraim O'Dea, Gail Faulkner and Giles Peek
    ASAIO Journal (2017)
  • Sequential selection procedures and false discovery rate control
    with Max G'Sell, Stefan Wager and Robert Tibshirani
    Journal of the Royal Statistical Society: Series B (2016)
    (arXiv)
  • Differences in search engine evaluations between query owners and non-owners
    with David Mease
    In Proceedings of the ACM International Conference on Web Search and Data Mining (WSDM 2013)
    (ACM DL)
  • Early stem cell engraftment predicts late cardiac functional recovery: preclinical insights from molecular imaging
    with Junwei Liu, Kazim H Narsinh, Feng Lan, Li Wang, Patricia K Nguyen, Shijun Hu, Andrew Lee, Leng Han, Yongquan Gong, Mei Huang, Divya Nag, Jarrett Rosenberg, Robert C Robbins, and Joseph C Wu
    Circulation: Cardiovascular Imaging (2012)
    (PubMed)

Thesis:

  • False Discovery Rate Control for Spatial Data, Stanford University, 2014. (pdf)

Teaching

Here's a list of the courses I'm currently teaching, along with links to the course webpages.