• Alexandra Chouldechova

  • Estella Loomis McCandless Assistant Professor of Statistics and Public Policy
  • Heinz College, Carnegie Mellon University
  • Office: Hamburg Hall 2224
  • Email: achould(at)cmu.edu
  • Phone: 412-268-4414


Education:

  • Ph.D. in Statistics, Stanford University, 2014
  • B.Sc. in Mathematical Statistics, University of Toronto, 2005-2009


Research:

My research focuses on fairness in predictive modeling. I work on better understanding how to assess black-box predictors for potentially unanticipated biases that could lead to discriminatory practices. Questions that I am actively investigating include:

  • Under what conditions can disparate impact arise?
  • How can we quantitatively characterize fairness?
  • How can we use such characterizations to develop improved systems that are less likely to result in disparate impact?
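One common quantitative characterization of fairness is error-rate balance: comparing a classifier's false positive and false negative rates across groups, as in the recidivism-prediction work listed below. The sketch here is purely illustrative — the function name and the toy data are hypothetical, not drawn from any of the papers.

```python
import numpy as np

def error_rate_disparity(y_true, y_pred, group):
    """Compute per-group false positive and false negative rates.

    A classifier satisfies error-rate balance when these rates are
    equal across groups; large gaps between groups are one
    quantitative signature of disparate impact.
    """
    rates = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        fpr = np.mean(yp[yt == 0] == 1)  # P(pred = 1 | true = 0, group = g)
        fnr = np.mean(yp[yt == 1] == 0)  # P(pred = 0 | true = 1, group = g)
        rates[g] = (fpr, fnr)
    return rates

# Hypothetical labels and predictions for two groups.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([0, 1, 1, 0, 0, 1, 1, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(error_rate_disparity(y_true, y_pred, group))
```

Here the two groups share the same false positive rate but differ in false negative rate — exactly the kind of asymmetry a single aggregate accuracy number would hide.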

I'm also generally interested in applied statistics and statistical methodology, particularly in large-scale multiple testing and high-dimensional data analysis. My focus in this area is on non-standard testing setups where the hypotheses being tested are data-driven or structured (e.g., spatially or sequentially).
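For readers unfamiliar with multiple testing, the standard baseline that sequential and structured false-discovery-rate methods build on is the Benjamini–Hochberg step-up procedure. The sketch below is a generic textbook implementation for context, not code from any of the papers listed here.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean array marking which hypotheses are rejected
    while controlling the false discovery rate at level alpha.
    """
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    # Find the largest k with p_(k) <= (k/m) * alpha, then reject
    # the hypotheses with the k smallest p-values.
    thresholds = alpha * np.arange(1, m + 1) / m
    below = sorted_p <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # index of largest passing p-value
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, alpha=0.05))
```

With these eight p-values only the two smallest are rejected: the step-up threshold grows linearly with rank, so p-values that look small in isolation can still fail once the number of tests is accounted for.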

Publications and Preprints:

  • Maria De-Arteaga, Riccardo Fogliato, and Alexandra Chouldechova. A case for humans-in-the-loop: decisions in the presence of erroneous algorithmic scores. (To appear, CHI 2020) (arXiv)
  • Amanda Coston, Alexandra Chouldechova, and Edward Kennedy. Counterfactual risk assessment, evaluation, and fairness. (To appear, FAT* 2020) (arXiv)
  • Riccardo Fogliato, Max G'Sell, and Alexandra Chouldechova. Fairness evaluation in the presence of biased noisy labels. (To appear, AISTATS 2020) (pdf)
  • A. Romanov, M. De-Arteaga, H. Wallach, J. Chayes, C. Borgs, A. Chouldechova, S. Geyik, K. Kenthapadi, A. Rumshisky, A. Kalai.
    What's in a name? Reducing bias in bios without access to protected attributes
    In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019) (arXiv)
    Best thematic paper award
  • M. De-Arteaga, A. Romanov, H. Wallach, J. Chayes, C. Borgs, A. Chouldechova, S. Geyik, K. Kenthapadi, A. Kalai. Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting
    In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAT* 2019) (ACM DL)
  • Alexandra Chouldechova and Aaron Roth. A Snapshot of the Frontiers of Fairness in Machine Learning: A Report from Philadelphia.
    Communications of the ACM (2019) (arXiv)
  • Anna Brown, Alexandra Chouldechova, Emily Putnam-Hornstein, Andrew Tobin, and Rhema Vaithianathan. Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-Making in Child Welfare Services
    In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI 2019) (Paper)
    Honorable mention (top 5% of submissions)
  • Zachary Lipton, Alexandra Chouldechova, and Julian McAuley. Does mitigating ML's impact disparity require treatment disparity?
    In Proceedings of the Thirty-second Annual Conference on Neural Information Processing Systems (NIPS 2018) (arXiv)
  • Maria De-Arteaga, Artur Dubrawski, and Alexandra Chouldechova. Learning under selective labels in the presence of expert consistency
    Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2018) (arXiv)
  • Alexandra Chouldechova, Diana Benavides Prado, Oleksandr Fialko, Emily Putnam-Hornstein, Rhema Vaithianathan. A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions.
    Conference on Fairness, Accountability, and Transparency (FAT* 2018). (PMLR)
    Best technical & interdisciplinary paper
  • Alexandra Chouldechova and Max G'Sell. Fairer and more accurate, but for whom?
    Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017) (arXiv)
  • Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments.
    Journal version: Big Data, Special issue on Social and Technical Trade-Offs (2017) (arXiv)
    Earlier workshop version: Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2016) (arXiv)
  • Alexandra Chouldechova and Trevor Hastie. Generalized additive model selection. (Submitted) (arXiv)
  • Kanji, H. D., Chouldechova, A., Harvey, C., Porter, R., Gratrix, M., Faulkner, G., Peek, G. Safety and outcomes of mobile ECMO using a bicaval dual-stage venous catheter.
    ASAIO Journal (2017)
  • Max G'Sell, Stefan Wager, Alexandra Chouldechova, and Robert Tibshirani. Sequential selection procedures and false discovery rate control
    Journal of the Royal Statistical Society: Series B (2016) (arXiv)
  • Alexandra Chouldechova, David Mease. Differences in search engine evaluations between query owners and non-owners.
    In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining (WSDM 2013) (ACM DL)
  • Liu, J., Narsinh, K. H., Lan, F., Wang, L., Nguyen, P. K., Hu, S., Lee, A., Han, L., Gong, Y., Huang, M., Nag, D., Rosenberg, J., Chouldechova, A., Robbins, R. C., Wu, J. C. Early stem cell engraftment predicts late cardiac functional recovery: preclinical insights from molecular imaging.
    Circulation: Cardiovascular Imaging (2012) (PubMed)


Ph.D. Thesis:

  • False Discovery Rate Control for Spatial Data, Stanford University, 2014. (pdf)


Teaching:

Here's a list of the courses I'm currently teaching, along with links to the course webpages.