My current focus is on foundations and tools for accountable data-driven systems. The goal is to ensure that data-driven systems employing artificial intelligence and machine learning are not inscrutable black boxes; rather, their operation should be explained in a form that enables trust and protects societal values, including privacy and fairness. For contributions in this area, I am the recipient of the 2018 David P. Casasent Outstanding Research Award from the CMU College of Engineering. My course covers the state of the art in this area for deep learning technologies.
- Our 2016 paper on Algorithmic Transparency via Quantitative Input Influence introduced a method for explainable machine learning that combines techniques from cooperative game theory (e.g., Shapley values) with causal analysis. These methods are now widely used.
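As a rough illustration of the game-theoretic idea behind such influence measures (a minimal sketch, not the paper's QII implementation): each feature's influence can be measured as its Shapley value, i.e., its average marginal contribution to the model's output over all subsets of the other features, with "absent" features replaced by baseline values. The `model`, `x`, and `baseline` names here are illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for a small number of features.

    model: callable taking a feature vector (list) and returning a score.
    x: the input to explain.
    baseline: reference values used when a feature is 'absent'.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (model(with_i) - model(without_i))
    return phi

# Example: for a linear model with a zero baseline, each feature's
# Shapley value recovers its term's contribution (approximately [3, 2, 1]).
model = lambda v: 3 * v[0] + 2 * v[1] + v[2]
print(shapley_values(model, x=[1, 1, 1], baseline=[0, 0, 0]))
```

Exact enumeration is exponential in the number of features; practical tools approximate these averages by sampling.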
- Our 2015 paper on Discrimination in Online Behavioral Advertising was an early demonstration that unfairness in data-driven systems that use machine learning is a real problem; this is now a well-recognized and mainstream area of research and practice.
Note: I am on leave from CMU, working at Truera (formerly AILens), a company I co-founded to enable effective and responsible adoption of artificial intelligence.
- Influence-directed Explanations [Application: Deep Convolutional Networks]
- Algorithmic Transparency via Quantitative Input Influence [The Conversation] [FAT/ML'16 Invited Talk]
- Algorithmic Accountability via Information Flow Experiments [Application: FAQ on Discrimination in Online Behavioral Advertising]
- Bootstrapping Privacy Compliance in Big Data Systems [Application: Web privacy, in particular, deployed compliance tool for Bing]
- Privacy and Contextual Integrity [The Economist] [White House Consumer Privacy Bill of Rights]
Selected Recent Talks
- 2018: The Economist Innovation Summit, Fairness/Gender and Machine Learning@Stanford, Machine Learning and Formal Methods Summit@Oxford, PrivaCI@Princeton, Interpretable Machine Learning Models and Financial Applications@MIT, International Test Conference (AI session), UW-Madison CS Distinguished Lecture Series, CMKL Tech Summit on AI@Thailand
- 2017: DARPA Safe Machine Learning, Data Privacy@Simons Institute, Data Economy@Telecom ParisTech, PLSC@Berkeley, Algorithms and Explanations@NYU
- 2016: FAT/ML@NYU, BigData@CSAIL Data Privacy Series at MIT, Safe AI@CMU + White House OSTP, Formal Methods and Security@PLDI'16, Security and Human Behavior'16@Harvard, Privacy Engineering@Oakland'16, John Mitchell Festschrift@Stanford, Science of Security@CPSWeek'16, FTC PrivacyCon'16
- Accountable Decision Systems [Lead PI; NSF large collaborative involving CMU, Cornell, ICSI]
- Conference on Fairness, Accountability, and Transparency [Steering Committee]
- Foundations and Trends in Privacy and Security [Editor-in-Chief]
- IEEE Computer Security Foundations Symposium [Steering Committee]
- Accountable Protocol Customization [Lead PI; ONR large collaborative involving CMU, Stanford, UPenn]
- CMU Security and Privacy Institute, Principles of Programming, Artificial Intelligence group [Affiliate]
- Carnegie Mellon University Silicon Valley [ECE Leadership Team]