Meta, AI Research Intern. Topic: Privacy-preserving synthetic data generation using large language models (LLMs). Redmond, WA, Summer 2023
Meta, AI Research Intern. Topic: Federated learning for large language models (LLMs). Redmond, WA, Fall 2022
Amazon, Applied Science Intern. Topic: Self-supervised learning for learning-to-rank (LTR). Palo Alto, CA, Summer 2022
Amazon, Applied Science Intern. Topic: Reinforcement learning for sub-same-day delivery optimization. Seattle, WA, Summer 2021
Uber, Research Intern. Topic: Deep radar simulation. San Francisco, CA, Summer 2019
Pretrained deep models outperform GBDTs in Learning-To-Rank under label scarcity. Charlie Hou, Kiran Koshy Thekumparampil, Michael Shavlovsky, Giulia Fanti, Yesh Dattatreya, Sujay Sanghavi. Oral presentation at the ICML 2023 workshop on preference-based learning.
Privately Customizing Prefinetuning to Better Match User Data in Federated Learning. Charlie Hou, Hongyuan Zhan, Akshat Shrivastava, Sid Wang, Sasha Livshits, Giulia Fanti, Daniel Lazar. ICLR 2023 TrustML workshop.
FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning. Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh. ICLR 2022; oral presentation at the ICML-FL workshop, 2021.
Efficient Algorithms for Federated Saddle Point Optimization. Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh. Preprint.
SquirRL: Automating Attack Analysis on Blockchain Incentive Mechanisms with Deep Reinforcement Learning. Charlie Hou*, Mingxun Zhou*, Yan Ji, Phil Daian, Florian Tramer, Giulia Fanti, Ari Juels (*equal contribution). NDSS 2021.
Google Collabs Research Award ($80k grant and $20k in GCP credits), 2022. With Giulia Fanti and Sewoong Oh.
Tiger Chef Champion, 2018
Reviewer, NeurIPS 2023
Reviewer, ICLR 2023