Christopher A. Choquette-Choo

According to our database, Christopher A. Choquette-Choo authored at least 59 papers between 2019 and 2025.

Bibliography

2025
Efficient and Privacy-Preserving Soft Prompt Transfer for LLMs.
CoRR, June 2025

Correlated Noise Mechanisms for Differentially Private Learning.
CoRR, June 2025

Strong Membership Inference Attacks on Massive Datasets and (Moderately) Large Language Models.
CoRR, May 2025

Lessons from Defending Gemini Against Indirect Prompt Injections.
CoRR, May 2025

LLMs unlock new paths to monetizing exploits.
CoRR, May 2025

Gemma 3 Technical Report.
CoRR, March 2025

Language Models May Verbatim Complete Text They Were Not Explicitly Trained On.
CoRR, March 2025

Scaling Laws for Differentially Private Language Models.
CoRR, January 2025

Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards.
CoRR, January 2025

Measuring memorization in language models via probabilistic extraction.
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, 2025

Recite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Privacy Auditing of Large Language Models.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Scalable Extraction of Training Data from Aligned, Production Language Models.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Near-Exact Privacy Amplification for Matrix Mechanisms.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Near-Optimal Rates for O(1)-Smooth DP-SCO with a Single Epoch and Large Batches.
Proceedings of the International Conference on Algorithmic Learning Theory, 2025

Privacy Ripple Effects from Adding or Removing Personal Information in Language Model Training.
Proceedings of the Findings of the Association for Computational Linguistics, 2025

2024
Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice.
CoRR, 2024

Gemma 2: Improving Open Language Models at a Practical Size.
CoRR, 2024

CodeGemma: Open Code Models Based on Gemma.
CoRR, 2024

Optimal Rates for DP-SCO with a Single Epoch and Large Batches.
CoRR, 2024

Phantom: General Trigger Attacks on Retrieval Augmented Language Generation.
CoRR, 2024

Gemma: Open Models Based on Gemini Research and Technology.
CoRR, 2024

Privacy Side Channels in Machine Learning Systems.
Proceedings of the 33rd USENIX Security Symposium, 2024

Poisoning Web-Scale Training Datasets is Practical.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Auditing Private Prediction.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Teach LLMs to Phish: Stealing Private Information from Language Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Privacy Amplification for Matrix Mechanisms.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Correlated Noise Provably Beats Independent Noise for Differentially Private Learning.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

User Inference Attacks on Large Language Models.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

2023
Private Multi-Winner Voting for Machine Learning.
Proceedings on Privacy Enhancing Technologies, January 2023

Scalable Extraction of Training Data from (Production) Language Models.
CoRR, 2023

Report of the 1st Workshop on Generative AI and Law.
CoRR, 2023

Robust and Actively Secure Serverless Collaborative Learning.
CoRR, 2023

MADLAD-400: A Multilingual And Document-Level Large Audited Dataset.
CoRR, 2023

Are aligned neural networks adversarially aligned?
CoRR, 2023

(Amplified) Banded Matrix Factorization: A unified approach to private training.
CoRR, 2023

PaLM 2 Technical Report.
CoRR, 2023

Students Parrot Their Teachers: Membership Inference on Model Distillation.
CoRR, 2023

Students Parrot Their Teachers: Membership Inference on Model Distillation.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems, 2023

Robust and Actively Secure Serverless Collaborative Learning.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems, 2023

(Amplified) Banded Matrix Factorization: A unified approach to private training.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems, 2023

Are aligned neural networks adversarially aligned?
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems, 2023

Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy.
Proceedings of the 16th International Natural Language Generation Conference, 2023

Private Federated Learning with Autotuned Compression.
Proceedings of the International Conference on Machine Learning, 2023

Multi-Epoch Matrix Factorization Mechanisms for Private Machine Learning.
Proceedings of the International Conference on Machine Learning, 2023

Proof-of-Learning is Currently More Broken Than You Think.
Proceedings of the 8th IEEE European Symposium on Security and Privacy, 2023

Federated Learning of Gboard Language Models with Differential Privacy.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Industry Track, 2023

2022
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy.
CoRR, 2022

Fine-Tuning with Differential Privacy Necessitates an Additional Hyperparameter Search.
CoRR, 2022

On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning.
CoRR, 2022

The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning.
Proceedings of the International Conference on Machine Learning, 2022

2021
Entangled Watermarks as a Defense against Model Extraction.
Proceedings of the 30th USENIX Security Symposium, 2021

Proof-of-Learning: Definitions and Practice.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

Machine Unlearning.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

Label-Only Membership Inference Attacks.
Proceedings of the 38th International Conference on Machine Learning, 2021

CaPC Learning: Confidential and Private Collaborative Learning.
Proceedings of the 9th International Conference on Learning Representations, 2021

2020
Entangled Watermarks as a Defense against Model Extraction.
CoRR, 2020

2019
A Multi-label, Dual-Output Deep Neural Network for Automated Bug Triaging.
Proceedings of the 18th IEEE International Conference On Machine Learning And Applications, 2019

