Nicolas Papernot

ORCID: 0000-0001-5078-7233

Affiliations:
  • University of Toronto, Canada


According to our database, Nicolas Papernot authored at least 143 papers between 2014 and 2024.

Bibliography

2024
AI models collapse when trained on recursively generated data.
Nat., July, 2024

Augment then Smooth: Reconciling Differential Privacy with Certified Robustness.
Trans. Mach. Learn. Res., 2024

From Differential Privacy to Bounds on Membership Inference: Less can be More.
Trans. Mach. Learn. Res., 2024

A False Sense of Safety: Unsafe Information Leakage in 'Safe' AI Responses.
CoRR, 2024

UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI.
CoRR, 2024

LLM Dataset Inference: Did you train on my dataset?
CoRR, 2024

Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model.
CoRR, 2024

Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy.
CoRR, 2024

Architectural Neural Backdoors from First Principles.
CoRR, 2024

Regulation Games for Trustworthy Machine Learning.
CoRR, 2024

Unlearnable Algorithms for In-context Learning.
CoRR, 2024

Decentralised, Collaborative, and Privacy-preserving Machine Learning for Multi-Hospital Data.
CoRR, 2024

Exploring Strategies for Guiding Symbolic Analysis with Machine Learning Prediction.
Proceedings of the IEEE International Conference on Software Analysis, Evolution and Reengineering, 2024

Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD.
Proceedings of the 33rd USENIX Security Symposium, 2024

The Fundamental Limits of Least-Privilege Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Position: Fundamental Limitations of LLM Censorship Necessitate New Approaches.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Auditing Private Prediction.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Memorization in Self-Supervised Learning Improves Downstream Generalization.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Confidential-DPproof: Confidential Proof of Differentially Private Training.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias.
Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 2024

2023
Losing Less: A Loss for Differentially Private Deep Learning.
Proc. Priv. Enhancing Technol., July, 2023

Differentially Private Speaker Anonymization.
Proc. Priv. Enhancing Technol., January, 2023

Private Multi-Winner Voting for Machine Learning.
Proc. Priv. Enhancing Technol., January, 2023

Robust and Actively Secure Serverless Collaborative Learning.
CoRR, 2023

Beyond Labeling Oracles: What does it mean to steal ML models?
CoRR, 2023

LLM Censorship: A Machine Learning Challenge or a Computer Security Problem?
CoRR, 2023

When Vision Fails: Text Attacks Against ViT and OCR.
CoRR, 2023

The Curse of Recursion: Training on Generated Data Makes Models Forget.
CoRR, 2023

Challenges towards the Next Frontier in Privacy.
CoRR, 2023

Learning with Impartiality to Walk on the Pareto Frontier of Fairness, Privacy, and Utility.
CoRR, 2023

Is Federated Learning a Practical PET Yet?
CoRR, 2023

Tubes Among Us: Analog Attack Vectors to Compromise Automatic Speaker Identification.
Proceedings of the 32nd USENIX Security Symposium, 2023

Training Private Models That Know What They Don't Know.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Robust and Actively Secure Serverless Collaborative Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Have it your way: Individualized Privacy Assignment for DP-SGD.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Confidential-PROFITT: Confidential PROof of FaIr Training of Trees.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Measuring Forgetting of Memorized Training Examples.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Proof-of-Learning is Currently More Broken Than You Think.
Proceedings of the 8th IEEE European Symposium on Security and Privacy, 2023

Reconstructing Individual Data Points in Federated Learning Hardened with Differential Privacy and Secure Aggregation.
Proceedings of the 8th IEEE European Symposium on Security and Privacy, 2023

When the Curious Abandon Honesty: Federated Learning Is Not Private.
Proceedings of the 8th IEEE European Symposium on Security and Privacy, 2023

Architectural Backdoors in Neural Networks.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

The Adversarial Implications of Variable-Time Inference.
Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, 2023

2022
Adversarial examples for network intrusion detection systems.
J. Comput. Secur., 2022

Learned Systems Security.
CoRR, 2022

Verifiable and Provably Secure Machine Unlearning.
CoRR, 2022

Fine-Tuning with Differential Privacy Necessitates an Additional Hyperparameter Search.
CoRR, 2022

In Differential Privacy, There is Truth: On Vote Leakage in Ensemble Private Learning.
CoRR, 2022

On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning.
CoRR, 2022

Generative Extraction of Audio Classifiers for Speaker Identification.
CoRR, 2022

p-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations.
CoRR, 2022

Efficient Adversarial Training With Data Pruning.
CoRR, 2022

Intrinsic Anomaly Detection for Multi-Variate Time Series.
CoRR, 2022

Selective Classification Via Neural Network Training Dynamics.
CoRR, 2022

Bounding Membership Inference.
CoRR, 2022

Pipe Overflow: Smashing Voice Authentication for Fun and Profit.
CoRR, 2022

On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning.
Proceedings of the 31st USENIX Security Symposium, 2022

Towards More Robust Keyword Spotting for Voice Assistants.
Proceedings of the 31st USENIX Security Symposium, 2022

Bad Characters: Imperceptible NLP Attacks.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

In Differential Privacy, There is Truth: on Vote-Histogram Leakage in Ensemble Private Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Washing The Unwashable: On The (Im)possibility of Fairwashing Detection.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Dataset Inference for Self-Supervised Models.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

The Privacy Onion Effect: Memorization is Relative.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

On the Limitations of Stochastic Pre-processing Defenses.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

On the Difficulty of Defending Self-Supervised Learning against Model Extraction.
Proceedings of the International Conference on Machine Learning, 2022

Hyperparameter Tuning with Renyi Differential Privacy.
Proceedings of the Tenth International Conference on Learning Representations, 2022

A Zest of LIME: Towards Architecture-Independent Model Distances.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Increasing the Cost of Model Extraction with Calibrated Proof of Work.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Unrolling SGD: Understanding Factors Influencing Machine Unlearning.
Proceedings of the 7th IEEE European Symposium on Security and Privacy, 2022

The Role of Randomization in Trustworthy Machine Learning.
Proceedings of the 9th ACM Workshop on Moving Target Defense, 2022

2021
Interpretability in Safety-Critical Financial Trading Systems.
CoRR, 2021

SoK: Machine Learning Governance.
CoRR, 2021

On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples.
CoRR, 2021

Entangled Watermarks as a Defense against Model Extraction.
Proceedings of the 30th USENIX Security Symposium, 2021

Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

Proof-of-Learning: Definitions and Practice.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

Machine Unlearning.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems.
Proceedings of the 42nd IEEE Symposium on Security and Privacy, 2021

Manipulating SGD with Data Ordering Attacks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Accelerating Symbolic Analysis for Android Apps.
Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering, 2021

Markpainting: Adversarial Machine Learning meets Inpainting.
Proceedings of the 38th International Conference on Machine Learning, 2021

Label-Only Membership Inference Attacks.
Proceedings of the 38th International Conference on Machine Learning, 2021

Dataset Inference: Ownership Resolution in Machine Learning.
Proceedings of the 9th International Conference on Learning Representations, 2021

CaPC Learning: Confidential and Private Collaborative Learning.
Proceedings of the 9th International Conference on Learning Representations, 2021

Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings.
Proceedings of the FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021

Sponge Examples: Energy-Latency Attacks on Neural Networks.
Proceedings of the IEEE European Symposium on Security and Privacy, 2021

Fourth International Workshop on Dependable and Secure Machine Learning - DSML 2021.
Proceedings of the 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops, 2021

Data-Free Model Extraction.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021

Tempered Sigmoid Activations for Deep Learning with Differential Privacy.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Adversarial Examples in Constrained Domains.
CoRR, 2020

Not My Deepfake: Towards Plausible Deniability for Machine-Generated Media.
CoRR, 2020

Entangled Watermarks as a Defense against Model Extraction.
CoRR, 2020

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping.
CoRR, 2020

High Accuracy and High Fidelity Extraction of Neural Networks.
Proceedings of the 29th USENIX Security Symposium, 2020

On the Robustness of Cooperative Multi-Agent Reinforcement Learning.
Proceedings of the 2020 IEEE Security and Privacy Workshops, 2020

Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations.
Proceedings of the 37th International Conference on Machine Learning, 2020

Thieves on Sesame Street! Model Extraction of BERT-based APIs.
Proceedings of the 8th International Conference on Learning Representations, 2020

Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs.
Proceedings of the International Conference on Field-Programmable Technology, 2020

Third International Workshop on Dependable and Secure Machine Learning - DSML 2020.
Proceedings of the 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops, 2020

2019
How Relevant Is the Turing Test in the Age of Sophisbots?
IEEE Secur. Priv., 2019

Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications.
CoRR, 2019

Improving Differentially Private Models with Active Learning.
CoRR, 2019

High-Fidelity Extraction of Neural Network Models.
CoRR, 2019

Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness.
CoRR, 2019

On Evaluating Adversarial Robustness.
CoRR, 2019

MixMatch: A Holistic Approach to Semi-Supervised Learning.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Analyzing and Improving Representations with the Soft Nearest Neighbor Loss.
Proceedings of the 36th International Conference on Machine Learning, 2019

2018
A Marauder's Map of Security and Privacy in Machine Learning.
CoRR, 2018

Adversarial Vision Challenge.
CoRR, 2018

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning.
CoRR, 2018

Adversarial Examples that Fool both Human and Computer Vision.
CoRR, 2018

Making machine learning robust against adversarial inputs.
Commun. ACM, 2018

Adversarial Examples that Fool both Computer Vision and Time-Limited Humans.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Ensemble Adversarial Training: Attacks and Defenses.
Proceedings of the 6th International Conference on Learning Representations, 2018

Scalable Private Learning with PATE.
Proceedings of the 6th International Conference on Learning Representations, 2018

SoK: Security and Privacy in Machine Learning.
Proceedings of the 2018 IEEE European Symposium on Security and Privacy, 2018

A Marauder's Map of Security and Privacy in Machine Learning: An overview of current and future research directions for making machine learning secure and private.
Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security, 2018

Detection under Privileged Information.
Proceedings of the 2018 on Asia Conference on Computer and Communications Security, 2018

2017
On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches.
CoRR, 2017

The Space of Transferable Adversarial Examples.
CoRR, 2017

Ensemble Adversarial Training: Attacks and Defenses.
CoRR, 2017

Extending Defensive Distillation.
CoRR, 2017

On the (Statistical) Detection of Adversarial Examples.
CoRR, 2017

Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data.
Proceedings of the 5th International Conference on Learning Representations, 2017

Adversarial Attacks on Neural Network Policies.
Proceedings of the 5th International Conference on Learning Representations, 2017

Adversarial Examples for Malware Detection.
Proceedings of the Computer Security - ESORICS 2017, 2017

On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches.
Proceedings of the 30th IEEE Computer Security Foundations Symposium, 2017

Practical Black-Box Attacks against Machine Learning.
Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, 2017

2016
Machine Learning in Adversarial Settings.
IEEE Secur. Priv., 2016

Towards the Science of Security and Privacy in Machine Learning.
CoRR, 2016

Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples.
CoRR, 2016

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples.
CoRR, 2016

On the Effectiveness of Defensive Distillation.
CoRR, 2016

Adversarial Perturbations Against Deep Neural Networks for Malware Classification.
CoRR, 2016

cleverhans v0.1: an adversarial machine learning library.
CoRR, 2016

Building Better Detection with Privileged Information.
CoRR, 2016

Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks.
Proceedings of the IEEE Symposium on Security and Privacy, 2016

Crafting adversarial input sequences for recurrent neural networks.
Proceedings of the 2016 IEEE Military Communications Conference, 2016

Mapping sample scenarios to operational models.
Proceedings of the 2016 IEEE Military Communications Conference, 2016

The Limitations of Deep Learning in Adversarial Settings.
Proceedings of the IEEE European Symposium on Security and Privacy, 2016

2015
Enforcing agile access control policies in relational databases using views.
Proceedings of the 34th IEEE Military Communications Conference, 2015

2014