Emily Wenger

ORCID: 0009-0006-3346-8226

According to our database, Emily Wenger authored at least 22 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
Data Isotopes for Data Provenance in DNNs.
Proc. Priv. Enhancing Technol., January, 2024

SALSA FRESCA: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors.
IACR Cryptol. ePrint Arch., 2024

The cool and the cruel: separating hard parts of LWE secrets.
IACR Cryptol. ePrint Arch., 2024

2023
SALSA PICANTE: a machine learning attack on LWE with binary secrets.
IACR Cryptol. ePrint Arch., 2023

SALSA VERDE: a machine learning attack on Learning with Errors with sparse small secrets.
IACR Cryptol. ePrint Arch., 2023

Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models.
Proceedings of the 32nd USENIX Security Symposium, 2023

SoK: Anti-Facial Recognition Technology.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

SALSA VERDE: a machine learning attack on LWE with sparse small secrets.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

SalsaPicante: A Machine Learning Attack on LWE with Binary Secrets.
Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2023

2022
SALSA: Attacking Lattice Cryptography with Transformers.
IACR Cryptol. ePrint Arch., 2022

Natural Backdoor Datasets.
CoRR, 2022

Assessing Privacy Risks from Feature Vector Reconstruction Attacks.
CoRR, 2022

Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks.
Proceedings of the 31st USENIX Security Symposium, 2022

Finding Naturally Occurring Physical Backdoors in Image Datasets.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models.
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022

2021
Backdoor Attacks Against Deep Learning Systems in the Physical World.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021

"Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World.
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021

2020
Backdoor Attacks on Facial Recognition in the Physical World.
CoRR, 2020

Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks.
CoRR, 2020

Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models.
CoRR, 2020

Fawkes: Protecting Privacy against Unauthorized Deep Learning Models.
Proceedings of the 29th USENIX Security Symposium, 2020

Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks.
Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020
