Ahmed Salem

Affiliations:
  • CISPA Helmholtz Center for Information Security, Saarbrücken, Germany


According to our database, Ahmed Salem authored at least 16 papers between 2018 and 2023.

Bibliography

2023
Two-in-One: A Model Hijacking Attack Against Text Generation Models.
Proceedings of the 32nd USENIX Security Symposium, 2023

UnGANable: Defending Against GAN-based Face Manipulation.
Proceedings of the 32nd USENIX Security Symposium, 2023

2022
Adversarial inference and manipulation of machine learning models.
PhD thesis, 2022

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models.
Proceedings of the 31st USENIX Security Symposium, 2022

Get a Model! Model Hijacking Attack Against Machine Learning Models.
Proceedings of the 29th Annual Network and Distributed System Security Symposium, 2022

Dynamic Backdoor Attacks Against Machine Learning Models.
Proceedings of the 7th IEEE European Symposium on Security and Privacy, 2022

2021
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2021

BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements.
Proceedings of the Annual Computer Security Applications Conference (ACSAC), 2021

2020
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks.
CoRR, 2020

BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models.
CoRR, 2020

BadNL: Backdoor Attacks Against NLP Models.
CoRR, 2020

Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning.
Proceedings of the 29th USENIX Security Symposium, 2020

2019
Privacy-Preserving Similar Patient Queries for Combined Biomedical Data.
Proc. Priv. Enhancing Technol., 2019

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models.
Proceedings of the 26th Annual Network and Distributed System Security Symposium, 2019

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

2018
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models.
CoRR, 2018
