Ambra Demontis

Orcid: 0000-0001-9318-6913

According to our database, Ambra Demontis authored at least 44 papers between 2015 and 2024.

Bibliography

2024
Machine Learning Security Against Data Poisoning: Are We There Yet?
Computer, March, 2024

2023
Hardening RGB-D object recognition systems against adversarial patch attacks.
Inf. Sci., December, 2023

ImageNet-Patch: A dataset for benchmarking machine learning robustness against adversarial patches.
Pattern Recognit., 2023

Stateful detection of adversarial reprogramming.
Inf. Sci., 2023

Why adversarial reprogramming works, when it fails, and how to tell the difference.
Inf. Sci., 2023

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning.
ACM Comput. Surv., 2023

Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization.
CoRR, 2023

The Threat of Offensive AI to Organizations.
Comput. Secur., 2023

BAARD: Blocking Adversarial Examples by Testing for Applicability, Reliability and Decidability.
Proceedings of the Advances in Knowledge Discovery and Data Mining, 2023

Cybersecurity and AI: The PRALab Research Experience.
Proceedings of the Italia Intelligenza Artificiale, 2023

AI Security and Safety: The PRALab Research Experience.
Proceedings of the Italia Intelligenza Artificiale, 2023

Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks.
Proceedings of the International Conference on Machine Learning and Cybernetics, 2023

Detecting Attacks Against Deep Reinforcement Learning for Autonomous Driving.
Proceedings of the International Conference on Machine Learning and Cybernetics, 2023

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training.
Proceedings of the Image Analysis and Processing - ICIAP 2023, 2023

2022
A Hybrid Training-Time and Run-Time Defense Against Adversarial Attacks in Modulation Classification.
IEEE Wirel. Commun. Lett., 2022

secml: Secure and explainable machine learning in Python.
SoftwareX, 2022

Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers.
IEEE Trans. Pattern Anal. Mach. Intell., 2022

Do gradient-based explanations tell anything about adversarial robustness to android malware?
Int. J. Mach. Learn. Cybern., 2022

A Survey on Reinforcement Learning Security with Application to Autonomous Driving.
CoRR, 2022

Energy-Latency Attacks via Sponge Poisoning.
CoRR, 2022

Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

AISec '22: 15th ACM Workshop on Artificial Intelligence and Security.
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022

2021
The Threat of Offensive AI to Organizations.
CoRR, 2021

Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples.
CoRR, 2021

Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions.
CoRR, 2021

Intriguing Usage of Applicability Domain: Lessons from Cheminformatics Applied to Adversarial Learning.
CoRR, 2021

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Proceedings of the International Joint Conference on Neural Networks, 2021

Session details: Session 2B: Machine Learning for Cybersecurity.
Proceedings of the AISec@CCS 2021: Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security, 2021

2020
Deep neural rejection against adversarial examples.
EURASIP J. Inf. Secur., 2020

Can Domain Knowledge Alleviate Adversarial Attacks in Multi-Label Classifiers?
CoRR, 2020

Adversarial Detection of Flash Malware: Limitations and Open Issues.
Comput. Secur., 2020

AISec'20: 13th Workshop on Artificial Intelligence and Security.
Proceedings of the CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020

2019
Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection.
IEEE Trans. Dependable Secur. Comput., 2019

secml: A Python Library for Secure and Explainable Machine Learning.
CoRR, 2019

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks.
Proceedings of the 28th USENIX Security Symposium, 2019

2018
On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks.
CoRR, 2018

Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables.
Proceedings of the 26th European Signal Processing Conference, 2018

2017
Infinity-Norm Support Vector Machines Against Adversarial Label Contamination.
Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), 2017

Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid.
Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, 2017

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization.
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017

2016
Super-Sparse Learning in Similarity Spaces.
IEEE Comput. Intell. Mag., 2016

On Security and Sparsity of Linear Classifiers for Adversarial Settings.
Proceedings of the Structural, Syntactic, and Statistical Pattern Recognition, 2016

Secure Kernel Machines against Evasion Attacks.
Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security, 2016

2015
Super-Sparse Regression for Fast Age Estimation from Faces at Test Time.
Proceedings of the Image Analysis and Processing - ICIAP 2015, 2015
