Luca Demetrio

Orcid: 0000-0001-5104-1476

  • University of Genova, Italy

According to our database, Luca Demetrio authored at least 38 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.



Nebula: Self-Attention for Dynamic Malware Analysis.
IEEE Trans. Inf. Forensics Secur., 2024

ModSec-Learn: Boosting ModSecurity with Machine Learning.
CoRR, 2024

Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis.
CoRR, 2024

A New Formulation for Zeroth-Order Optimization of Adversarial EXEmples in Malware Detection.
CoRR, 2024

SLIFER: Investigating Performance and Robustness of Malware Detection Pipelines.
CoRR, 2024

Updating Windows Malware Detectors: Balancing Robustness and Regression against Adversarial EXEmples.
CoRR, 2024

Certified Adversarial Robustness of Machine Learning-based Malware Detectors via (De)Randomized Smoothing.
CoRR, 2024

AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples.
CoRR, 2024

Living-off-The-Land Reverse-Shell Detection by Informed Data Augmentation.
CoRR, 2024

Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates.
CoRR, 2024

Hardening RGB-D object recognition systems against adversarial patch attacks.
Inf. Sci., December, 2023

ImageNet-Patch: A dataset for benchmarking machine learning robustness against adversarial patches.
Pattern Recognit., 2023

Adversarial ModSecurity: Countering Adversarial SQL Injections with Robust Machine Learning.
CoRR, 2023

Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023

Cybersecurity and AI: The PRALab Research Experience.
Proceedings of the Italia Intelligenza Artificiale, 2023

AI Security and Safety: The PRALab Research Experience.
Proceedings of the Italia Intelligenza Artificiale, 2023

Detecting Attacks Against Deep Reinforcement Learning for Autonomous Driving.
Proceedings of the International Conference on Machine Learning and Cybernetics, 2023

Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors.
Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, 2023

secml: Secure and explainable machine learning in Python.
SoftwareX, 2022

Towards learning trustworthily, automatically, and with guarantees on graphs: An overview.
Neurocomputing, 2022

Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware.
IEEE Secur. Priv., 2022

A Survey on Reinforcement Learning Security with Application to Autonomous Driving.
CoRR, 2022

Denial-of-Service Attack on Object Detection Model Using Universal Adversarial Perturbation.
CoRR, 2022

Practical Evaluation of Poisoning Attacks on Online Anomaly Detectors in Industrial Control Systems.
Comput. Secur., 2022

Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Explaining Machine Learning DGA Detectors from DNS Traffic Data.
Proceedings of the Italian Conference on Cybersecurity (ITASEC 2022), 2022

Robust Machine Learning for Malware Detection over Time.
Proceedings of the Italian Conference on Cybersecurity (ITASEC 2022), 2022

Formalizing evasion attacks against machine learning security detectors.
PhD thesis, 2021

Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection.
ACM Trans. Priv. Secur., 2021

Functionality-Preserving Black-Box Optimization of Adversarial Windows Malware.
IEEE Trans. Inf. Forensics Secur., 2021

Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples.
CoRR, 2021

secml-malware: A Python Library for Adversarial Robustness Evaluation of Windows Malware Classifiers.
CoRR, 2021

Slope: A First-order Approach for Measuring Gradient Obfuscation.
Proceedings of the 29th European Symposium on Artificial Neural Networks, 2021

WAF-A-MoLE: An adversarial tool for assessing ML-based WAFs.
SoftwareX, 2020

Efficient Black-box Optimization of Adversarial Windows Malware with Constrained Manipulations.
CoRR, 2020

WAF-A-MoLE: evading web application firewalls through adversarial machine learning.
Proceedings of the 35th ACM/SIGAPP Symposium on Applied Computing (SAC '20), online event, 2020

Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries.
Proceedings of the Third Italian Conference on Cyber Security, 2019

ZenHackAdemy: Ethical Hacking @ DIBRIS.
Proceedings of the 11th International Conference on Computer Supported Education, 2019