Battista Biggio

ORCID: 0000-0001-7752-509X

Affiliations:
  • University of Cagliari, Italy


According to our database, Battista Biggio authored at least 147 papers between 2007 and 2024.

Bibliography

2024
Machine Learning Security Against Data Poisoning: Are We There Yet?
Computer, March, 2024

Rethinking data augmentation for adversarial robustness.
Inf. Sci., January, 2024

Living-off-The-Land Reverse-Shell Detection by Informed Data Augmentation.
CoRR, 2024

Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates.
CoRR, 2024

σ-zero: Gradient-based Optimization of ℓ₀-norm Adversarial Examples.
CoRR, 2024

When Your AI Becomes a Target: AI Security Incidents and Best Practices.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Hardening RGB-D object recognition systems against adversarial patch attacks.
Inf. Sci., December, 2023

Machine Learning Security in Industry: A Quantitative Survey.
IEEE Trans. Inf. Forensics Secur., 2023

ImageNet-Patch: A dataset for benchmarking machine learning robustness against adversarial patches.
Pattern Recognit., 2023

Stateful detection of adversarial reprogramming.
Inf. Sci., 2023

Why adversarial reprogramming works, when it fails, and how to tell the difference.
Inf. Sci., 2023

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning.
ACM Comput. Surv., 2023

Nebula: Self-Attention for Dynamic Malware Analysis.
CoRR, 2023

Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization.
CoRR, 2023

Adversarial ModSecurity: Countering Adversarial SQL Injections with Robust Machine Learning.
CoRR, 2023

The Threat of Offensive AI to Organizations.
Comput. Secur., 2023

Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023

Cybersecurity and AI: The PRALab Research Experience.
Proceedings of the Italia Intelligenza Artificiale, 2023

AI Security and Safety: The PRALab Research Experience.
Proceedings of the Italia Intelligenza Artificiale, 2023

Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks.
Proceedings of the International Conference on Machine Learning and Cybernetics, 2023

Detecting Attacks Against Deep Reinforcement Learning for Autonomous Driving.
Proceedings of the International Conference on Machine Learning and Cybernetics, 2023

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training.
Proceedings of the Image Analysis and Processing - ICIAP 2023, 2023

Adversarial Attacks Against Uncertainty Quantification.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors.
Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, 2023

2022
Security of Machine Learning (Dagstuhl Seminar 22281).
Dagstuhl Reports, July, 2022

secml: Secure and explainable machine learning in Python.
SoftwareX, 2022

Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers.
IEEE Trans. Pattern Anal. Mach. Intell., 2022

Do gradient-based explanations tell anything about adversarial robustness to android malware?
Int. J. Mach. Learn. Cybern., 2022

Towards learning trustworthily, automatically, and with guarantees on graphs: An overview.
Neurocomputing, 2022

FADER: Fast adversarial example rejection.
Neurocomputing, 2022

Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware.
IEEE Secur. Priv., 2022

A Survey on Reinforcement Learning Security with Application to Autonomous Driving.
CoRR, 2022

"Why do so?" - A Practical Perspective on Machine Learning Security.
CoRR, 2022

Denial-of-Service Attack on Object Detection Model Using Universal Adversarial Perturbation.
CoRR, 2022

Energy-Latency Attacks via Sponge Poisoning.
CoRR, 2022

Practical Evaluation of Poisoning Attacks on Online Anomaly Detectors in Industrial Control Systems.
Comput. Secur., 2022

Backdoor smoothing: Demystifying backdoor attacks on deep neural networks.
Comput. Secur., 2022

Industrial practitioners' mental models of adversarial machine learning.
Proceedings of the Eighteenth Symposium on Usable Privacy and Security, 2022

Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Explaining Machine Learning DGA Detectors from DNS Traffic Data.
Proceedings of the Italian Conference on Cybersecurity (ITASEC 2022), 2022

Robust Machine Learning for Malware Detection over Time.
Proceedings of the Italian Conference on Cybersecurity (ITASEC 2022), 2022

Tessellation-Filtering ReLU Neural Networks.
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022

Explainability-based Debugging of Machine Learning for Vulnerability Discovery.
Proceedings of the ARES 2022: The 17th International Conference on Availability, Reliability and Security, 2022

2021
Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection.
ACM Trans. Priv. Secur., 2021

Functionality-Preserving Black-Box Optimization of Adversarial Windows Malware.
IEEE Trans. Inf. Forensics Secur., 2021

Empirical assessment of generating adversarial configurations for software product lines.
Empir. Softw. Eng., 2021

The Threat of Offensive AI to Organizations.
CoRR, 2021

Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples.
CoRR, 2021

Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions.
CoRR, 2021

secml-malware: A Python Library for Adversarial Robustness Evaluation of Windows Malware Classifiers.
CoRR, 2021

Adversarial Machine Learning: Attacks From Laboratories to the Real World.
Computer, 2021

Poisoning attacks on cyber attack detectors for industrial control systems.
Proceedings of the SAC '21: The 36th ACM/SIGAPP Symposium on Applied Computing, 2021

Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Proceedings of the International Joint Conference on Neural Networks, 2021

Slope: A First-order Approach for Measuring Gradient Obfuscation.
Proceedings of the 29th European Symposium on Artificial Neural Networks, 2021

Complex Data: Learning Trustworthily, Automatically, and with Guarantees.
Proceedings of the 29th European Symposium on Artificial Neural Networks, 2021

Task-Specific Automation in Deep Learning Processes.
Proceedings of the Database and Expert Systems Applications - DEXA 2021 Workshops, 2021

2020
Deep neural rejection against adversarial examples.
EURASIP J. Inf. Secur., 2020

Can Domain Knowledge Alleviate Adversarial Attacks in Multi-Label Classifiers?
CoRR, 2020

Efficient Black-box Optimization of Adversarial Windows Malware with Constrained Manipulations.
CoRR, 2020

Adversarial Detection of Flash Malware: Limitations and Open Issues.
Comput. Secur., 2020

Poisoning Attacks on Algorithmic Fairness.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2020

2019
Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection.
IEEE Trans. Dependable Secur. Comput., 2019

Digital Investigation of PDF Files: Unveiling Traces of Embedded Malware.
IEEE Secur. Priv., 2019

Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks.
ACM Comput. Surv., 2019

secml: A Python Library for Secure and Explainable Machine Learning.
CoRR, 2019

Detecting Adversarial Examples through Nonlinear Dimensionality Reduction.
CoRR, 2019

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks.
Proceedings of the 28th USENIX Security Symposium, 2019

Towards quality assurance of software product lines with adversarial configurations.
Proceedings of the 23rd International Systems and Software Product Line Conference, 2019

Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries.
Proceedings of the Third Italian Conference on Cyber Security, 2019

Detecting Black-box Adversarial Examples through Nonlinear Dimensionality Reduction.
Proceedings of the 27th European Symposium on Artificial Neural Networks, 2019

Societal Issues in Machine Learning: When Learning from Data is Not Enough.
Proceedings of the 27th European Symposium on Artificial Neural Networks, 2019

Optimization and deployment of CNNs at the edge: the ALOHA experience.
Proceedings of the 16th ACM International Conference on Computing Frontiers, 2019

Poster: Attacking Malware Classifiers by Crafting Gradient-Attacks that Preserve Functionality.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

AISec'19: 12th ACM Workshop on Artificial Intelligence and Security.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

2018
Wild patterns: Ten years after the rise of adversarial machine learning.
Pattern Recognit., 2018

Towards Robust Detection of Adversarial Infection Vectors: Lessons Learned in PDF Malware.
CoRR, 2018

On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks.
CoRR, 2018

Towards Adversarial Configurations for Software Product Lines.
CoRR, 2018

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning.
Proceedings of the 2018 IEEE Symposium on Security and Privacy, 2018

Architecture-aware design and implementation of CNN algorithms for embedded inference: the ALOHA project.
Proceedings of the 30th International Conference on Microelectronics, 2018

Explaining Black-box Android Malware Detection.
Proceedings of the 26th European Signal Processing Conference, 2018

Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables.
Proceedings of the 26th European Signal Processing Conference, 2018

ALOHA: an architectural-aware framework for deep learning at the edge.
Proceedings of the Workshop on INTelligent Embedded Systems Architectures and Applications, 2018

Session details: AI Security / Adversarial Machine Learning.
Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security, 2018

11th International Workshop on Artificial Intelligence and Security (AISec 2018).
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018

2017
Randomized Prediction Games for Adversarial Machine Learning.
IEEE Trans. Neural Networks Learn. Syst., 2017

Statistical Meta-Analysis of Presentation Attacks for Secure Multibiometric Systems.
IEEE Trans. Pattern Anal. Mach. Intell., 2017

Adversarial Detection of Flash Malware: Limitations and Open Issues.
CoRR, 2017

Security Evaluation of Pattern Classifiers under Attack.
CoRR, 2017

Detection of Malicious Scripting Code Through Discriminant and Adversary-Aware API Analysis.
Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), 2017

Infinity-Norm Support Vector Machines Against Adversarial Label Contamination.
Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), 2017

Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid.
Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, 2017

DeltaPhish: Detecting Phishing Webpages in Compromised Websites.
Proceedings of the Computer Security - ESORICS 2017, 2017

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization.
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017

10th International Workshop on Artificial Intelligence and Security (AISec 2017).
Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017

Deepsquatting: Learning-Based Typosquatting Detection at Deeper Domain Levels.
Proceedings of the AI*IA 2017 Advances in Artificial Intelligence, 2017

2016
Adversarial Feature Selection Against Evasion Attacks.
IEEE Trans. Cybern., 2016

AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack.
CoRR, 2016

Super-Sparse Learning in Similarity Spaces.
IEEE Comput. Intell. Mag., 2016

On Security and Sparsity of Linear Classifiers for Adversarial Settings.
Proceedings of the Structural, Syntactic, and Statistical Pattern Recognition, 2016

Who Are You? A Statistical Approach to Measuring User Authenticity.
Proceedings of the 23rd Annual Network and Distributed System Security Symposium, 2016

Machine Learning under Attack: Vulnerability Exploitation and Security Measures.
Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security, 2016

Secure Kernel Machines against Evasion Attacks.
Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security, 2016

Detecting Misuse of Google Cloud Messaging in Android Badware.
Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices, 2016

2015
Anti-spoofing, Multimodal.
Proceedings of the Encyclopedia of Biometrics, Second Edition, 2015

Adversarial Biometric Recognition: A review on biometric system security from the adversarial machine-learning perspective.
IEEE Signal Process. Mag., 2015

Data-driven journal meta-ranking in business and management.
Scientometrics, 2015

Support vector machines under adversarial label contamination.
Neurocomputing, 2015

One-and-a-Half-Class Multiple Classifier Systems for Secure Learning Against Evasion Attacks at Test Time.
Proceedings of the Multiple Classifier Systems - 12th International Workshop, 2015

Is Feature Selection Secure against Training Data Poisoning?
Proceedings of the 32nd International Conference on Machine Learning, 2015

Fast Image Classification with Reduced Multiclass Support Vector Machines.
Proceedings of the Image Analysis and Processing - ICIAP 2015, 2015

Super-Sparse Regression for Fast Age Estimation from Faces at Test Time.
Proceedings of the Image Analysis and Processing - ICIAP 2015, 2015

Sparse support faces.
Proceedings of the International Conference on Biometrics, 2015

2014
Multimodal Anti-spoofing in Biometric Recognition Systems.
Proceedings of the Handbook of Biometric Anti-Spoofing, 2014

Security Evaluation of Pattern Classifiers under Attack.
IEEE Trans. Knowl. Data Eng., 2014

Pattern Recognition Systems under Attack: Design Issues and Research Challenges.
Int. J. Pattern Recognit. Artif. Intell., 2014

Security Evaluation of Support Vector Machines in Adversarial Environments.
CoRR, 2014

Poisoning Complete-Linkage Hierarchical Clustering.
Proceedings of the Structural, Syntactic, and Statistical Pattern Recognition, 2014

Poisoning behavioral malware clustering.
Proceedings of the 2014 ACM Workshop on Artificial Intelligence and Security, 2014

On learning and recognition of secure patterns.
Proceedings of the 2014 ACM Workshop on Artificial Intelligence and Security, 2014

2013
Evasion Attacks against Machine Learning at Test Time.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2013

Poisoning attacks to compromise face templates.
Proceedings of the International Conference on Biometrics, 2013

Pattern Recognition Systems under Attack.
Proceedings of the Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 2013

Is data clustering in adversarial settings secure?
Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security (AISec'13), 2013

2012
Security evaluation of biometric authentication systems under real spoofing attacks.
IET Biom., 2012

Poisoning Adaptive Biometric Systems.
Proceedings of the Structural, Syntactic, and Statistical Pattern Recognition, 2012

Poisoning Attacks against Support Vector Machines.
Proceedings of the 29th International Conference on Machine Learning, 2012

Learning sparse kernel machines with biometric similarity functions for identity recognition.
Proceedings of the IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems, 2012

2011
A survey and experimental evaluation of image spam filtering techniques.
Pattern Recognit. Lett., 2011

Microbagging Estimators: An Ensemble Approach to Distance-weighted Classifiers.
Proceedings of the 3rd Asian Conference on Machine Learning, 2011

Support Vector Machines Under Adversarial Label Noise.
Proceedings of the 3rd Asian Conference on Machine Learning, 2011

Design of robust classifiers for adversarial environments.
Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, 2011

Bagging Classifiers for Fighting Poisoning Attacks in Adversarial Classification Tasks.
Proceedings of the Multiple Classifier Systems - 10th International Workshop, 2011

Robustness of multi-modal biometric verification systems under realistic spoofing attacks.
Proceedings of the 2011 IEEE International Joint Conference on Biometrics, 2011

Understanding the risk factors of learning in adversarial environments.
Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, 2011

Robustness of multi-modal biometric systems under realistic spoof attacks against all traits.
Proceedings of the IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications, 2011

2010
Multiple classifier systems for robust classifier design in adversarial environments.
Int. J. Mach. Learn. Cybern., 2010

Multiple Classifier Systems under Attack.
Proceedings of the Multiple Classifier Systems, 9th International Workshop, 2010

2009
Evade Hard Multiple Classifier Systems.
Proceedings of the Applications of Supervised and Unsupervised Ensemble Methods, 2009

Bayesian Linear Combination of Neural Networks.
Proceedings of the Innovations in Neural Information Paradigms and Applications, 2009

Multiple Classifier Systems for Adversarial Classification Tasks.
Proceedings of the Multiple Classifier Systems, 8th International Workshop, 2009

2008
Adversarial Pattern Classification Using Multiple Classifiers and Randomisation.
Proceedings of the Structural, Syntactic, and Statistical Pattern Recognition, 2008

Improving Image Spam Filtering Using Image Text Features.
Proceedings of the CEAS 2008, 2008

2007
Bayesian Analysis of Linear Combiners.
Proceedings of the Multiple Classifier Systems, 7th International Workshop, 2007

Image Spam Filtering Using Visual Information.
Proceedings of the 14th International Conference on Image Analysis and Processing (ICIAP 2007), 2007

Image Spam Filtering by Content Obscuring Detection.
Proceedings of the CEAS 2007, 2007
