Battista Biggio

According to our database, Battista Biggio authored at least 90 papers between 2007 and 2020.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.



Bibliography

2020
Deep neural rejection against adversarial examples.
EURASIP J. Inf. Secur., 2020

Can Domain Knowledge Alleviate Adversarial Attacks in Multi-Label Classifiers?
CoRR, 2020

Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?
CoRR, 2020

Poisoning Attacks on Algorithmic Fairness.
CoRR, 2020

Efficient Black-box Optimization of Adversarial Windows Malware with Constrained Manipulations.
CoRR, 2020

2019
Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection.
IEEE Trans. Dependable Secur. Comput., 2019

Digital Investigation of PDF Files: Unveiling Traces of Embedded Malware.
IEEE Secur. Priv., 2019

Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks.
ACM Comput. Surv., 2019

secml: A Python Library for Secure and Explainable Machine Learning.
CoRR, 2019

Detecting Adversarial Examples through Nonlinear Dimensionality Reduction.
CoRR, 2019

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks.
Proceedings of the 28th USENIX Security Symposium, 2019

Towards quality assurance of software product lines with adversarial configurations.
Proceedings of the 23rd International Systems and Software Product Line Conference, 2019

Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries.
Proceedings of the Third Italian Conference on Cyber Security, 2019

Detecting Black-box Adversarial Examples through Nonlinear Dimensionality Reduction.
Proceedings of the 27th European Symposium on Artificial Neural Networks, 2019

Societal Issues in Machine Learning: When Learning from Data is Not Enough.
Proceedings of the 27th European Symposium on Artificial Neural Networks, 2019

Optimization and deployment of CNNs at the edge: the ALOHA experience.
Proceedings of the 16th ACM International Conference on Computing Frontiers, 2019

Poster: Attacking Malware Classifiers by Crafting Gradient-Attacks that Preserve Functionality.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

AISec'19: 12th ACM Workshop on Artificial Intelligence and Security.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

2018
Wild patterns: Ten years after the rise of adversarial machine learning.
Pattern Recognit., 2018

Towards Robust Detection of Adversarial Infection Vectors: Lessons Learned in PDF Malware.
CoRR, 2018

On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks.
CoRR, 2018

Towards Adversarial Configurations for Software Product Lines.
CoRR, 2018

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning.
Proceedings of the 2018 IEEE Symposium on Security and Privacy, 2018

Architecture-aware design and implementation of CNN algorithms for embedded inference: the ALOHA project.
Proceedings of the 30th International Conference on Microelectronics, 2018

Explaining Black-box Android Malware Detection.
Proceedings of the 26th European Signal Processing Conference, 2018

Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables.
Proceedings of the 26th European Signal Processing Conference, 2018

ALOHA: an architectural-aware framework for deep learning at the edge.
Proceedings of the Workshop on INTelligent Embedded Systems Architectures and Applications, 2018

Session details: AI Security / Adversarial Machine Learning.
Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security, 2018

11th International Workshop on Artificial Intelligence and Security (AISec 2018).
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018

2017
Randomized Prediction Games for Adversarial Machine Learning.
IEEE Trans. Neural Networks Learn. Syst., 2017

Statistical Meta-Analysis of Presentation Attacks for Secure Multibiometric Systems.
IEEE Trans. Pattern Anal. Mach. Intell., 2017

Adversarial Detection of Flash Malware: Limitations and Open Issues.
CoRR, 2017

Security Evaluation of Pattern Classifiers under Attack.
CoRR, 2017

Detection of Malicious Scripting Code Through Discriminant and Adversary-Aware API Analysis.
Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), 2017

Infinity-Norm Support Vector Machines Against Adversarial Label Contamination.
Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), 2017

Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid.
Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, 2017

DeltaPhish: Detecting Phishing Webpages in Compromised Websites.
Proceedings of the Computer Security - ESORICS 2017, 2017

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization.
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017

10th International Workshop on Artificial Intelligence and Security (AISec 2017).
Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017

Deepsquatting: Learning-Based Typosquatting Detection at Deeper Domain Levels.
Proceedings of the AI*IA 2017 Advances in Artificial Intelligence, 2017

2016
Adversarial Feature Selection Against Evasion Attacks.
IEEE Trans. Cybern., 2016

AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack.
CoRR, 2016

Super-Sparse Learning in Similarity Spaces.
IEEE Comput. Intell. Mag., 2016

On Security and Sparsity of Linear Classifiers for Adversarial Settings.
Proceedings of the Structural, Syntactic, and Statistical Pattern Recognition, 2016

Who Are You? A Statistical Approach to Measuring User Authenticity.
Proceedings of the 23rd Annual Network and Distributed System Security Symposium, 2016

Machine Learning under Attack: Vulnerability Exploitation and Security Measures.
Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security, 2016

Secure Kernel Machines against Evasion Attacks.
Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security, 2016

Detecting Misuse of Google Cloud Messaging in Android Badware.
Proceedings of the 6th Workshop on Security and Privacy in Smartphones and Mobile Devices, 2016

2015
Anti-spoofing, Multimodal.
Proceedings of the Encyclopedia of Biometrics, Second Edition, 2015

Adversarial Biometric Recognition: A review on biometric system security from the adversarial machine-learning perspective.
IEEE Signal Process. Mag., 2015

Data-driven journal meta-ranking in business and management.
Scientometrics, 2015

Support vector machines under adversarial label contamination.
Neurocomputing, 2015

One-and-a-Half-Class Multiple Classifier Systems for Secure Learning Against Evasion Attacks at Test Time.
Proceedings of the Multiple Classifier Systems - 12th International Workshop, 2015

Is Feature Selection Secure against Training Data Poisoning?
Proceedings of the 32nd International Conference on Machine Learning, 2015

Fast Image Classification with Reduced Multiclass Support Vector Machines.
Proceedings of the Image Analysis and Processing - ICIAP 2015, 2015

Super-Sparse Regression for Fast Age Estimation from Faces at Test Time.
Proceedings of the Image Analysis and Processing - ICIAP 2015, 2015

Sparse support faces.
Proceedings of the International Conference on Biometrics, 2015

2014
Multimodal Anti-spoofing in Biometric Recognition Systems.
Proceedings of the Handbook of Biometric Anti-Spoofing, 2014

Security Evaluation of Pattern Classifiers under Attack.
IEEE Trans. Knowl. Data Eng., 2014

Pattern Recognition Systems under Attack: Design Issues and Research Challenges.
Int. J. Pattern Recognit. Artif. Intell., 2014

Security Evaluation of Support Vector Machines in Adversarial Environments.
CoRR, 2014

Poisoning Complete-Linkage Hierarchical Clustering.
Proceedings of the Structural, Syntactic, and Statistical Pattern Recognition, 2014

Poisoning behavioral malware clustering.
Proceedings of the 2014 ACM Workshop on Artificial Intelligence and Security, 2014

On learning and recognition of secure patterns.
Proceedings of the 2014 ACM Workshop on Artificial Intelligence and Security, 2014

2013
Evasion Attacks against Machine Learning at Test Time.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2013

Poisoning attacks to compromise face templates.
Proceedings of the International Conference on Biometrics, 2013

Pattern Recognition Systems under Attack.
Proceedings of the Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 2013

Is data clustering in adversarial settings secure?
Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security (AISec'13), 2013

2012
Security evaluation of biometric authentication systems under real spoofing attacks.
IET Biom., 2012

Poisoning Adaptive Biometric Systems.
Proceedings of the Structural, Syntactic, and Statistical Pattern Recognition, 2012

Poisoning Attacks against Support Vector Machines.
Proceedings of the 29th International Conference on Machine Learning, 2012

Learning sparse kernel machines with biometric similarity functions for identity recognition.
Proceedings of the IEEE Fifth International Conference on Biometrics: Theory, 2012

2011
A survey and experimental evaluation of image spam filtering techniques.
Pattern Recognit. Lett., 2011

Microbagging Estimators: An Ensemble Approach to Distance-weighted Classifiers.
Proceedings of the 3rd Asian Conference on Machine Learning, 2011

Support Vector Machines Under Adversarial Label Noise.
Proceedings of the 3rd Asian Conference on Machine Learning, 2011

Design of robust classifiers for adversarial environments.
Proceedings of the IEEE International Conference on Systems, 2011

Bagging Classifiers for Fighting Poisoning Attacks in Adversarial Classification Tasks.
Proceedings of the Multiple Classifier Systems - 10th International Workshop, 2011

Robustness of multi-modal biometric verification systems under realistic spoofing attacks.
Proceedings of the 2011 IEEE International Joint Conference on Biometrics, 2011

Understanding the risk factors of learning in adversarial environments.
Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, 2011

Robustness of multi-modal biometric systems under realistic spoof attacks against all traits.
Proceedings of the IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications, 2011

2010
Multiple classifier systems for robust classifier design in adversarial environments.
Int. J. Machine Learning & Cybernetics, 2010

Multiple Classifier Systems under Attack.
Proceedings of the Multiple Classifier Systems, 9th International Workshop, 2010

2009
Evade Hard Multiple Classifier Systems.
Proceedings of the Applications of Supervised and Unsupervised Ensemble Methods, 2009

Bayesian Linear Combination of Neural Networks.
Proceedings of the Innovations in Neural Information Paradigms and Applications, 2009

Multiple Classifier Systems for Adversarial Classification Tasks.
Proceedings of the Multiple Classifier Systems, 8th International Workshop, 2009

2008
Adversarial Pattern Classification Using Multiple Classifiers and Randomisation.
Proceedings of the Structural, Syntactic, and Statistical Pattern Recognition, 2008

Improving Image Spam Filtering Using Image Text Features.
Proceedings of the CEAS 2008, 2008

2007
Bayesian Analysis of Linear Combiners.
Proceedings of the Multiple Classifier Systems, 7th International Workshop, 2007

Image Spam Filtering Using Visual Information.
Proceedings of the 14th International Conference on Image Analysis and Processing (ICIAP 2007), 2007

Image Spam Filtering by Content Obscuring Detection.
Proceedings of the CEAS 2007, 2007
