Philip Sperl

ORCID: 0000-0002-7901-7168

According to our database, Philip Sperl authored at least 20 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2024
Imbalance in Regression Datasets.
CoRR, 2024

A New Approach to Voice Authenticity.
CoRR, 2024

MLAAD: The Multi-Language Audio Anti-Spoofing Dataset.
CoRR, 2024

2023
Physical Adversarial Examples for Multi-Camera Systems.
CoRR, 2023

Complex-valued neural networks for voice anti-spoofing.
CoRR, 2023

Shortcut Detection with Variational Autoencoders.
CoRR, 2023

Protecting Publicly Available Data With Machine Learning Shortcuts.
Proceedings of the 34th British Machine Vision Conference 2023, 2023

2022
Visualizing Automatic Speech Recognition - Means for a Better Understanding?
CoRR, 2022

R2-AD2: Detecting Anomalies by Analysing the Raw Gradient.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2022

Double-Adversarial Activation Anomaly Detection: Adversarial Autoencoders are Anomaly Generators.
Proceedings of the International Joint Conference on Neural Networks, 2022

Anomaly Detection by Recombining Gated Unsupervised Experts.
Proceedings of the International Joint Conference on Neural Networks, 2022

Assessing the Impact of Transformations on Physical Adversarial Attacks.
Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security, 2022

2021
Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning.
CoRR, 2021

DA3G: Detecting Adversarial Attacks by Analysing Gradients.
Proceedings of the Computer Security - ESORICS 2021, 2021

2020
Optimizing Information Loss Towards Robust Neural Networks.
CoRR, 2020

A³: Activation Anomaly Analysis.
CoRR, 2020

Activation Anomaly Analysis.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2020

DLA: Dense-Layer-Analysis for Adversarial Example Detection.
Proceedings of the IEEE European Symposium on Security and Privacy, 2020

2019
DLA: Dense-Layer-Analysis for Adversarial Example Detection.
CoRR, 2019

Side-Channel Aware Fuzzing.
Proceedings of the Computer Security - ESORICS 2019, 2019
