Sebastian Lapuschkin

ORCID: 0000-0002-0762-7258

Affiliations:
  • Fraunhofer Heinrich Hertz Institute, Berlin, Germany


According to our database, Sebastian Lapuschkin authored at least 66 papers between 2014 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
AudioMNIST: Exploring Explainable Artificial Intelligence for audio analysis on a simple benchmark.
J. Frankl. Inst., 2024

DualView: Data Attribution from the Dual Perspective.
CoRR, 2024

AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers.
CoRR, 2024

Explaining Predictive Uncertainty by Exposing Second-Order Effects.
CoRR, 2024

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test.
CoRR, 2024

From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
From attribution maps to human-understandable explanations through Concept Relevance Propagation.
Nat. Mac. Intell., September, 2023

Beyond explaining: Opportunities and challenges of XAI-based model improvement.
Inf. Fusion, April, 2023

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond.
J. Mach. Learn. Res., 2023

Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations.
CoRR, 2023

Generative Fractional Diffusion Models.
CoRR, 2023

Layer-wise Feedback Propagation.
CoRR, 2023

From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space.
CoRR, 2023

XAI-based Comparison of Input Representations for Audio Event Classification.
CoRR, 2023

Explainable AI for Time Series via Virtual Inspection Layers.
CoRR, 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus.
CoRR, 2023

Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models.
Proceedings of the Medical Image Computing and Computer Assisted Intervention - MICCAI 2023, 2023

Human-Centered Evaluation of XAI Methods.
Proceedings of the IEEE International Conference on Data Mining, 2023

Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models.
Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, 2023

Optimizing Explanations by Network Canonization and Hyperparameter Search.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

XAI-based Comparison of Audio Event Classifiers with different Input Representations.
Proceedings of the 20th International Conference on Content-based Multimedia Indexing, 2023

2022
Towards the interpretability of deep learning models for multi-modal neuroimaging: Finding structural changes of the ageing brain.
NeuroImage, 2022

Explain and improve: LRP-inference fine-tuning for image captioning models.
Inf. Fusion, 2022

Finding and removing Clever Hans: Using explanation methods to debug and improve deep models.
Inf. Fusion, 2022

Explaining Machine Learning Models for Clinical Gait Analysis.
ACM Trans. Comput. Heal., 2022

Explaining machine learning models for age classification in human gait analysis.
CoRR, 2022

Explaining automated gender classification of human gait.
CoRR, 2022

From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation.
CoRR, 2022

But that's not why: Inference adjustment by interactive prototype deselection.
CoRR, 2022

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations.
CoRR, 2022

PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging.
CoRR, 2022

Measurably Stronger Explanation Reliability Via Model Canonization.
Proceedings of the 2022 IEEE International Conference on Image Processing, 2022

Selection of XAI Methods Matters: Evaluation of Feature Attribution Methods for Oculomotoric Biometric Identification.
Proceedings of the 1st Gaze Meets ML Workshop, 2022

Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI.
Proceedings of the Machine Learning and Knowledge Extraction, 2022

2021
Pruning by explaining: A novel criterion for deep neural network pruning.
Pattern Recognit., 2021

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications.
Proc. IEEE, 2021

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy.
CoRR, 2021

2020
Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond.
CoRR, 2020

Understanding Image Captioning Models beyond Visualizing Attention.
CoRR, 2020

Towards Best Practice in Explaining Neural Network Decisions with LRP.
Proceedings of the 2020 International Joint Conference on Neural Networks, 2020

Explanation-Guided Training for Cross-Domain Few-Shot Classification.
Proceedings of the 25th International Conference on Pattern Recognition, 2020

Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution.
Proceedings of the 25th International Conference on Pattern Recognition, 2020

ECQˣ: Explainability-Driven Quantization for Low-Bit and Sparse DNNs.
Proceedings of the xxAI - Beyond Explainable AI, 2020

2019
Layer-Wise Relevance Propagation: An Overview.
Proceedings of the Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 2019

Opening the machine learning black box with Layer-wise Relevance Propagation.
PhD thesis, 2019

iNNvestigate Neural Networks!
J. Mach. Learn. Res., 2019

Analyzing ImageNet with Spectral Relevance Analysis: Towards ImageNet un-Hans'ed.
CoRR, 2019

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning.
CoRR, 2019

On the Understanding and Interpretation of Machine Learning Predictions in Clinical Gait Analysis Using Explainable Artificial Intelligence.
CoRR, 2019

Resolving challenges in deep learning-based analyses of histopathological images using explanation methods.
CoRR, 2019

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn.
CoRR, 2019

2018
What is Unique in Individual Gait Patterns? Understanding and Interpreting Deep Learning in Gait Analysis.
CoRR, 2018

Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals.
CoRR, 2018

2017
Evaluating the Visualization of What a Deep Neural Network Has Learned.
IEEE Trans. Neural Networks Learn. Syst., 2017

Explaining nonlinear classification decisions with deep Taylor decomposition.
Pattern Recognit., 2017

Understanding and Comparing Deep Neural Networks for Age and Gender Classification.
Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, 2017

Interpretable human action recognition in compressed domain.
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing, 2017

2016
The LRP Toolbox for Artificial Neural Networks.
J. Mach. Learn. Res., 2016

Interpretable Deep Neural Networks for Single-Trial EEG Classification.
CoRR, 2016

Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation.
CoRR, 2016

Controlling explanatory heatmap resolution and semantics via decomposition depth.
Proceedings of the 2016 IEEE International Conference on Image Processing, 2016

Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers.
Proceedings of the Artificial Neural Networks and Machine Learning - ICANN 2016, 2016

Analyzing Classifiers: Fisher Vectors and Deep Neural Networks.
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016

2014
Detecting Behavioral and Structural Anomalies in MediaCloud Applications.
CoRR, 2014
