Sebastian Lapuschkin

ORCID: 0000-0002-0762-7258

Affiliations:
  • Fraunhofer Heinrich Hertz Institute, Berlin, Germany


According to our database, Sebastian Lapuschkin authored at least 93 papers between 2014 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Leveraging Influence Functions for Resampling Data in Physics-Informed Neural Networks.
CoRR, June, 2025

See What I Mean? CUE: A Cognitive Model of Understanding Explanations.
CoRR, June, 2025

Attribution-guided Pruning for Compression, Circuit Discovery, and Targeted Correction in LLMs.
CoRR, June, 2025

Deep Learning-based Multi Project InP Wafer Simulation for Unsupervised Surface Defect Detection.
CoRR, June, 2025

Relevance-driven Input Dropout: an Explanation-guided Regularization Technique.
CoRR, May, 2025

From What to How: Attributing CLIP's Latent Components Reveals Unexpected Semantic Reliance.
CoRR, May, 2025

The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation.
CoRR, May, 2025

Prisma: An Open Source Toolkit for Mechanistic Interpretability in Vision and Video.
CoRR, April, 2025

ASIDE: Architectural Separation of Instructions and Data in Language Models.
CoRR, March, 2025

Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations.
CoRR, March, 2025

A Close Look at Decomposition-based XAI-Methods for Transformer Language Models.
CoRR, February, 2025

Ensuring Medical AI Safety: Explainable AI-Driven Detection and Mitigation of Spurious Model Behavior and Associated Data.
CoRR, January, 2025

Mechanistic understanding and validation of large AI models with SemanticLens.
CoRR, January, 2025

Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation.
Trans. Mach. Learn. Res., 2025

Evaluating Interpretable Methods via Geometric Alignment of Functional Distortions.
Trans. Mach. Learn. Res., 2025

Explaining predictive uncertainty by exposing second-order effects.
Pattern Recognit., 2025

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

FADE: Why Bad Descriptions Happen to Good Features.
Findings of the Association for Computational Linguistics, 2025

2024
Explainable AI for time series via Virtual Inspection Layers.
Pattern Recognit., 2024

AudioMNIST: Exploring Explainable Artificial Intelligence for audio analysis on a simple benchmark.
J. Frankl. Inst., 2024

Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond.
CoRR, 2024

PINNfluence: Influence Functions for Physics-Informed Neural Networks.
CoRR, 2024

DualView: Data Attribution from the Dual Perspective.
CoRR, 2024

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test.
CoRR, 2024

PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits.
Proceedings of the 3rd Explainable AI for Computer Vision (XAI4CV) Workshop, 2024

Explainable Concept Mappings of MRI: Revealing the Mechanisms Underlying Deep Learning-Based Brain Disease Classification.
Proceedings of the Explainable Artificial Intelligence, 2024

A Fresh Look at Sanity Checks for Saliency Maps.
Proceedings of the Explainable Artificial Intelligence, 2024

Generative Fractional Diffusion Models.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

CoSy: Evaluating Textual Explanations of Neurons.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Synthetic Generation of Dermatoscopic Images with GAN and Closed-Form Factorization.
Proceedings of the Computer Vision - ECCV 2024 Workshops, 2024

Pruning by Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers.
Proceedings of the Computer Vision - ECCV 2024 Workshops, 2024

Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
From attribution maps to human-understandable explanations through Concept Relevance Propagation.
Nat. Mach. Intell., September, 2023

Beyond explaining: Opportunities and challenges of XAI-based model improvement.
Inf. Fusion, April, 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus.
Trans. Mach. Learn. Res., 2023

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond.
J. Mach. Learn. Res., 2023

Generative Fractional Diffusion Models.
CoRR, 2023

Layer-wise Feedback Propagation.
CoRR, 2023

From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space.
CoRR, 2023

XAI-based Comparison of Input Representations for Audio Event Classification.
CoRR, 2023

Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models.
Proceedings of the Medical Image Computing and Computer Assisted Intervention - MICCAI 2023, 2023

Human-Centered Evaluation of XAI Methods.
Proceedings of the IEEE International Conference on Data Mining, 2023

Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models.
Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, 2023

Optimizing Explanations by Network Canonization and Hyperparameter Search.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

XAI-based Comparison of Audio Event Classifiers with different Input Representations.
Proceedings of the 20th International Conference on Content-based Multimedia Indexing, 2023

2022
Towards the interpretability of deep learning models for multi-modal neuroimaging: Finding structural changes of the ageing brain.
NeuroImage, 2022

Explain and improve: LRP-inference fine-tuning for image captioning models.
Inf. Fusion, 2022

Finding and removing Clever Hans: Using explanation methods to debug and improve deep models.
Inf. Fusion, 2022

Explaining Machine Learning Models for Clinical Gait Analysis.
ACM Trans. Comput. Healthc., 2022

Explaining machine learning models for age classification in human gait analysis.
CoRR, 2022

Explaining automated gender classification of human gait.
CoRR, 2022

From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation.
CoRR, 2022

But that's not why: Inference adjustment by interactive prototype deselection.
CoRR, 2022

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations.
CoRR, 2022

PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging.
CoRR, 2022

Measurably Stronger Explanation Reliability Via Model Canonization.
Proceedings of the 2022 IEEE International Conference on Image Processing, 2022

Selection of XAI Methods Matters: Evaluation of Feature Attribution Methods for Oculomotoric Biometric Identification.
Proceedings of the 1st Gaze Meets ML Workshop, 2022

Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI.
Proceedings of the Machine Learning and Knowledge Extraction, 2022

2021
Pruning by explaining: A novel criterion for deep neural network pruning.
Pattern Recognit., 2021

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications.
Proc. IEEE, 2021

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy.
CoRR, 2021

2020
Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond.
CoRR, 2020

Understanding Image Captioning Models beyond Visualizing Attention.
CoRR, 2020

Towards Best Practice in Explaining Neural Network Decisions with LRP.
Proceedings of the 2020 International Joint Conference on Neural Networks, 2020

Explanation-Guided Training for Cross-Domain Few-Shot Classification.
Proceedings of the 25th International Conference on Pattern Recognition, 2020

Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution.
Proceedings of the 25th International Conference on Pattern Recognition, 2020

ECQˣ: Explainability-Driven Quantization for Low-Bit and Sparse DNNs.
Proceedings of the xxAI - Beyond Explainable AI, 2020

2019
Layer-Wise Relevance Propagation: An Overview.
Proceedings of the Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 2019

Opening the machine learning black box with Layer-wise Relevance Propagation.
PhD thesis, 2019

iNNvestigate Neural Networks!
J. Mach. Learn. Res., 2019

Analyzing ImageNet with Spectral Relevance Analysis: Towards ImageNet un-Hans'ed.
CoRR, 2019

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning.
CoRR, 2019

On the Understanding and Interpretation of Machine Learning Predictions in Clinical Gait Analysis Using Explainable Artificial Intelligence.
CoRR, 2019

Resolving challenges in deep learning-based analyses of histopathological images using explanation methods.
CoRR, 2019

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn.
CoRR, 2019

2018
What is Unique in Individual Gait Patterns? Understanding and Interpreting Deep Learning in Gait Analysis.
CoRR, 2018

Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals.
CoRR, 2018

2017
Evaluating the Visualization of What a Deep Neural Network Has Learned.
IEEE Trans. Neural Networks Learn. Syst., 2017

Explaining nonlinear classification decisions with deep Taylor decomposition.
Pattern Recognit., 2017

Understanding and Comparing Deep Neural Networks for Age and Gender Classification.
Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, 2017

Interpretable human action recognition in compressed domain.
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing, 2017

2016
The LRP Toolbox for Artificial Neural Networks.
J. Mach. Learn. Res., 2016

Interpretable Deep Neural Networks for Single-Trial EEG Classification.
CoRR, 2016

Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation.
CoRR, 2016

Controlling explanatory heatmap resolution and semantics via decomposition depth.
Proceedings of the 2016 IEEE International Conference on Image Processing, 2016

Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers.
Proceedings of the Artificial Neural Networks and Machine Learning - ICANN 2016, 2016

Analyzing Classifiers: Fisher Vectors and Deep Neural Networks.
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016

2014
Detecting Behavioral and Structural Anomalies in MediaCloud Applications.
CoRR, 2014

