Sebastian Lapuschkin
ORCID: 0000-0002-0762-7258
Affiliations:
- Fraunhofer Heinrich Hertz Institute, Berlin, Germany
According to our database, Sebastian Lapuschkin authored at least 71 papers between 2014 and 2024.
Online presence:
- zbmath.org
- orcid.org
- d-nb.info
- dl.acm.org
Bibliography (as listed on csauthors.net)
2024
Pattern Recognit., 2024
AudioMNIST: Exploring Explainable Artificial Intelligence for audio analysis on a simple benchmark.
J. Frankl. Inst., 2024
Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification.
CoRR, 2024
Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression.
CoRR, 2024
PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits.
CoRR, 2024
Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test.
CoRR, 2024
Proceedings of the Forty-first International Conference on Machine Learning, 2024
From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024
2023
From attribution maps to human-understandable explanations through Concept Relevance Propagation.
Nat. Mac. Intell., September, 2023
Inf. Fusion, April, 2023
The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus.
Trans. Mach. Learn. Res., 2023
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond.
J. Mach. Learn. Res., 2023
Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations.
CoRR, 2023
From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space.
CoRR, 2023
CoRR, 2023
Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models.
Proceedings of the Medical Image Computing and Computer Assisted Intervention - MICCAI 2023, 2023
Proceedings of the IEEE International Conference on Data Mining, 2023
Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models.
Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, 2023
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023
Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023
Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023
XAI-based Comparison of Audio Event Classifiers with different Input Representations.
Proceedings of the 20th International Conference on Content-based Multimedia Indexing, 2023
2022
Towards the interpretability of deep learning models for multi-modal neuroimaging: Finding structural changes of the ageing brain.
NeuroImage, 2022
Inf. Fusion, 2022
Finding and removing Clever Hans: Using explanation methods to debug and improve deep models.
Inf. Fusion, 2022
ACM Trans. Comput. Heal., 2022
CoRR, 2022
From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation.
CoRR, 2022
CoRR, 2022
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations.
CoRR, 2022
CoRR, 2022
Proceedings of the 2022 IEEE International Conference on Image Processing, 2022
Selection of XAI Methods Matters: Evaluation of Feature Attribution Methods for Oculomotoric Biometric Identification.
Proceedings of The 1st Gaze Meets ML workshop, 2022
Proceedings of the Machine Learning and Knowledge Extraction, 2022
2021
Pattern Recognit., 2021
Proc. IEEE, 2021
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy.
CoRR, 2021
2020
CoRR, 2020
Proceedings of the 2020 International Joint Conference on Neural Networks, 2020
Proceedings of the 25th International Conference on Pattern Recognition, 2020
Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution.
Proceedings of the 25th International Conference on Pattern Recognition, 2020
Proceedings of the xxAI - Beyond Explainable AI, 2020
2019
Proceedings of Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 2019
PhD thesis, 2019
CoRR, 2019
On the Understanding and Interpretation of Machine Learning Predictions in Clinical Gait Analysis Using Explainable Artificial Intelligence.
CoRR, 2019
Resolving challenges in deep learning-based analyses of histopathological images using explanation methods.
CoRR, 2019
CoRR, 2019
2018
What is Unique in Individual Gait Patterns? Understanding and Interpreting Deep Learning in Gait Analysis.
CoRR, 2018
Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals.
CoRR, 2018
2017
IEEE Trans. Neural Networks Learn. Syst., 2017
Pattern Recognit., 2017
Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, 2017
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing, 2017
2016
Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation.
CoRR, 2016
Proceedings of the 2016 IEEE International Conference on Image Processing, 2016
Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers.
Proceedings of the Artificial Neural Networks and Machine Learning - ICANN 2016, 2016
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016
2014