Maximilian Dreyer
ORCID: 0009-0007-9069-6265
According to our database, Maximilian Dreyer authored at least 17 papers between 2020 and 2025.
Bibliography
2025
Attribution-guided Pruning for Compression, Circuit Discovery, and Targeted Correction in LLMs.
CoRR, June, 2025
From What to How: Attributing CLIP's Latent Components Reveals Unexpected Semantic Reliance.
CoRR, May, 2025
CoRR, January, 2025
Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025
2024
PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits.
Proceedings of the 3rd Explainable AI for Computer Vision (XAI4CV) Workshop, 2024
Explainable Concept Mappings of MRI: Revealing the Mechanisms Underlying Deep Learning-Based Brain Disease Classification.
Proceedings of Explainable Artificial Intelligence, 2024
Proceedings of the Forty-first International Conference on Machine Learning, 2024
Pruning by Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers.
Proceedings of the Computer Vision - ECCV 2024 Workshops, 2024
Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024
Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024
From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024
2023
From attribution maps to human-understandable explanations through Concept Relevance Propagation.
Nat. Mach. Intell., September, 2023
From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space.
CoRR, 2023
Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models.
Proceedings of the Medical Image Computing and Computer Assisted Intervention - MICCAI 2023, 2023
Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023
2022
From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation.
CoRR, 2022
2020
Proceedings of xxAI - Beyond Explainable AI, 2020