Marina M.-C. Höhne

ORCID: 0000-0003-3090-6279

According to our database, Marina M.-C. Höhne authored at least 24 papers between 2020 and 2024.


Bibliography

2024
Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test.
CoRR, 2024

Manipulating Feature Visualizations with Gradient Slingshots.
CoRR, 2024

2023
This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation.
Pattern Recognit., April 2023

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond.
J. Mach. Learn. Res., 2023

Explainable AI in Grassland Monitoring: Enhancing Model Performance and Domain Adaptability.
CoRR, 2023

Prototypical Self-Explainable Models Without Re-training.
CoRR, 2023

Flying Adversarial Patches: Manipulating the Behavior of Deep Learning-based Autonomous Multirotors.
CoRR, 2023

Finding the right XAI method - A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science.
CoRR, 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus.
CoRR, 2023

Finding Spurious Correlations with Function-Semantic Contrast Analysis.
Proceedings of the Explainable Artificial Intelligence, 2023

Labeling Neural Representations with Inverse Recognition.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Kidnapping Deep Learning-based Multirotors using Optimized Flying Adversarial Patches.
Proceedings of the International Symposium on Multi-Robot and Multi-Agent Systems, 2023

Mark My Words: Dangers of Watermarked Images in ImageNet.
Proceedings of the Artificial Intelligence. ECAI 2023 International Workshops - XAI³, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, Kraków, Poland, September 30, 2023

2022
DORA: Exploring outlier representations in Deep Neural Networks.
CoRR, 2022

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations.
CoRR, 2022

Visualizing the diversity of representations learned by Bayesian neural networks.
CoRR, 2022

ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Demonstrating the Risk of Imbalanced Datasets in Chest X-Ray Image-Based Diagnostics by Prototypical Relevance Propagation.
Proceedings of the 19th IEEE International Symposium on Biomedical Imaging, 2022

NoiseGrad - Enhancing Explanations by Introducing Stochasticity to Model Weights.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Nachvollziehbare Künstliche Intelligenz: Methoden, Chancen und Risiken [Explainable Artificial Intelligence: Methods, Opportunities, and Risks].
Datenschutz und Datensicherheit, 2021

Self-Supervised Learning for 3D Medical Image Analysis using 3D SimCLR and Monte Carlo Dropout.
CoRR, 2021

This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation.
CoRR, 2021

Explaining Bayesian Neural Networks.
CoRR, 2021

2020
How Much Can I Trust You? - Quantifying Uncertainties in Explaining Neural Networks.
CoRR, 2020
