Anna Hedström

ORCID: 0009-0007-7431-7923

According to our database, Anna Hedström authored at least 15 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
Capturing Polysemanticity with PRISM: A Multi-Concept Feature Description Framework.
CoRR, June 2025

Evaluating Interpretable Methods via Geometric Alignment of Functional Distortions.
Trans. Mach. Learn. Res., 2025

Evaluate with the Inverse: Efficient Approximation of Latent Explanation Quality Distribution.
Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25), 2025

2024
Benchmarking XAI Explanations with Human-Aligned Evaluations.
CoRR, 2024

Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond.
CoRR, 2024

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test.
CoRR, 2024

A Fresh Look at Sanity Checks for Saliency Maps.
Proceedings of the World Conference on Explainable Artificial Intelligence (xAI 2024), 2024

CoSy: Evaluating Textual Explanations of Neurons.
Advances in Neural Information Processing Systems 38 (NeurIPS 2024), 2024

Explainable AI in grassland monitoring: Enhancing model performance and domain adaptability.
Proceedings of the 44th GIL-Jahrestagung: Informatik in der Land-, Forst- und Ernährungswirtschaft (Informatics in Agriculture, Forestry, and the Food Industry), 2024

From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation.
Proceedings of the Computer Vision - ECCV 2024 Workshops, 2024

2023
The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus.
Trans. Mach. Learn. Res., 2023

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond.
J. Mach. Learn. Res., 2023

Finding the right XAI method - A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science.
CoRR, 2023

2022
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations.
CoRR, 2022

NoiseGrad - Enhancing Explanations by Introducing Stochasticity to Model Weights.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022
