Christoph Molnar

ORCID: 0000-0003-2331-868X

According to our database, Christoph Molnar authored at least 17 papers between 2018 and 2023.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2023
Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process.
Proceedings of the Explainable Artificial Intelligence, 2023

2022
Model-agnostic interpretable machine learning.
PhD thesis, 2022

Scientific Inference With Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena.
CoRR, 2022

Marginal Effects for Non-Linear Prediction Functions.
CoRR, 2022

2021
Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process.
CoRR, 2021

2020
Pitfalls to Avoid when Interpreting Machine Learning Models.
CoRR, 2020

Model-agnostic Feature Importance and Effects with Dependent Features - A Conditional Subgroup Approach.
CoRR, 2020

Multi-Objective Counterfactual Explanations.
Proceedings of the Parallel Problem Solving from Nature - PPSN XVI, 2020

Interpretable Machine Learning - A Brief History, State-of-the-Art and Challenges.
Proceedings of the ECML PKDD 2020 Workshops, 2020

Relative Feature Importance.
Proceedings of the 25th International Conference on Pattern Recognition, 2020

General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models.
Proceedings of the xxAI - Beyond Explainable AI, 2020

Explainable AI Methods - A Brief Overview.
Proceedings of the xxAI - Beyond Explainable AI, 2020

2019
Quantifying Interpretability of Arbitrary Machine Learning Models Through Functional Decomposition.
CoRR, 2019

Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2019

Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2019

2018
iml: An R package for Interpretable Machine Learning.
J. Open Source Softw., 2018

Visualizing the Feature Importance for Black Box Models.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2018
