Wolfgang Stammer

Orcid: 0000-0003-3793-8046

According to our database, Wolfgang Stammer authored at least 20 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Pix2Code: Learning to Compose Neural Visual Concepts as Programs.
CoRR, 2024

Where is the Truth? The Risk of Getting Confounded in a Continual World.
CoRR, 2024

Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents.
CoRR, 2024

2023
Explanatory Interactive Machine Learning.
Bus. Inf. Syst. Eng., December, 2023

A typology for exploring the mitigation of shortcut behaviour.
Nat. Mach. Intell., March, 2023

Leveraging explanations in interactive machine learning: An overview.
Frontiers Artif. Intell., February, 2023

Learning by Self-Explaining.
CoRR, 2023

Learning to Intervene on Concept Bottlenecks.
CoRR, 2023

V-LoL: A Diagnostic Dataset for Visual Logical Learning.
CoRR, 2023

Boosting Object Representation Learning via Motion and Object Continuity.
Proceedings of the Machine Learning and Knowledge Discovery in Databases: Research Track, 2023

Revision Transformers: Instructing Language Models to Change Their Values.
Proceedings of the 26th European Conference on Artificial Intelligence (ECAI 2023), Kraków, Poland, 2023

2022
Revision Transformers: Getting RiT of No-Nos.
CoRR, 2022

A Typology to Explore and Guide Explanatory Interactive Machine Learning.
CoRR, 2022

Neural-Probabilistic Answer Set Programming.
Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning, 2022

Interactive Disentanglement: Learning Concepts by Interacting with their Prototype Representations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

2021
SLASH: Embracing Probabilistic Circuits into Neural Answer Set Programming.
CoRR, 2021

Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting With Their Explanations.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021

Right for Better Reasons: Training Differentiable Models by Constraining their Influence Functions.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Making deep neural networks right for the right scientific reasons by interacting with their explanations.
Nat. Mach. Intell., 2020

Right for the Wrong Scientific Reasons: Revising Deep Networks by Interacting with their Explanations.
CoRR, 2020
