Emanuele Marconato

Orcid: 0000-0002-7407-5465

According to our database, Emanuele Marconato authored at least 15 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
When Does Closeness in Distribution Imply Representational Similarity? An Identifiability Perspective.
CoRR, June, 2025

If Concept Bottlenecks are the Question, are Foundation Models the Answer?
CoRR, April, 2025

Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens.
CoRR, February, 2025

All or None: Identifiable Linear Properties of Next-Token Predictors in Language Modeling.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2025

2024
A Benchmark Suite for Systematically Evaluating Reasoning Shortcuts.
Dataset, June, 2024

A Benchmark Suite for Systematically Evaluating Reasoning Shortcuts.
CoRR, 2024

BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts.
Proceedings of the Uncertainty in Artificial Intelligence, 2024

A Neuro-Symbolic Benchmark Suite for Concept Quality and Reasoning Shortcuts.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

2023
Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning.
Entropy, December, 2023

Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Neuro-Symbolic Reasoning Shortcuts: Mitigation Strategies and their Limitations.
Proceedings of the 17th International Workshop on Neural-Symbolic Learning and Reasoning, 2023

Neuro-Symbolic Continual Learning: Knowledge, Reasoning Shortcuts and Concept Rehearsal.
Proceedings of the International Conference on Machine Learning, 2023

2022
GlanceNets: Interpretable, Leak-proof Concept-based Models.
CoRR, 2022

GlanceNets: Interpretable, Leak-proof Concept-based Models.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Catastrophic Forgetting in Continual Concept Bottleneck Models.
Proceedings of the Image Analysis and Processing. ICIAP 2022 Workshops, 2022
