Carlo D'Eramo
ORCID: 0000-0003-2712-118X
According to our database, Carlo D'Eramo authored at least 44 papers between 2016 and 2025.
Bibliography
2025
Learning to Explore in Diverse Reward Settings via Temporal-Difference-Error Maximization.
CoRR, June, 2025
Bridging the Performance Gap Between Target-Free and Target-Based Reinforcement Learning With Iterated Q-Learning.
CoRR, June, 2025
Dynamic Obstacle Avoidance with Bounded Rationality Adversarial Reinforcement Learning.
CoRR, March, 2025
Eau De Q-Network: Adaptive Distillation of Neural Networks in Deep Reinforcement Learning.
CoRR, March, 2025
CoRR, February, 2025
Trans. Mach. Learn. Res., 2025
Iterated Q-Network: Beyond One-Step Bellman Updates in Deep Reinforcement Learning.
Trans. Mach. Learn. Res., 2025
Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, 2025
Proceedings of the Thirteenth International Conference on Learning Representations, 2025
2024
IEEE Trans. Pattern Anal. Mach. Intell., November, 2024
J. Artif. Intell. Res., 2024
Proceedings of the KI 2024: Advances in Artificial Intelligence, 2024
Proceedings of the IEEE International Conference on Robotics and Automation, 2024
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024
2023
Composable energy policies for reactive motion generation and reinforcement learning.
Int. J. Robotics Res., September, 2023
CoRR, 2023
2022
Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning.
Algorithms, 2022
Proceedings of the International Joint Conference on Neural Networks, 2022
Proceedings of the International Conference on Machine Learning, 2022
Proceedings of the Tenth International Conference on Learning Representations, 2022
2021
A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning.
J. Mach. Learn. Res., 2021
Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with Deep Reinforcement Learning.
Proceedings of the IEEE International Conference on Robotics and Automation, 2021
Proceedings of the 38th International Conference on Machine Learning, 2021
2020
Frontiers Robotics AI, 2020
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020
Proceedings of the 8th International Conference on Learning Representations, 2020
2019
On the exploitation of uncertainty to improve Bellman updates and exploration in Reinforcement Learning.
PhD thesis, 2019
Proceedings of the International Joint Conference on Neural Networks, 2019
Proceedings of the International Joint Conference on Neural Networks, 2019
2017
Exploiting structure and uncertainty of Bellman updates in Markov decision processes.
Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence, 2017
Proceedings of the 34th International Conference on Machine Learning, 2017
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017
2016
Proceedings of the 33rd International Conference on Machine Learning, 2016