Carlo D'Eramo

Orcid: 0000-0003-2712-118X

According to our database, Carlo D'Eramo authored at least 31 papers between 2016 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Iterated Q-Network: Beyond the One-Step Bellman Operator.
CoRR, 2024

Parameterized Projected Bellman Operator.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Composable energy policies for reactive motion generation and reinforcement learning.
Int. J. Robotics Res., September, 2023

Contact Energy Based Hindsight Experience Prioritization.
CoRR, 2023

Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts.
CoRR, 2023

Domain Randomization via Entropy Maximization.
CoRR, 2023

Robust Adversarial Reinforcement Learning via Bounded Rationality Curricula.
CoRR, 2023

On the Benefit of Optimal Transport for Curriculum Reinforcement Learning.
CoRR, 2023

Monte-Carlo tree search with uncertainty propagation via optimal transport.
CoRR, 2023

2022
A Unified Perspective on Value Backup and Exploration in Monte-Carlo Tree Search.
CoRR, 2022

Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning.
Algorithms, 2022

Prioritized Sampling with Intrinsic Motivation in Multi-Task Reinforcement Learning.
Proceedings of the International Joint Conference on Neural Networks, 2022

Curriculum Reinforcement Learning via Constrained Optimal Transport.
Proceedings of the International Conference on Machine Learning, 2022

Boosted Curriculum Reinforcement Learning.
Proceedings of the Tenth International Conference on Learning Representations, 2022

2021
A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning.
J. Mach. Learn. Res., 2021

MushroomRL: Simplifying Reinforcement Learning Research.
J. Mach. Learn. Res., 2021

Gaussian Approximation for Bias Reduction in Q-Learning.
J. Mach. Learn. Res., 2021

Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with Deep Reinforcement Learning.
Proceedings of the IEEE International Conference on Robotics and Automation, 2021

Convex Regularization in Monte-Carlo Tree Search.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
Multi-Channel Interactive Reinforcement Learning for Sequential Tasks.
Frontiers Robotics AI, 2020

Deep Reinforcement Learning with Weighted Q-Learning.
CoRR, 2020

Self-Paced Deep Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Generalized Mean Estimation in Monte-Carlo Tree Search.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020

Sharing Knowledge in Multi-Task Deep Reinforcement Learning.
Proceedings of the 8th International Conference on Learning Representations, 2020

2019
On the exploitation of uncertainty to improve Bellman updates and exploration in Reinforcement Learning.
PhD thesis, 2019

Exploration Driven by an Optimistic Bellman Equation.
Proceedings of the International Joint Conference on Neural Networks, 2019

Exploiting Action-Value Uncertainty to Drive Exploration in Reinforcement Learning.
Proceedings of the International Joint Conference on Neural Networks, 2019

2017
Exploiting structure and uncertainty of Bellman updates in Markov decision processes.
Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence, 2017

Boosted Fitted Q-Iteration.
Proceedings of the 34th International Conference on Machine Learning, 2017

Estimating the Maximum Expected Value in Continuous Reinforcement Learning Problems.
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017

2016
Estimating Maximum Expected Value through Gaussian Approximation.
Proceedings of the 33rd International Conference on Machine Learning, 2016
