Nicolas Bougie

Orcid: 0000-0001-9856-0038

According to our database, Nicolas Bougie authored at least 13 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Interpretable Imitation Learning with Symbolic Rewards.
ACM Trans. Intell. Syst. Technol., February, 2024

2022
Hierarchical learning from human preferences and curiosity.
Appl. Intell., 2022

Local Control is All You Need: Decentralizing and Coordinating Reinforcement Learning for Large-Scale Process Control.
Proceedings of the 61st IEEE Annual Conference of the Society of Instrument and Control Engineers, 2022

2021
Efficient Reinforcement Learning through Improved Cognitive Capabilities.
PhD thesis, 2021

Fast and slow curiosity for high-level exploration in reinforcement learning.
Appl. Intell., 2021

Goal-driven active learning.
Auton. Agents Multi Agent Syst., 2021

2020
Skill-based curiosity for intrinsically motivated reinforcement learning.
Mach. Learn., 2020

Towards Interpretable Reinforcement Learning with State Abstraction Driven by External Knowledge.
IEICE Trans. Inf. Syst., 2020

Intrinsically Motivated Lifelong Exploration in Reinforcement Learning.
Proceedings of the Advances in Artificial Intelligence, 2020

Towards High-Level Intrinsic Exploration in Reinforcement Learning.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020

Exploration via Progress-Driven Intrinsic Rewards.
Proceedings of the Artificial Neural Networks and Machine Learning - ICANN 2020, 2020

2018
Deep reinforcement learning boosted by external knowledge.
Proceedings of the 33rd Annual ACM Symposium on Applied Computing, 2018

Abstracting Reinforcement Learning Agents with Prior Knowledge.
Proceedings of the PRIMA 2018: Principles and Practice of Multi-Agent Systems - 21st International Conference, Tokyo, Japan, October 29, 2018
