Will Dabney

According to our database, Will Dabney authored at least 50 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Disentangling the Causes of Plasticity Loss in Neural Networks.
CoRR, 2024

A Distributional Analogue to the Successor Representation.
CoRR, 2024

Near-Minimax-Optimal Distributional Reinforcement Learning with a Generative Model.
CoRR, 2024

Off-policy Distributional Q(λ): Distributional RL without Importance Sampling.
CoRR, 2024

2023
An Analysis of Quantile Temporal-Difference Learning.
CoRR, 2023

Deep Reinforcement Learning with Plasticity Injection.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Understanding Self-Predictive Learning for Reinforcement Learning.
Proceedings of the International Conference on Machine Learning, 2023

The Statistical Benefits of Quantile Temporal-Difference Learning for Value Estimation.
Proceedings of the International Conference on Machine Learning, 2023

Quantile Credit Assignment.
Proceedings of the International Conference on Machine Learning, 2023

Understanding Plasticity in Neural Networks.
Proceedings of the International Conference on Machine Learning, 2023

Bootstrapped Representations in Reinforcement Learning.
Proceedings of the International Conference on Machine Learning, 2023

Representations and Exploration for Deep Reinforcement Learning using Singular Value Decomposition.
Proceedings of the International Conference on Machine Learning, 2023

Settling the Reward Hypothesis.
Proceedings of the International Conference on Machine Learning, 2023

2022
Learning Dynamics and Generalization in Reinforcement Learning.
CoRR, 2022

The Nature of Temporal Difference Errors in Multi-step Distributional Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

On the Expressivity of Markov Reward (Extended Abstract).
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022

Generalised Policy Improvement with Geometric Policy Composition.
Proceedings of the International Conference on Machine Learning, 2022

Learning Dynamics and Generalization in Deep Reinforcement Learning.
Proceedings of the International Conference on Machine Learning, 2022

Understanding and Preventing Capacity Loss in Reinforcement Learning.
Proceedings of the Tenth International Conference on Learning Representations, 2022

2021
The Difficulty of Passive Learning in Deep Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

On the Expressivity of Markov Reward.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Counterfactual Credit Assignment in Model-Free Reinforcement Learning.
Proceedings of the 38th International Conference on Machine Learning, 2021

Revisiting Peng's Q(λ) for Modern Reinforcement Learning.
Proceedings of the 38th International Conference on Machine Learning, 2021

Temporally-Extended ε-Greedy Exploration.
Proceedings of the 9th International Conference on Learning Representations, 2021

On the Effect of Auxiliary Tasks on Representation Dynamics.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

The Value-Improvement Path: Towards Better Representations for Reinforcement Learning.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
A distributional code for value in dopamine-based reinforcement learning.
Nature, 2020

Counterfactual Credit Assignment in Model-Free Reinforcement Learning.
CoRR, 2020

Deep Reinforcement Learning and its Neuroscientific Implications.
CoRR, 2020

Revisiting Fundamentals of Experience Replay.
Proceedings of the 37th International Conference on Machine Learning, 2020

Fast Task Inference with Variational Intrinsic Successor Features.
Proceedings of the 8th International Conference on Learning Representations, 2020

Conditional Importance Sampling for Off-Policy Learning.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

Adaptive Trade-Offs in Off-Policy Learning.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
Adapting Behaviour for Learning Progress.
CoRR, 2019

Hindsight Credit Assignment.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

A Geometric Perspective on Optimal Representations for Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Statistics and Samples in Distributional Reinforcement Learning.
Proceedings of the 36th International Conference on Machine Learning, 2019

Recurrent Experience Replay in Distributed Reinforcement Learning.
Proceedings of the 7th International Conference on Learning Representations, 2019

The Termination Critic.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

2018
Low-pass Recurrent Neural Networks - A memory architecture for longer-term correlation discovery.
CoRR, 2018

Autoregressive Quantile Networks for Generative Modeling.
Proceedings of the 35th International Conference on Machine Learning, 2018

Implicit Quantile Networks for Distributional Reinforcement Learning.
Proceedings of the 35th International Conference on Machine Learning, 2018

The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning.
Proceedings of the 6th International Conference on Learning Representations, 2018

Distributed Distributional Deterministic Policy Gradients.
Proceedings of the 6th International Conference on Learning Representations, 2018

An Analysis of Categorical Distributional Reinforcement Learning.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018

Rainbow: Combining Improvements in Deep Reinforcement Learning.
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018

Distributional Reinforcement Learning With Quantile Regression.
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018

2017
The Cramer Distance as a Solution to Biased Wasserstein Gradients.
CoRR, 2017

Successor Features for Transfer in Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

A Distributional Perspective on Reinforcement Learning.
Proceedings of the 34th International Conference on Machine Learning, 2017
