Daniel Graves

ORCID: 0000-0002-5345-6584

According to our database, Daniel Graves authored at least 26 papers between 2007 and 2022.

Bibliography

2022
Affordance as general value function: a computational model.
Adapt. Behav., 2022

Offline Learning of Counterfactual Predictions for Real-World Robotic Reinforcement Learning.
Proceedings of the 2022 International Conference on Robotics and Automation, 2022

What about Inputting Policy in Value Function: Policy Representation and Policy-Extended Value Function Approximator.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Learning robust driving policies without online exploration.
Proceedings of the IEEE International Conference on Robotics and Automation, 2021

Diverse Auto-Curriculum is Critical for Successful Real-World Multiagent Learning Systems.
Proceedings of the AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, 2021

2020
LISPR: An Options Framework for Policy Reuse with Reinforcement Learning.
CoRR, 2020

Offline Learning of Counterfactual Perception as Prediction for Real-World Robotic Reinforcement Learning.
CoRR, 2020

SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving.
CoRR, 2020

What About Taking Policy as Input of Value Function: Policy-extended Value Function Approximator.
CoRR, 2020

Learning predictive representations in autonomous driving to improve deep reinforcement learning.
CoRR, 2020

Mapless Navigation among Dynamics with Social-safety-awareness: a reinforcement learning approach from 2D laser scans.
Proceedings of the 2020 IEEE International Conference on Robotics and Automation, 2020

Fixed-Horizon Temporal Difference Methods for Stable Reinforcement Learning.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Efficient decorrelation of features using Gramian in Reinforcement Learning.
CoRR, 2019

Performance analysis and optimization for scalable deployment of deep learning models for country-scale settlement mapping on Titan supercomputer.
Concurr. Comput. Pract. Exp., 2019

Importance Resampling for Off-policy Prediction.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Perception as prediction using general value functions in autonomous driving applications.
Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2019

Sequence Learning for Images Recognition in Videos with Differential Neural Networks.
Proceedings of the 18th IEEE International Conference on Cognitive Informatics & Cognitive Computing, 2019

2018
A Survey and Formal Analyses on Sequence Learning Methodologies and Deep Neural Networks.
Proceedings of the 17th IEEE International Conference on Cognitive Informatics & Cognitive Computing, 2018

2014
A Clustering-Based Graph Laplacian Framework for Value Function Approximation in Reinforcement Learning.
IEEE Trans. Cybern., 2014

2012
Clustering with proximity knowledge and relational knowledge.
Pattern Recognit., 2012

2010
Kernel-based fuzzy clustering and fuzzy clustering: A comparative experimental study.
Fuzzy Sets Syst., 2010

Proximity fuzzy clustering and its application to time series clustering and prediction.
Proceedings of the 10th International Conference on Intelligent Systems Design and Applications, 2010

2009
Fuzzy prediction architecture using recurrent neural networks.
Neurocomputing, 2009

Multivariate Segmentation of Time Series with Differential Evolution.
Proceedings of the Joint 2009 International Fuzzy Systems Association World Congress and 2009 European Society of Fuzzy Logic and Technology Conference, 2009

2007
Fuzzy C-Means, Gustafson-Kessel FCM, and Kernel-Based FCM: A Comparative Study.
Proceedings of the Analysis and Design of Intelligent Systems using Soft Computing Techniques, 2007
