Voot Tangkaratt

According to our database, Voot Tangkaratt authored at least 27 papers between 2013 and 2022.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2022
Discovering diverse solutions in deep reinforcement learning by maximizing state-action-based mutual information.
Neural Networks, 2022

2021
Discovering Diverse Solutions in Deep Reinforcement Learning.
CoRR, 2021

Robust Imitation Learning from Noisy Demonstrations.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

Meta-Model-Based Meta-Policy Optimization.
Proceedings of the Asian Conference on Machine Learning, 2021

2020
Active deep Q-learning with demonstration.
Mach. Learn., 2020

Simultaneous Planning for Item Picking and Placing by Deep Reinforcement Learning.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2020

Variational Imitation Learning with Diverse-quality Demonstrations.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
TD-regularized actor-critic methods.
Mach. Learn., 2019

VILD: Variational Imitation Learning with Diverse-quality Demonstrations.
CoRR, 2019

Imitation Learning from Imperfect Demonstration.
Proceedings of the 36th International Conference on Machine Learning, 2019

Hierarchical Reinforcement Learning via Advantage-Weighted Information Maximization.
Proceedings of the 7th International Conference on Learning Representations, 2019

2018
Sufficient Dimension Reduction via Direct Estimation of the Gradients of Logarithmic Conditional Densities.
Neural Comput., 2018

Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam.
Proceedings of the 35th International Conference on Machine Learning, 2018

Guide Actor-Critic for Continuous Control.
Proceedings of the 6th International Conference on Learning Representations, 2018

2017
Direct Estimation of the Derivative of Quadratic Mutual Information with Application in Supervised Dimension Reduction.
Neural Comput., 2017

Vprop: Variational Inference using RMSprop.
CoRR, 2017

Variational Adaptive-Newton Method for Explorative Learning.
CoRR, 2017

Policy Search with High-Dimensional Context Variables.
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017

2016
Trial and Error: Using Previous Experiences as Simulation Models in Humanoid Motor Learning.
IEEE Robotics Autom. Mag., 2016

Model-based reinforcement learning with dimension reduction.
Neural Networks, 2016

2015
Conditional Density Estimation with Dimensionality Reduction via Squared-Loss Conditional Entropy Minimization.
Neural Comput., 2015

Direct conditional probability density estimation with sparse feature selection.
Mach. Learn., 2015

Sufficient Dimension Reduction via Direct Estimation of the Gradients of Logarithmic Conditional Densities.
Proceedings of The 7th Asian Conference on Machine Learning, 2015

2014
Model-based policy gradients with parameter-based exploration by least-squares conditional density estimation.
Neural Networks, 2014

Efficient Reuse of Previous Experiences to Improve Policies in Real Environment.
CoRR, 2014

Efficient reuse of previous experiences in humanoid motor learning.
Proceedings of the 14th IEEE-RAS International Conference on Humanoid Robots, 2014

2013
Efficient Sample Reuse in Policy Gradients with Parameter-Based Exploration.
Neural Comput., 2013
