Takamitsu Matsubara

ORCID: 0000-0003-3545-4814

According to our database, Takamitsu Matsubara authored at least 137 papers between 2005 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Leveraging Demonstrator-Perceived Precision for Safe Interactive Imitation Learning of Clearance-Limited Tasks.
IEEE Robotics Autom. Lett., 2024

Incipient Slip Detection by Vibration Injection Into Soft Sensor.
IEEE Robotics Autom. Lett., 2024

2023
Reinforcement Learning of Action and Query Policies With LTL Instructions Under Uncertain Event Detector.
IEEE Robotics Autom. Lett., November, 2023

Cautious policy programming: exploiting KL regularization for monotonic policy improvement in reinforcement learning.
Mach. Learn., November, 2023

AdaTerm: Adaptive T-distribution estimated robust moments for Noise-Robust stochastic gradient optimization.
Neurocomputing, November, 2023

Reinforcement Learning With Energy-Exchange Dynamics for Spring-Loaded Biped Robot Walking.
IEEE Robotics Autom. Lett., October, 2023

Learning to Shape by Grinding: Cutting-Surface-Aware Model-Based Reinforcement Learning.
IEEE Robotics Autom. Lett., October, 2023

Disturbance Injection Under Partial Automation: Robust Imitation Learning for Long-Horizon Tasks.
IEEE Robotics Autom. Lett., May, 2023

Multi-step motion learning by combining learning-from-demonstration and policy-search.
Adv. Robotics, May, 2023

Bayesian Disturbance Injection: Robust imitation learning of flexible policies for robot manipulation.
Neural Networks, January, 2023

Deep reinforcement learning of event-triggered communication and consensus-based control for distributed cooperative transport.
Robotics Auton. Syst., 2023

Cyclic policy distillation: Sample-efficient sim-to-real reinforcement learning with domain randomization.
Robotics Auton. Syst., 2023

Generalized Munchausen Reinforcement Learning using Tsallis KL Divergence.
CoRR, 2023

Jamming Gripper-Inspired Soft Jig for Perceptive Parts Fixing.
IEEE Access, 2023

Policy Optimization for Waste Crane Automation From Human Preferences.
IEEE Access, 2023

Domains as Objectives: Multi-Domain Reinforcement Learning with Convex-Coverage Set Learning for Domain Uncertainty Awareness.
IROS, 2023

Deep Segmented DMP Networks for Learning Discontinuous Motions.
Proceedings of the 19th IEEE International Conference on Automation Science and Engineering, 2023

2022
Goal-aware generative adversarial imitation learning from imperfect demonstration for robotic cloth manipulation.
Robotics Auton. Syst., 2022

Randomized-to-Canonical Model Predictive Control for Real-World Visual Robotic Manipulation.
IEEE Robotics Autom. Lett., 2022

Physically Consistent Preferential Bayesian Optimization for Food Arrangement.
IEEE Robotics Autom. Lett., 2022

Uncertainty-Aware Manipulation Planning Using Gravity and Environment Geometry.
IEEE Robotics Autom. Lett., 2022

Sample-efficient gear-ratio optimization for biomechanical energy harvester.
Int. J. Intell. Robotics Appl., 2022

Learning Locally, Communicating Globally: Reinforcement Learning of Multi-robot Task Allocation for Cooperative Transport.
CoRR, 2022

Enforcing KL Regularization in General Tsallis Entropy Reinforcement Learning via Advantage Learning.
CoRR, 2022

q-Munchausen Reinforcement Learning.
CoRR, 2022

Alleviating parameter-tuning burden in reinforcement learning for large-scale process control.
Comput. Chem. Eng., 2022

User intent estimation during robot learning using physical human robot interaction primitives.
Auton. Robots, 2022

Task-Relevant Encoding of Domain Knowledge in Dynamics Modeling: Application to Furnace Forecasting From Video.
IEEE Access, 2022

Variationally Autoencoded Dynamic Policy Programming for Robotic Cloth Manipulation Planning based on Raw Images.
Proceedings of the IEEE/SICE International Symposium on System Integration, 2022

Disturbance Suppression in Feedback Error Learning Control.
Proceedings of the 61st IEEE Annual Conference of the Society of Instrument and Control Engineers, 2022

Deep Koopman with Control: Spectral Analysis of Soft Robot Dynamics.
Proceedings of the 61st IEEE Annual Conference of the Society of Instrument and Control Engineers, 2022

Disturbance-injected Robust Imitation Learning with Task Achievement.
Proceedings of the 2022 International Conference on Robotics and Automation, 2022

Gaussian Process Self-triggered Policy Search in Weakly Observable Environments.
Proceedings of the 2022 International Conference on Robotics and Automation, 2022

TDE2-MBRL: Energy-exchange Dynamics Learning with Task Decomposition for Spring-loaded Bipedal Robot Locomotion.
Proceedings of the 21st IEEE-RAS International Conference on Humanoid Robots, 2022

2021
Uncertainty-Aware Contact-Safe Model-Based Reinforcement Learning.
IEEE Robotics Autom. Lett., 2021

Tactile Perception Based on Injected Vibration in Soft Sensor.
IEEE Robotics Autom. Lett., 2021

Binarized P-Network: Deep Reinforcement Learning of Robot Control from Raw Images on FPGA.
IEEE Robotics Autom. Lett., 2021

Variational policy search using sparse Gaussian process priors for learning multimodal optimal actions.
Neural Networks, 2021

Autonomous boat driving system using sample-efficient model predictive control-based reinforcement learning approach.
J. Field Robotics, 2021

Design of physical user-robot interactions for model identification of soft actuators on exoskeleton robots.
Int. J. Robotics Res., 2021

Cautious Policy Programming: Exploiting KL Regularization in Monotonic Policy Improvement for Reinforcement Learning.
CoRR, 2021

Innovative technologies for infrastructure construction and maintenance through collaborative robots based on an open design approach.
Adv. Robotics, 2021

Learning Robotic Contact Juggling.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2021

Deep reinforcement learning of event-triggered communication and control for multi-agent cooperative transport.
Proceedings of the IEEE International Conference on Robotics and Automation, 2021

Bayesian Disturbance Injection: Robust Imitation Learning of Flexible Policies.
Proceedings of the IEEE International Conference on Robotics and Automation, 2021

Cautious Actor-Critic.
Proceedings of the Asian Conference on Machine Learning, 2021

Geometric Value Iteration: Dynamic Error-Aware KL Regularization for Reinforcement Learning.
Proceedings of the Asian Conference on Machine Learning, 2021

2020
Robust shape estimation with false-positive contact detection.
Robotics Auton. Syst., 2020

Quaternion-Based Trajectory Optimization of Human Postures for Inducing Target Muscle Activation Patterns.
IEEE Robotics Autom. Lett., 2020

Bayesian Policy Optimization for Waste Crane With Garbage Inhomogeneity.
IEEE Robotics Autom. Lett., 2020

Exploiting Visual-Outer Shape for Tactile-Inner Shape Estimation of Objects Covered with Soft Materials.
IEEE Robotics Autom. Lett., 2020

Learning Force Control for Contact-Rich Manipulation Tasks With Rigid Position-Controlled Robots.
IEEE Robotics Autom. Lett., 2020

Ensuring Monotonic Policy Improvement in Entropy-regularized Value-based Reinforcement Learning.
CoRR, 2020

Learning Contact-Rich Manipulation Tasks with Rigid Position-Controlled Robots: Learning to Force Control.
CoRR, 2020

Probabilistic active filtering with gaussian processes for occluded object search in clutter.
Appl. Intell., 2020

Learning Food-arrangement Policies from Raw Images with Generative Adversarial Imitation Learning.
Proceedings of the 17th International Conference on Ubiquitous Robots, 2020

Combining Model Predictive Path Integral with Kalman Variational Auto-encoder for Robot Control from Raw Images.
Proceedings of the 2020 IEEE/SICE International Symposium on System Integration, 2020

Learning Soft Robotic Assembly Strategies from Successful and Failed Demonstrations.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2020

Dynamic Actor-Advisor Programming for Scalable Safe Reinforcement Learning.
Proceedings of the 2020 IEEE International Conference on Robotics and Automation, 2020

Sample-and-computation-efficient Probabilistic Model Predictive Control with Random Features.
Proceedings of the 2020 IEEE International Conference on Robotics and Automation, 2020

Contact-based in-hand pose estimation using Bayesian state estimation and particle filtering.
Proceedings of the 2020 IEEE International Conference on Robotics and Automation, 2020

2019
Deep reinforcement learning with smooth policy update: Application to robotic cloth manipulation.
Robotics Auton. Syst., 2019

Learning from demonstration for locally assistive mobility aids.
Int. J. Intell. Robotics Appl., 2019

Reinforcement Learning Ship Autopilot: Sample-efficient and Model Predictive Control-based Approach.
CoRR, 2019

Environment-adaptive interaction primitives through visual context for human-robot motor skill learning.
Auton. Robots, 2019

Special issue on artificial intelligence and machine learning for robotic manipulation.
Adv. Robotics, 2019

Reinforcement Learning Boat Autopilot: A Sample-efficient and Model Predictive Control based Approach.
Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2019

Multimodal Policy Search using Overlapping Mixtures of Sparse Gaussian Process Prior.
Proceedings of the International Conference on Robotics and Automation, 2019

Probabilistic Active Filtering for Object Search in Clutter.
Proceedings of the International Conference on Robotics and Automation, 2019

Exploiting Human and Robot Muscle Synergies for Human-in-the-loop Optimization of EMG-based Assistive Strategies.
Proceedings of the International Conference on Robotics and Automation, 2019

Generative Adversarial Imitation Learning with Deep P-Network for Robotic Cloth Manipulation.
Proceedings of the 19th IEEE-RAS International Conference on Humanoid Robots, 2019

Learning Deep Dynamical Models of a Waste Incineration Plant from In-furnace Images and Process Data.
Proceedings of the 15th IEEE International Conference on Automation Science and Engineering, 2019

2018
Folding Behavior Acquisition of A Shirt Placed on the Chest of a Dual-Arm Robot.
Proceedings of the IEEE International Conference on Information and Automation, 2018

Learning Mobility Aid Assistance via Decoupled Observation Models.
Proceedings of the 15th International Conference on Control, 2018

Policy Transfer from Simulations to Real World by Transfer Component Analysis.
Proceedings of the 14th IEEE International Conference on Automation Science and Engineering, 2018

Probabilistic Pose Estimation of Deformable Linear Objects.
Proceedings of the 14th IEEE International Conference on Automation Science and Engineering, 2018

Factorial Kernel Dynamic Policy Programming for Vinyl Acetate Monomer Plant Model Control.
Proceedings of the 14th IEEE International Conference on Automation Science and Engineering, 2018

Bayesian Optimisation of Exoskeleton Design Parameters.
Proceedings of the 7th IEEE International Conference on Biomedical Robotics and Biomechatronics, 2018

Biomechanical Energy Harvester with Continuously Variable Transmission: Prototyping and Preliminary Evaluation.
Proceedings of the 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 2018

2017
Active tactile exploration with uncertainty and travel cost for fast shape estimation of unknown objects.
Robotics Auton. Syst., 2017

Learning assistive strategies for exoskeleton robots from user-robot physical interaction.
Pattern Recognit. Lett., 2017

Kernel dynamic policy programming: Applicable reinforcement learning to robot systems with high dimensional states.
Neural Networks, 2017

Generation of comfortable lifting motion for a human transfer assistant robot.
Int. J. Intell. Robotics Appl., 2017

Pneumatic artificial muscle-driven robot control using local update reinforcement learning.
Adv. Robotics, 2017

Kullback Leibler control approach to rubber band manipulation.
Proceedings of the IEEE/SICE International Symposium on System Integration, 2017

Learning discriminative intention predictors for sit-to-stand assistance.
Proceedings of the IEEE/SICE International Symposium on System Integration, 2017

Deep dynamic policy programming for robot control with raw images.
Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017

User-robot collaborative excitation for PAM model identification in exoskeleton robots.
Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017

Local driving assistance from demonstration for mobility aids.
Proceedings of the 2017 IEEE International Conference on Robotics and Automation, 2017

Learning task-parametrized assistive strategies for exoskeleton robots by multi-task reinforcement learning.
Proceedings of the 2017 IEEE International Conference on Robotics and Automation, 2017

Hanging work of T-shirt in consideration of deformability and stretchability.
Proceedings of the IEEE International Conference on Information and Automation, 2017

Model-based reinforcement learning approach for deformable linear object manipulation.
Proceedings of the 13th IEEE Conference on Automation Science and Engineering, 2017

2016
Input-Output Manifold Learning with State Space Models.
IEICE Trans. Fundam. Electron. Commun. Comput. Sci., 2016

An approach to realistic physical simulation of digitally captured deformable linear objects.
Proceedings of the 2016 IEEE International Conference on Simulation, 2016

Learning assistive strategies from a few user-robot interactions: Model-based reinforcement learning approach.
Proceedings of the 2016 IEEE International Conference on Robotics and Automation, 2016

Environment-adaptive interaction primitives for human-robot motor skill learning.
Proceedings of the 16th IEEE-RAS International Conference on Humanoid Robots, 2016

Kernel dynamic policy programming: Practical reinforcement learning for high-dimensional robots.
Proceedings of the 16th IEEE-RAS International Conference on Humanoid Robots, 2016

Latent Kullback-Leibler control for dynamic imitation learning of whole-body behaviors in humanoid robots.
Proceedings of the 16th IEEE-RAS International Conference on Humanoid Robots, 2016

Data-efficient human training of a care motion controller for human transfer assistant robots using Bayesian optimization.
Proceedings of the 6th IEEE International Conference on Biomedical Robotics and Biomechatronics, 2016

Active touch point selection with travel cost in tactile exploration for fast shape estimation of unknown objects.
Proceedings of the IEEE International Conference on Advanced Intelligent Mechatronics, 2016

2015
Dynamic Linear Bellman Combination of Optimal Policies for Solving New Tasks.
IEICE Trans. Fundam. Electron. Commun. Comput. Sci., 2015

Spatiotemporal synchronization of biped walking patterns with multiple external inputs by style-phase adaptation.
Biol. Cybern., 2015

Sequential intention estimation of a mobility aid user for intelligent navigational assistance.
Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication, 2015

Reinforcement learning of shared control for dexterous telemanipulation: Application to a page turning skill.
Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication, 2015

Local Update Dynamic Policy Programming in reinforcement learning of pneumatic artificial muscle-driven humanoid hand control.
Proceedings of the 15th IEEE-RAS International Conference on Humanoid Robots, 2015

2014
Latent Kullback Leibler Control for Continuous-State Systems using Probabilistic Graphical Models.
Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence, 2014

Real-time estimation of Human-Cloth topological relationship using depth sensor for robotic clothing assistance.
Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, 2014

Object manifold learning with action features for active tactile object recognition.
Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014

An optimal control approach for exploratory actions in active tactile object recognition.
Proceedings of the 14th IEEE-RAS International Conference on Humanoid Robots, 2014

Style-phase adaptation of human and humanoid biped walking patterns in real systems.
Proceedings of the 14th IEEE-RAS International Conference on Humanoid Robots, 2014

Task-adaptive inertial parameter estimation of rigid-body dynamics with modeling error for model-based control using covariate shift adaptation.
Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 2014

2013
Bilinear Modeling of EMG Signals to Extract User-Independent Features for Multiuser Myoelectric Interface.
IEEE Trans. Biomed. Eng., 2013

Reinforcement learning of a motor skill for wearing a T-shirt using topology coordinates.
Adv. Robotics, 2013

Estimation of Human Cloth Topological Relationship using Depth Sensor for Robotic Clothing Assistance.
Proceedings of the Advances In Robotics 2013, 2013

2012
Real-time stylistic prediction for whole-body human motions.
Neural Networks, 2012

Adaptive choreography for user's preferences on personal robots.
Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication, 2012

Full-body exoskeleton robot control for walking assistance by style-phase adaptive pattern generation.
Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012

Spatio-temporal synchronization of periodic movements by style-phase adaptation: Application to biped walking.
Proceedings of the IEEE International Conference on Robotics and Automation, 2012

2011
Learning parametric dynamic movement primitives from multiple demonstrations.
Neural Networks, 2011

Learning motor skills with non-rigid materials by reinforcement learning.
Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, 2011

Learning and adaptation of a Stylistic Myoelectric Interface: EMG-based robotic control with individual user differences.
Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, 2011

Learning Parametric Inverse Dynamics Models from multiple conditions for fast adaptive computed torque control.
Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, 2011

XoR: Hybrid drive exoskeleton robot that can balance.
Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011

Reinforcement learning of clothing assistance with a dual-arm robot.
Proceedings of the 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2011), 2011

An optimal control approach for hybrid actuator system.
Proceedings of the 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2011), 2011

2010
Adaptive Step-size Policy Gradients with Average Reward Metric.
Proceedings of the 2nd Asian Conference on Machine Learning, 2010

Learning Stylistic Dynamic Movement Primitives from multiple demonstrations.
Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010

Optimal Feedback Control for anthropomorphic manipulators.
Proceedings of the IEEE International Conference on Robotics and Automation, 2010

Learning Basis Representations of Inverse Dynamics Models for Real-Time Adaptive Control.
Proceedings of the Neural Information Processing. Models and Applications, 2010

2008
Learning CPG-based Biped Locomotion with a Policy Gradient Method: Application to a Humanoid Robot.
Int. J. Robotics Res., 2008

Learning to Acquire Whole-Body Humanoid Center of Mass Movements to Achieve Dynamic Tasks.
Adv. Robotics, 2008

Highly Precise Dynamic Simulation Environment for Humanoid Robots.
Adv. Robotics, 2008

2007
Learning a dynamic policy by using policy gradient: application to biped walking.
Syst. Comput. Jpn., 2007

Learning to acquire whole-body humanoid CoM movements to achieve dynamic tasks.
Proceedings of the 2007 IEEE International Conference on Robotics and Automation, 2007

2006
Learning CPG-based biped locomotion with a policy gradient method.
Robotics Auton. Syst., 2006

2005
Learning Sensory Feedback to CPG with Policy Gradient for Biped Locomotion.
Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005

Learning CPG Sensory Feedback with Policy Gradient for Biped Locomotion for a Full-Body Humanoid.
Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI), 2005
