Subramanya Nageshrao

According to our database, Subramanya Nageshrao authored at least 30 papers between 2014 and 2024.

Bibliography

2024
Toward Interpretable-AI Policies Using Evolutionary Nonlinear Decision Trees for Discrete-Action Systems.
IEEE Trans. Cybern., January, 2024

2023
A Risk-Averse Preview-Based Q-Learning Algorithm: Application to Highway Driving of Autonomous Vehicles.
IEEE Trans. Control. Syst. Technol., July, 2023

Interpretable Reinforcement Learning for Robotics and Continuous Control.
CoRR, 2023

2022
Game-Theoretic Lane-Changing Decision Making and Payoff Learning for Autonomous Vehicles.
IEEE Trans. Veh. Technol., 2022

An Online Evolving Method For a Safe and Fast Automated Vehicle Control System.
IEEE Trans. Syst. Man Cybern. Syst., 2022

A Three-Level Game-Theoretic Decision-Making Framework for Autonomous Vehicles.
IEEE Trans. Intell. Transp. Syst., 2022

Conflict-Aware Safe Reinforcement Learning: A Meta-Cognitive Learning Framework.
IEEE CAA J. Autom. Sinica, 2022

Robust AI Driving Strategy for Autonomous Vehicles.
CoRR, 2022

2021
Explaining Deep Learning Models Through Rule-Based Approximation and Visualization.
IEEE Trans. Fuzzy Syst., 2021

Finite-time Koopman Identifier: A Unified Batch-online Learning Framework for Joint Learning of Koopman Structure and Parameters.
CoRR, 2021

A Convex Programming Approach to Data-Driven Risk-Averse Reinforcement Learning.
CoRR, 2021

Assured Learning-enabled Autonomy: A Metacognitive Reinforcement Learning Framework.
CoRR, 2021

Interpretable AI Agent Through Nonlinear Decision Trees for Lane Change Problem.
Proceedings of the IEEE Symposium Series on Computational Intelligence, 2021

A One-shot Convex Optimization Approach to Risk-Averse Q-Learning.
Proceedings of the 2021 60th IEEE Conference on Decision and Control (CDC), 2021

2020
Interpretable-AI Policies using Evolutionary Nonlinear Decision Trees for Discrete Action Systems.
CoRR, 2020

An online evolving framework for advancing reinforcement-learning based automated vehicle control.
CoRR, 2020

Deep Reinforcement Learning with Enhanced Safety for Autonomous Highway Driving.
Proceedings of the IEEE Intelligent Vehicles Symposium, 2020

Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles.
Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020

2019
Reinforcement learning based compensation methods for robot manipulators.
Eng. Appl. Artif. Intell., 2019

Autonomous Highway Driving using Deep Reinforcement Learning.
Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics, 2019

Discretionary Lane Change Decision Making using Reinforcement Learning with Model-Based Exploration.
Proceedings of the 18th IEEE International Conference On Machine Learning And Applications, 2019

Explainable Density-Based Approach for Self-Driving Actions Classification.
Proceedings of the 18th IEEE International Conference On Machine Learning And Applications, 2019

Interpretable Approximation of a Deep Reinforcement Learning Agent as a Set of If-Then Rules.
Proceedings of the 18th IEEE International Conference On Machine Learning And Applications, 2019

2017
Model-based real-time control of a magnetic manipulator system.
Proceedings of the 56th IEEE Annual Conference on Decision and Control, 2017

2016
Online learning algorithms: For passivity-based and distributed control.
PhD thesis, 2016

Port-Hamiltonian Systems in Adaptive and Learning Control: A Survey.
IEEE Trans. Autom. Control., 2016

Optimal model-free output synchronization of heterogeneous systems using off-policy reinforcement learning.
Autom., 2016

Actor-critic reinforcement learning for tracking control in robotics.
Proceedings of the 55th IEEE Conference on Decision and Control, 2016

2015
Reinforcement Learning for Port-Hamiltonian Systems.
IEEE Trans. Cybern., 2015

2014
Rapid learning in sequential composition control.
Proceedings of the 53rd IEEE Conference on Decision and Control, 2014
