Prabuchandran K. J.

ORCID: 0000-0001-6585-390X

According to our database, Prabuchandran K. J. authored at least 28 papers between 2013 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Energy Management in a Cooperative Energy Harvesting Wireless Sensor Network.
IEEE Commun. Lett., January 2024

Practical First-Order Bayesian Optimization Algorithms.
Proceedings of the 7th Joint International Conference on Data Science & Management of Data (11th ACM IKDD CODS and 29th COMAD), 2024

2023
Autonomous UAV Navigation in Complex Environments using Human Feedback.
Proceedings of the 32nd IEEE International Conference on Robot and Human Interactive Communication, 2023

Bayesian Optimization for Function Compositions with Applications to Dynamic Pricing.
Proceedings of the Learning and Intelligent Optimization - 17th International Conference, 2023

Efficient Off-Policy Algorithms for Structured Markov Decision Processes.
Proceedings of the 62nd IEEE Conference on Decision and Control, 2023

2022
Dominant strategy truthful, deterministic multi-armed bandit mechanisms with logarithmic regret for sponsored search auctions.
Appl. Intell., 2022

Change point detection for compositional multivariate data.
Appl. Intell., 2022

Data Efficient Safe Reinforcement Learning.
Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, 2022

Neural Network Compatible Off-Policy Natural Actor-Critic Algorithm.
Proceedings of the International Joint Conference on Neural Networks, 2022

2021
Novel First Order Bayesian Optimization with an Application to Reinforcement Learning.
Appl. Intell., 2021

2020
Reinforcement learning algorithm for non-stationary environments.
Appl. Intell., 2020

2019
An Online Sample-Based Method for Mode Estimation Using ODE Analysis of Stochastic Approximation Algorithms.
IEEE Control. Syst. Lett., 2019

Reinforcement Learning in Non-Stationary Environments.
CoRR, 2019

Actor-Critic Algorithms for Constrained Multi-agent Reinforcement Learning.
Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 2019

2018
Novel Sensor Scheduling Scheme for Intruder Tracking in Energy Efficient Sensor Networks.
IEEE Wirel. Commun. Lett., 2018

Generalized Deterministic Perturbations For Stochastic Gradient Search.
Proceedings of the 57th IEEE Conference on Decision and Control, 2018

2017
A Dominant Strategy Truthful, Deterministic Multi-Armed Bandit Mechanism with Logarithmic Regret.
Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, 2017

2016
Actor-Critic Algorithms with Online Feature Adaptation.
ACM Trans. Model. Comput. Simul., 2016

Information Diffusion in Social Networks in Two Phases.
IEEE Trans. Netw. Sci. Eng., 2016

Reinforcement Learning algorithms for regret minimization in structured Markov Decision Processes.
CoRR, 2016

Reinforcement Learning Algorithms for Regret Minimization in Structured Markov Decision Processes: (Extended Abstract).
Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, 2016

2015
Energy Sharing for Multiple Sensor Nodes With Finite Buffers.
IEEE Trans. Commun., 2015

Decentralized learning for traffic signal control.
Proceedings of the 7th International Conference on Communication Systems and Networks, 2015

A Multi-phase Approach for Improving Information Diffusion in Social Networks.
Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, 2015

2014
Multi-agent reinforcement learning for traffic signal control.
Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems, 2014

An actor critic algorithm based on Grassmanian search.
Proceedings of the 53rd IEEE Conference on Decision and Control, 2014

2013
Q-Learning Based Energy Management Policies for a Single Sensor Node with Finite Buffer.
IEEE Wirel. Commun. Lett., 2013

Feature Search in the Grassmanian in Online Reinforcement Learning.
IEEE J. Sel. Top. Signal Process., 2013
