Ming Yin

Affiliations:
  • University of California, Santa Barbara, CA, USA


According to our database, Ming Yin authored at least 20 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Offline Multitask Representation Learning for Reinforcement Learning.
CoRR, 2024

2023
Posterior Sampling with Delayed Feedback for Reinforcement Learning with Linear Function Approximation.
CoRR, 2023

Model-Free Algorithm with Improved Sample Efficiency for Zero-Sum Markov Games.
CoRR, 2023

Offline Policy Evaluation for Reinforcement Learning with Adaptively Collected Data.
CoRR, 2023

Logarithmic Switching Cost in Reinforcement Learning beyond Linear MDPs.
CoRR, 2023

No-Regret Linear Bandits beyond Realizability.
Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2023

Posterior Sampling with Delayed Feedback for Reinforcement Learning with Linear Function Approximation.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Offline Reinforcement Learning with Closed-Form Policy Improvement Operators.
Proceedings of the International Conference on Machine Learning, 2023

Non-stationary Reinforcement Learning under General Function Approximation.
Proceedings of the International Conference on Machine Learning, 2023

Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks.
CoRR, 2022

Offline stochastic shortest path: Learning, evaluation and towards optimality.
Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2022

Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost.
Proceedings of the International Conference on Machine Learning, 2022

Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism.
Proceedings of the Tenth International Conference on Learning Representations, 2022

2021
Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Towards Instance-Optimal Offline Reinforcement Learning with Pessimism.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Near-Optimal Offline Reinforcement Learning via Double Variance Reduction.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Near-Optimal Provable Uniform Convergence in Offline Policy Evaluation for Reinforcement Learning.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
Near Optimal Provable Uniform Convergence in Off-Policy Evaluation for Reinforcement Learning.
CoRR, 2020

Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

