Lingjiong Zhu

ORCID: 0000-0001-7595-160X

According to our database, Lingjiong Zhu authored at least 34 papers between 2013 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Differential Privacy of Noisy (S)GD under Heavy-Tailed Perturbations.
CoRR, 2024

Intriguing Differences Between Zero-Shot and Systematic Evaluations of Vision-Language Transformer Models.
CoRR, 2024

Convergence Analysis for General Probability Flow ODEs of Diffusion Models in Wasserstein Distances.
CoRR, 2024

2023
Asymptotics for the Laplace transform of the time integral of the geometric Brownian motion.
Oper. Res. Lett., May, 2023

Wasserstein Convergence Guarantees for a General Class of Score-Based Generative Models.
CoRR, 2023

Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD.
CoRR, 2023

Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions.
Proceedings of the International Conference on Machine Learning, 2023

Algorithmic Stability of Heavy-Tailed Stochastic Gradient Descent on Least Squares.
Proceedings of the International Conference on Algorithmic Learning Theory, 2023

2022
Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks.
J. Mach. Learn. Res., 2022

Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Nonconvex Stochastic Optimization: Nonasymptotic Performance Bounds and Momentum-Based Acceleration.
Oper. Res., 2022

Penalized Langevin and Hamiltonian Monte Carlo Algorithms for Constrained Sampling.
CoRR, 2022

Heavy-Tail Phenomenon in Decentralized SGD.
CoRR, 2022

2021
On the optimal design of the randomized unbiased Monte Carlo estimators.
Oper. Res. Lett., 2021

Decentralized Stochastic Gradient Langevin Dynamics and Hamiltonian Monte Carlo.
J. Mach. Learn. Res., 2021

Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

The Heavy-Tail Phenomenon in SGD.
Proceedings of the 38th International Conference on Machine Learning, 2021

Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
Operational Risk Management: A Stochastic Control Framework with Preventive and Corrective Controls.
Oper. Res., 2020

On the Variance of Single-Run Unbiased Stochastic Derivative Estimators.
INFORMS J. Comput., 2020

Optimal unbiased estimation for expected cumulative discounted cost.
Eur. J. Oper. Res., 2020

Breaking Reversibility Accelerates Langevin Dynamics for Non-Convex Optimization.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
Asymptotic normality of extensible grid sampling.
Stat. Comput., 2019

Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances.
Proceedings of the 36th International Conference on Machine Learning, 2019

2018
Functional central limit theorems for stationary Hawkes processes and application to infinite-server queues.
Queueing Syst. Theory Appl., 2018

Explosion in the quasi-Gaussian HJM model.
Finance Stochastics, 2018

Breaking Reversibility Accelerates Langevin Dynamics for Global Non-Convex Optimization.
CoRR, 2018

Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Stochastic Optimization: Non-Asymptotic Performance Bounds and Momentum-Based Acceleration.
CoRR, 2018

2017
Small-noise limit of the quasi-Gaussian log-normal HJM model.
Oper. Res. Lett., 2017

2016
Short Maturity Asian Options in Local Volatility Models.
SIAM J. Financial Math., 2016

2014
Limit Theorems for a Cox-Ingersoll-Ross Process with Hawkes Jumps.
J. Appl. Probab., 2014

2013
Central Limit Theorem for Nonlinear Hawkes Processes.
J. Appl. Probab., 2013
