Jiaoyang Huang

According to our database, Jiaoyang Huang authored at least 18 papers between 2019 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2023
Sampling via Gradient Flows in the Space of Probability Measures.
CoRR, 2023

High-dimensional SGD aligns with emerging outlier eigenspaces.
CoRR, 2023

Gradient Flows for Sampling: Mean-Field Models, Gaussian Approximations and Affine Invariance.
CoRR, 2023

How Does Information Bottleneck Help Deep Learning?
Proceedings of the International Conference on Machine Learning, 2023

2022
Power Iteration for Tensor PCA.
J. Mach. Learn. Res., 2022

Efficient Derivative-free Bayesian Inference for Large-Scale Inverse Problems.
CoRR, 2022

PatchGT: Transformer Over Non-Trainable Clusters for Learning Graph Representations.
Proceedings of the Learning on Graphs Conference, 2022

Robustness Implies Generalization via Data-Dependent Generalization Bounds.
Proceedings of the International Conference on Machine Learning, 2022

2021
Long Random Matrices and Tensor Unfolding.
CoRR, 2021

Unscented Kalman Inversion: Efficient Gaussian Approximation to the Posterior Distribution.
CoRR, 2021

Improve Unscented Kalman Inversion With Low-Rank Approximation and Reduced-Order Model.
CoRR, 2021

Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization.
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021

How Shrinking Gradient Noise Helps the Performance of Neural Networks.
Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), 2021

2020
Dynamics of Deep Neural Networks and Neural Tangent Hierarchy.
Proceedings of the 37th International Conference on Machine Learning, 2020

Towards Understanding the Dynamics of the First-Order Adversaries.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
Every Local Minimum Value Is the Global Minimum Value of Induced Model in Nonconvex Machine Learning.
Neural Comput., 2019

Effect of Depth and Width on Local Minima in Deep Learning.
Neural Comput., 2019

Gradient Descent Finds Global Minima for Generalizable Deep Neural Networks of Practical Sizes.
Proceedings of the 57th Annual Allerton Conference on Communication, Control, and Computing, 2019
