Yifei Wang

ORCID: 0000-0002-0364-0893

Affiliations:
  • Stanford University, Department of Electrical Engineering, CA, USA (PhD 2025)


According to our database, Yifei Wang authored at least 21 papers between 2019 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Overparameterized ReLU Neural Networks Learn the Simplest Model: Neural Isometry and Phase Transitions.
IEEE Trans. Inf. Theory, March, 2025

2024
Correction to: Sketching the Krylov subspace: faster computation of the entire ridge regularization path.
J. Supercomput., January, 2024

Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization.
SIAM J. Math. Data Sci., 2024

Randomized Geometric Algebra Methods for Convex Neural Networks.
CoRR, 2024

A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features.
CoRR, 2024

A Circuit Approach to Constructing Blockchains on Blockchains.
Proceedings of the 6th Conference on Advances in Financial Technologies, 2024

2023
Sketching the Krylov subspace: faster computation of the entire ridge regularization path.
J. Supercomput., November, 2023

A Decomposition Augmented Lagrangian Method for Low-Rank Semidefinite Programming.
SIAM J. Optim., September, 2023

Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes.
CoRR, 2023

Parallel Deep Neural Networks Have Zero Duality Gap.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Projected Wasserstein Gradient Descent for High-Dimensional Bayesian Inference.
SIAM/ASA J. Uncertain. Quantification, 2022

Beyond the Best: Estimating Distribution Functionals in Infinite-Armed Bandits.
CoRR, 2022

ReLU Neural Networks Learn the Simplest Models: Neural Isometry and Exact Recovery.
CoRR, 2022

A stochastic Stein Variational Newton method.
CoRR, 2022

Beyond the Best: Distribution Functional Estimation in Infinite-Armed Bandits.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program.
Proceedings of the Tenth International Conference on Learning Representations, 2022

The Hidden Convex Optimization Landscape of Regularized Two-Layer ReLU Networks: an Exact Characterization of Optimal Solutions.
Proceedings of the Tenth International Conference on Learning Representations, 2022

2021
Search Direction Correction with Normalized Gradient Makes First-Order Methods Faster.
SIAM J. Sci. Comput., 2021

Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
Information Newton's flow: second-order optimization method in probability space.
CoRR, 2020

2019
Accelerated Information Gradient flow.
CoRR, 2019

