Yifei Wang
ORCID: 0000-0002-0364-0893
Affiliations:
- Stanford University, Department of Electrical Engineering, CA, USA (PhD 2025)
According to our database, Yifei Wang authored at least 21 papers between 2019 and 2025.
Collaborative distances:
Online presence:
- on linkedin.com
- on orcid.org
- on csauthors.net
Bibliography
2025
Overparameterized ReLU Neural Networks Learn the Simplest Model: Neural Isometry and Phase Transitions.
IEEE Trans. Inf. Theory, March, 2025
2024
Correction to: Sketching the Krylov subspace: faster computation of the entire ridge regularization path.
J. Supercomput., January, 2024
Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization.
SIAM J. Math. Data Sci., 2024
A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features.
CoRR, 2024
Proceedings of the 6th Conference on Advances in Financial Technologies, 2024
2023
Sketching the Krylov subspace: faster computation of the entire ridge regularization path.
J. Supercomput., November, 2023
SIAM J. Optim., September, 2023
Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes.
CoRR, 2023
Proceedings of the Eleventh International Conference on Learning Representations, 2023
2022
SIAM/ASA J. Uncertain. Quantification, 2022
CoRR, 2022
CoRR, 2022
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program.
Proceedings of the Tenth International Conference on Learning Representations, 2022
The Hidden Convex Optimization Landscape of Regularized Two-Layer ReLU Networks: an Exact Characterization of Optimal Solutions.
Proceedings of the Tenth International Conference on Learning Representations, 2022
2021
Search Direction Correction with Normalized Gradient Makes First-Order Methods Faster.
SIAM J. Sci. Comput., 2021
Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality.
Proceedings of the 38th International Conference on Machine Learning, 2021
2020
CoRR, 2020
2019