Zeke Xie

According to our database, Zeke Xie authored at least 18 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.



Bibliography

2024
SGD: Street View Synthesis with Gaussian Splatting and Diffusion Prior.
CoRR, 2024

Neural Field Classifiers via Target Encoding and Classification Loss.
CoRR, 2024

HiCAST: Highly Customized Arbitrary Style Transfer with Adapter Enhanced Diffusion Models.
CoRR, 2024

2023
On the Overlooked Pitfalls of Weight Decay and How to Mitigate Them: A Gradient-Norm Perspective.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

On the Overlooked Structure of Stochastic Gradients.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Dataset Pruning: Reducing Training Data by Examining Generalization Influence.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

S3IM: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

2022
Rethinking the Structure of Stochastic Gradients: Empirical and Statistical Evidence.
CoRR, 2022

On the Power-Law Spectrum in Deep Learning: A Bridge to Protein Science.
CoRR, 2022

Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum.
Proceedings of the International Conference on Machine Learning, 2022

Sparse Double Descent: Where Network Pruning Aggravates Overfitting.
Proceedings of the International Conference on Machine Learning, 2022

2021
Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting.
Neural Comput., 2021

Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization.
Proceedings of the 38th International Conference on Machine Learning, 2021

A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima.
Proceedings of the 9th International Conference on Learning Representations, 2021

2020
Stable Weight Decay Regularization.
CoRR, 2020

Adai: Separating the Effects of Adaptive Learning Rate and Momentum Inertia.
CoRR, 2020

A Diffusion Theory for Deep Learning Dynamics: Stochastic Gradient Descent Escapes From Sharp Minima Exponentially Fast.
CoRR, 2020

2017
A Quantum-Inspired Ensemble Method and Quantum-Inspired Forest Regressors.
Proceedings of The 9th Asian Conference on Machine Learning, 2017
