Zhi-Qin John Xu

Orcid: 0000-0003-0627-3520

Affiliations:
  • Shanghai Jiao Tong University, School of Mathematical Sciences, Shanghai, China


According to our database, Zhi-Qin John Xu authored at least 53 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Understanding Time Series Anomaly State Detection through One-Class Classification.
CoRR, 2024

Anchor function: a type of benchmark functions for studying language models.
CoRR, 2024

Solving multiscale dynamical systems by deep learning.
CoRR, 2024

2023
DeepFlame: A deep learning empowered open-source platform for reacting flow simulations.
Comput. Phys. Commun., October, 2023

Subspace decomposition based DNN algorithm for elliptic type multi-scale PDEs.
J. Comput. Phys., September, 2023

An Unsupervised Deep Learning Approach for the Wave Equation Inverse Problem.
CoRR, 2023

Optimistic Estimate Uncovers the Potential of Nonlinear Models.
CoRR, 2023

Solving a class of multi-scale elliptic PDEs by means of Fourier-based mixed physics informed neural networks.
CoRR, 2023

Stochastic Modified Equations and Dynamics of Dropout Algorithm.
CoRR, 2023

Loss Spike in Training Neural Networks.
CoRR, 2023

Understanding the Initial Condensation of Convolutional Neural Networks.
CoRR, 2023

Laplace-fPINNs: Laplace-based fractional physics-informed neural networks for solving forward and inverse problems of subdiffusion.
CoRR, 2023

Phase Diagram of Initial Condensation for Two-layer Neural Networks.
CoRR, 2023

2022
On the Exact Computation of Linear Frequency Principle Dynamics and Its Generalization.
SIAM J. Math. Data Sci., 2022

A regularised deep matrix factorised model of matrix completion for image restoration.
IET Image Process., 2022

Bayesian Inversion with Neural Operator (BINO) for Modeling Subdiffusion: Forward and Inverse Problems.
CoRR, 2022

Linear Stability Hypothesis and Rank Stratification for Nonlinear Models.
CoRR, 2022

Implicit regularization of dropout.
CoRR, 2022

Embedding Principle in Depth for the Loss Landscape Analysis of Deep Neural Networks.
CoRR, 2022

An Experimental Comparison Between Temporal Difference and Residual Gradient with Neural Network Approximation.
CoRR, 2022

Limitation of characterizing implicit regularization by data-independent functions.
CoRR, 2022

Overview frequency principle/spectral bias in deep learning.
CoRR, 2022

A multi-scale sampling method for accurate and robust deep neural network to predict combustion chemical kinetics.
CoRR, 2022

A deep learning-based model reduction (DeePMR) method for simplifying chemical kinetics.
CoRR, 2022

Towards Understanding the Condensation of Neural Networks at Initial Training.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Empirical Phase Diagram for Three-layer Neural Networks with Infinite Width.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

An Upper Limit of Decaying Rate with Respect to Frequency in Linear Frequency Principle Model.
Proceedings of the Mathematical and Scientific Machine Learning, 2022

2021
Phase Diagram for Two-layer ReLU Neural Networks at Infinite-width Limit.
J. Mach. Learn. Res., 2021

Embedding Principle: a hierarchical structure of loss landscape of deep neural networks.
CoRR, 2021

A variance principle explains why dropout finds flatter minima.
CoRR, 2021

Data-informed Deep Optimization.
CoRR, 2021

Force-in-domain GAN inversion.
CoRR, 2021

MOD-Net: A Machine Learning Approach via Model-Operator-Data Network for Solving PDEs.
CoRR, 2021

Towards Understanding the Condensation of Two-layer Neural Networks at Initial Training.
CoRR, 2021

An Upper Limit of Decaying Rate with Respect to Frequency in Deep Neural Network.
CoRR, 2021

Linear Frequency Principle Model to Understand the Absence of Overfitting in Neural Networks.
CoRR, 2021

Frequency Principle in Deep Learning Beyond Gradient-descent-based Training.
CoRR, 2021

Embedding Principle of Loss Landscape of Deep Neural Networks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Deep Frequency Principle Towards Understanding Why Deeper Learning Is Faster.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Fourier-domain Variational Formulation and Its Well-posedness for Supervised Learning.
CoRR, 2020

A regularized deep matrix factorized model of matrix completion for image restoration.
CoRR, 2020

Multi-scale Deep Neural Network (MscaleDNN) for Solving Poisson-Boltzmann Equation in Complex Domains.
CoRR, 2020

Implicit bias with Ritz-Galerkin method in understanding deep learning for solving PDEs.
CoRR, 2020

A type of generalization error induced by initialization in deep neural networks.
Proceedings of Mathematical and Scientific Machine Learning, 2020

2019
Dynamical and Coupling Structure of Pulse-Coupled Networks in Maximum Entropy Analysis.
Entropy, 2019

Multi-scale Deep Neural Networks for Solving High Dimensional PDEs.
CoRR, 2019

Theory of the Frequency Principle for General Deep Neural Networks.
CoRR, 2019

Explicitizing an Implicit Bias of the Frequency Principle in Two-layer Neural Networks.
CoRR, 2019

Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks.
CoRR, 2019

Training Behavior of Deep Neural Network in Frequency Domain.
Proceedings of the Neural Information Processing - 26th International Conference, 2019

2018
Frequency Principle in Deep Learning with General Loss Functions and Its Potential Application.
CoRR, 2018

Maximum Entropy Principle Analysis in Network Systems with Short-time Recordings.
CoRR, 2018

Understanding training and generalization in deep learning by Fourier analysis.
CoRR, 2018
