Mao Ye

Affiliations:
  • University of Texas at Austin, TX, USA


According to our database, Mao Ye authored at least 23 papers between 2020 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Learning Diffusion Bridges on Constrained Domains.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Efficient Transformer-based 3D Object Detection with Dynamic Token Halting.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

2022
BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach.
CoRR, 2022

First Hitting Diffusion Models.
CoRR, 2022

Let us Build Bridges: Understanding and Extending Diffusion Generative Models.
CoRR, 2022

Future gradient descent for adapting the temporal shifting data distribution in online recommendation systems.
Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2022

Pareto navigation gradient descent: a first-order algorithm for optimization in pareto set.
Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2022

Diffusion-based Molecule Generation with Informative Prior Bridges.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Centroid Approximation for Bootstrap: Improving Particle Quality at Inference.
Proceedings of the International Conference on Machine Learning, 2022

2021
Centroid Approximation for Bootstrap.
CoRR, 2021

argmax centroid.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

VCNet and Functional Targeted Regularization For Learning Causal Effects of Continuous Treatments.
Proceedings of the 9th International Conference on Learning Representations, 2021

MaxUp: Lightweight Adversarial Training With Data Augmentation Improves Neural Network Training.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021

Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Adaptive Dense-to-Sparse Paradigm for Pruning Online Recommendation System with Non-Stationary Data.
CoRR, 2020

Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting.
CoRR, 2020

MaxUp: A Simple Way to Improve Generalization of Neural Network Training.
CoRR, 2020

Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Stein Self-Repulsive Dynamics: Benefits From Past Samples.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Go Wide, Then Narrow: Efficient Training of Deep Thin Networks.
Proceedings of the 37th International Conference on Machine Learning, 2020

Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection.
Proceedings of the 37th International Conference on Machine Learning, 2020

SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020