Xiaohan Chen

ORCID: 0000-0002-0360-0402

Affiliations:
  • Alibaba Group, Damo Academy, Decision Intelligence Lab, USA
  • University of Texas at Austin, Department of Electrical and Computer Engineering, Austin, TX, USA (2020 - 2022)
  • Texas A&M University, Department of Computer Science and Engineering, College Station, TX, USA (PhD 2020)


According to our database, Xiaohan Chen authored at least 42 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee.
Trans. Mach. Learn. Res., 2024

Expressive Power of Graph Neural Networks for (Mixed-Integer) Quadratic Programs.
CoRR, 2024

Learning to optimize: A tutorial for continuous and mixed-integer optimization.
CoRR, 2024

Rethinking the Capacity of Graph Neural Networks for Branching Strategy.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

2023
SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training.
IEEE Trans. Neural Networks Learn. Syst., October, 2023

Chasing Better Deep Image Priors between Over- and Under-parameterization.
Trans. Mach. Learn. Res., 2023

DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee.
CoRR, 2023

Towards Constituting Mathematical Structures for Learning to Optimize.
Proceedings of the International Conference on Machine Learning, 2023

More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Many-Task Federated Learning: A New Problem Setting and A Simple Baseline.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Safeguarded Learned Convex Optimization.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
Learning to Optimize: A Primer and A Benchmark.
J. Mach. Learn. Res., 2022

More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity.
CoRR, 2022

Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Model elasticity for hardware heterogeneity in federated learning systems.
Proceedings of the 1st ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network, 2022

Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity.
CoRR, 2021

SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training.
CoRR, 2021

Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Hyperparameter Tuning is All You Need for LISTA.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

The Elastic Lottery Ticket Hypothesis.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Learning A Minimax Optimizer: A Pilot Study.
Proceedings of the 9th International Conference on Learning Representations, 2021

A Design Space Study for LISTA and Beyond.
Proceedings of the 9th International Conference on Learning Representations, 2021

DynEHR: Dynamic adaptation of models with data heterogeneity in electronic health records.
Proceedings of the IEEE EMBS International Conference on Biomedical and Health Informatics, 2021

EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
ShiftAddNet: A Hardware-Inspired Deep Network.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

MATE: Plugging in Model Awareness to Task Embedding for Meta Learning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation.
Proceedings of the 47th ACM/IEEE Annual International Symposium on Computer Architecture, 2020

Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks.
Proceedings of the 8th International Conference on Learning Representations, 2020

Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
E2-Train: Energy-Efficient Deep Network Training with Data-, Model-, and Algorithm-Level Saving.
CoRR, 2019

Drawing early-bird tickets: Towards more efficient training of deep networks.
CoRR, 2019

E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Plug-and-Play Methods Provably Converge with Properly Trained Denoisers.
Proceedings of the 36th International Conference on Machine Learning, 2019

ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA.
Proceedings of the 7th International Conference on Learning Representations, 2019

2018
Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?
CoRR, 2018

Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Can We Gain More from Orthogonality Regularizations in Training Deep Networks?
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018
