Mingjia Shi

Orcid: 0000-0002-9988-3741

According to our database, Mingjia Shi authored at least 24 papers between 2020 and 2025.

Bibliography

2025
Mano Report.
CoRR, September, 2025

E-3SFC: Communication-Efficient Federated Learning With Double-Way Features Synthesizing.
IEEE Trans. Neural Networks Learn. Syst., August, 2025

Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights.
CoRR, June, 2025

REPA Works Until It Doesn't: Early-Stopped, Holistic Alignment Supercharges Diffusion Training.
CoRR, May, 2025

DD-Ranking: Rethinking the Evaluation of Dataset Distillation.
CoRR, May, 2025

Make Optimization Once and for All with Fine-grained Guidance.
CoRR, March, 2025

FedSH: Tackling Staleness By Scheduling High-order Approximation in Asynchronous Federated Learning.
Proceedings of the International Joint Conference on Neural Networks, 2025

Towards Frame Rate Insensitive Video Object Segmentation.
Proceedings of the International Joint Conference on Neural Networks, 2025

Ferret: An Efficient Online Continual Learning Framework under Varying Memory Constraints.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning.
Proceedings of the Findings of the Association for Computational Linguistics, 2025

2024
Faster Vision Mamba is Rebuilt in Minutes via Merged Token Re-training.
CoRR, 2024

Tackling Feature-Classifier Mismatch in Federated Learning via Prompt-Driven Feature Transformation.
CoRR, 2024

A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training.
CoRR, 2024

2023
DLB: A Dynamic Load Balance Strategy for Distributed Training of Deep Neural Networks.
IEEE Trans. Emerg. Top. Comput. Intell., August, 2023

PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning.
CoRR, 2023

Communication-efficient Federated Learning with Single-Step Synthetic Features Compressor for Faster Convergence.
CoRR, 2023

PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Unconstrained Feature Model and Its General Geometric Patterns in Federated Learning: Local Subspace Minority Collapse.
Proceedings of the Neural Information Processing - 30th International Conference, 2023

Communication-efficient Federated Learning with Single-Step Synthetic Features Compressor for Faster Convergence.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

2022
Correction to: FLSGD: free local SGD with parallel synchronization.
J. Supercomput., 2022

FLSGD: free local SGD with parallel synchronization.
J. Supercomput., 2022

Personalized Federated Learning with Hidden Information on Personalized Prior.
CoRR, 2022

2020
DBS: Dynamic Batch Size For Distributed Deep Neural Network Training.
CoRR, 2020
