Dongqi Cai

ORCID: 0000-0003-2751-2500

Affiliations:
  • University of Cambridge, UK
  • Beijing University of Posts and Telecommunications (BUPT), State Key Laboratory of Networking and Switching Technology, China


According to our database, Dongqi Cai authored at least 29 papers between 2021 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
MobiEdit: Resource-efficient Knowledge Editing for Personalized On-device LLMs.
CoRR, June, 2025

Resource-efficient Algorithms and Systems of Foundation Models: A Survey.
ACM Comput. Surv., May, 2025

Editing as Unlearning: Are Knowledge Editing Methods Strong Baselines for Large Language Model Unlearning?
CoRR, May, 2025

Evidencing Unauthorized Training Data from AI Generated Content using Information Isotopes.
CoRR, March, 2025

ShortcutsBench: A Large-Scale Real-world Benchmark for API-based Agents.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

DEPT: Decoupled Embeddings for Pre-training Language Models.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Demystifying Small Language Models for Edge Deployment.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
Accelerating Vertical Federated Learning.
IEEE Trans. Big Data, December, 2024

Photon: Federated LLM Pre-Training.
CoRR, 2024

Small Language Models: Survey, Measurements, and Insights.
CoRR, 2024

Recall: Empowering Multimodal Embedding for Edge Devices.
CoRR, 2024

FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts.
CoRR, 2024

Lightweight Protection for Privacy in Offloaded Speech Understanding.
CoRR, 2024

A Survey of Resource-efficient LLM and Multimodal Foundation Models.
CoRR, 2024

FwdLLM: Efficient Federated Finetuning of Large Language Models with Perturbed Inferences.
Proceedings of the 2024 USENIX Annual Technical Conference, 2024

SILENCE: Protecting privacy in offloaded speech understanding on resource-constrained devices.
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems (NeurIPS 2024), 2024

Mobile Foundation Model as Firmware.
Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, 2024

FedRDMA: Communication-Efficient Cross-Silo Federated LLM via Chunked RDMA Transmission.
Proceedings of the 4th Workshop on Machine Learning and Systems, 2024

Large Language Models on Mobile Devices: Measurements, Analysis, and Insights.
Proceedings of the Workshop on Edge and Mobile Foundation Models, 2024

2023
Rethinking Mobile AI Ecosystem in the LLM Era.
CoRR, 2023

Federated Fine-tuning of Billion-Sized Language Models across Mobile Devices.
CoRR, 2023

Federated Few-Shot Learning for Mobile NLP.
Proceedings of the 29th Annual International Conference on Mobile Computing and Networking, 2023

Efficient Federated Learning for Modern NLP.
Proceedings of the 29th Annual International Conference on Mobile Computing and Networking, 2023

Towards Practical Few-shot Federated NLP.
Proceedings of the 3rd Workshop on Machine Learning and Systems, 2023

FedAdapter: Efficient Federated Learning for Mobile NLP.
Proceedings of the ACM Turing Award Celebration Conference - China 2023, 2023

2022
Federated NLP in Few-shot Scenarios.
CoRR, 2022

AUG-FedPrompt: Practical Few-shot Federated NLP with Data-augmented Prompts.
CoRR, 2022

AutoFedNLP: An efficient FedNLP framework.
CoRR, 2022

2021
Towards Ubiquitous Learning: A First Measurement of On-Device Training Performance.
Proceedings of the 5th International Workshop on Embedded and Mobile Deep Learning (EMDL@MobiSys), 2021
