Pingzhi Li
Orcid: 0009-0007-9935-4456
According to our database, Pingzhi Li authored at least 22 papers between 2024 and 2025.
Bibliography
2025
CoRR, May, 2025
Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training and Inference.
CoRR, May, 2025
GroverGPT-2: Simulating Grover's Algorithm via Chain-of-Thought Reasoning and Quantum-Native Tokenization.
CoRR, May, 2025
Finding Fantastic Experts in MoEs: A Unified Study for Expert Dropping Strategies and Observations.
CoRR, April, 2025
CoRR, March, 2025
CoRR, January, 2025
Protecting Privacy against Membership Inference Attack with LLM Fine-tuning through Flatness.
Proceedings of the 2025 SIAM International Conference on Data Mining, 2025
Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert Parallelism Design.
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, 2025
PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025
Unveiling Privacy Risks in Multi-modal Large Language Models: Task-specific Vulnerabilities and Mitigation Challenges.
Proceedings of the Findings of the Association for Computational Linguistics, 2025
Proceedings of the Findings of the Association for Computational Linguistics, 2025
2024
HEXA-MoE: Efficient and Heterogeneous-aware MoE Acceleration with ZERO Computation Redundancy.
CoRR, 2024
PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches.
CoRR, 2024
Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent.
CoRR, 2024
Proceedings of the 5th IEEE International Conference on Trust, 2024
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
Proceedings of the Twelfth International Conference on Learning Representations, 2024