Pingzhi Li

Orcid: 0009-0007-9935-4456

According to our database, Pingzhi Li authored at least 22 papers between 2024 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
DOGe: Defensive Output Generation for LLM Protection Against Knowledge Distillation.
CoRR, May, 2025

Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training and Inference.
CoRR, May, 2025

GroverGPT-2: Simulating Grover's Algorithm via Chain-of-Thought Reasoning and Quantum-Native Tokenization.
CoRR, May, 2025

Finding Fantastic Experts in MoEs: A Unified Study for Expert Dropping Strategies and Observations.
CoRR, April, 2025

ORAL: Prompting Your Large-Scale LoRAs via Conditional Recurrent Diffusion.
CoRR, March, 2025

Make Optimization Once and for All with Fine-grained Guidance.
CoRR, March, 2025

GroverGPT: A Large Language Model with 8 Billion Parameters for Quantum Searching.
CoRR, January, 2025

Protecting Privacy against Membership Inference Attack with LLM Fine-tuning through Flatness.
Proceedings of the 2025 SIAM International Conference on Data Mining, 2025

Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert Parallelism Design.
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, 2025

PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Unveiling Privacy Risks in Multi-modal Large Language Models: Task-specific Vulnerabilities and Mitigation Challenges.
Findings of the Association for Computational Linguistics, 2025

Vision Language Model Helps Private Information De-Identification in Vision Data.
Findings of the Association for Computational Linguistics, 2025

2024
HEXA-MoE: Efficient and Heterogeneous-aware MoE Acceleration with ZERO Computation Redundancy.
CoRR, 2024

PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches.
CoRR, 2024

Glider: Global and Local Instruction-Driven Expert Router.
CoRR, 2024

Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark.
CoRR, 2024

Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent.
CoRR, 2024

Privacy-preserving Fine-tuning of Large Language Models through Flatness.
CoRR, 2024

Enhancing Quantum Security over Federated Learning via Post-Quantum Cryptography.
Proceedings of the 5th IEEE International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications, 2024

Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild.
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

