Luohe Shi

According to our database, Luohe Shi authored at least 6 papers between 2024 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
From Parameters to Performance: A Data-Driven Study on LLM Structure and Development.
CoRR, September, 2025

Faster MoE LLM Inference for Extremely Large Models.
CoRR, May, 2025

SpindleKV: A Novel KV Cache Reduction Method Balancing Both Shallow and Deep Layers.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

KV-Latent: Dimensional-level KV Cache Reduction with Frequency-aware Rotary Positional Embedding.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
Keep the Cost Down: A Review on Methods to Optimize LLM's KV-Cache Consumption.
CoRR, 2024

Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024
