Shaochen Zhong

ORCID: 0009-0001-7289-3667

Affiliations:
  • Rice University, Department of Computer Science, Houston, TX, USA


According to our database, Shaochen Zhong authored at least 27 papers between 2022 and 2025.

Bibliography

2025
Data-centric Artificial Intelligence: A Survey.
ACM Comput. Surv., May, 2025

AutoL2S: Auto Long-Short Reasoning for Efficient Large Language Models.
CoRR, May, 2025

100-LongBench: Are de facto Long-Context Benchmarks Literally Evaluating Long-Context Ability?
CoRR, May, 2025

70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float.
CoRR, April, 2025

Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models.
CoRR, March, 2025

More for Keys, Less for Values: Adaptive KV Cache Quantization.
CoRR, February, 2025

Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models.
Trans. Mach. Learn. Res., 2025

In-Progress: Structured Pruning in the Wild: Benchmarking Practical Robustness Under Real-World Corruptions.
Proceedings of the 2025 IEEE Symposium on Security and Privacy, 2025

MQuAKE-Remastered: Multi-Hop Knowledge Editing Can Only Be Advanced with Reliable Evaluations.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Flexible Group Count Enables Hassle-Free Structured Pruning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

100-LongBench: Are de facto Long-Context Benchmarks Literally Evaluating Long-Context Ability?
Findings of the Association for Computational Linguistics, 2025

2024
Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond.
ACM Trans. Knowl. Discov. Data, July, 2024

Retrieval-Enhanced Knowledge Editing for Multi-Hop Question Answering in Language Models.
CoRR, 2024

LoRATK: LoRA Once, Backdoor Everywhere in the Share-and-Play Ecosystem.
CoRR, 2024

GNNs Also Deserve Editing, and They Need It More Than Once.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Soft Prompt Recovers Compressed LLMs, Transferably.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

TVE: Learning Meta-attribution for Transferable Vision Explainer.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Knowledge Graphs Can be Learned with Just Intersection Features.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches.
Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

Taylor Unswift: Secured Weight Release for Large Language Models via Taylor Expansion.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Retrieval-enhanced Knowledge Editing in Language Models for Multi-Hop Question Answering.
Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, 2024

2023
LETA: Learning Transferable Attribution for Generic Vision Explainer.
CoRR, 2023

Editable Graph Neural Network for Node Classifications.
CoRR, 2023

One Less Reason for Filter Pruning: Gaining Free Adversarial Robustness with Structured Grouped Kernel Pruning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions.
Proceedings of the Tenth International Conference on Learning Representations, 2022
