Peiyu Liu

ORCID: 0000-0002-2974-9184

Affiliations:
  • Renmin University of China, Gaoling School of Artificial Intelligence, China


According to our database, Peiyu Liu authored at least 12 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

Enhancing Parameter-efficient Fine-tuning with Simple Calibration Based on Stable Rank.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

Unlocking Data-free Low-bit Quantization with Matrix Decomposition for KV Cache Compression.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
A Survey of Large Language Models.
CoRR, 2023

Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture.
CoRR, 2023

TikTalk: A Multi-Modal Dialogue Dataset for Real-World Chitchat.
CoRR, 2023

TikTalk: A Video-Based Dialogue Dataset for Multi-Modal Chitchat in Real World.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

Enhancing Scalability of Pre-trained Language Models via Efficient Parameter Sharing.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Small Pre-trained Language Models Can be Fine-tuned as Large Models via Over-Parameterization.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models.
Proceedings of the 29th International Conference on Computational Linguistics, 2022

2021
WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training.
CoRR, 2021

Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
