Miles Williams

According to our database, Miles Williams authored at least 7 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.

Bibliography

2025
Compressing Language Models for Specialized Domains.
CoRR, February 2025

Self-calibration for Language Model Quantization and Pruning.
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, 2025

2024
Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization.
Trans. Assoc. Comput. Linguistics, 2024

On the Impact of Calibration Data in Post-training Quantization and Pruning.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
How Does Calibration Data Affect the Post-training Pruning and Quantization of Large Language Models?
CoRR, 2023

Lighter, yet More Faithful: Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization.
CoRR, 2023

Frustratingly Simple Memory Efficiency for Pre-trained Language Models via Dynamic Embedding Pruning.
CoRR, 2023
