Di Wu

Affiliations:
  • University of California, Los Angeles, CA, USA


According to our database, Di Wu authored at least 11 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
Self-Routing RAG: Binding Selective Retrieval with Knowledge Verbalization.
CoRR, April 2025

BRIEF: Bridging Retrieval and Inference for Multi-hop Reasoning via Compression.
Findings of the Association for Computational Linguistics: NAACL 2025, Albuquerque, New Mexico, USA, April 29, 2025

2024
Repoformer: Selective Retrieval for Repository-Level Code Completion.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Synchronous Faithfulness Monitoring for Trustworthy Retrieval-Augmented Generation.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

MetaKP: On-Demand Keyphrase Generation.
Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

On Leveraging Encoder-only Pre-trained Language Models for Effective Keyphrase Generation.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

2023
KPEval: Towards Fine-grained Semantic-based Evaluation of Keyphrase Extraction and Generation Systems.
CoRR, 2023

Rethinking Model Selection and Decoding for Keyphrase Generation with Pre-trained Sequence-to-Sequence Models.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

2022
Pre-trained Language Models for Keyphrase Generation: A Thorough Empirical Study.
CoRR, 2022

Representation Learning for Resource-Constrained Keyphrase Generation.
Findings of the Association for Computational Linguistics: EMNLP 2022, 2022
