Qingyu Tan

According to our database, Qingyu Tan authored at least 13 papers between 2020 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
SeaLLMs - Large Language Models for Southeast Asia.
CoRR, 2023

Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning.
CoRR, 2023

Unlocking Temporal Question Answering for Large Language Models Using Code Execution.
CoRR, 2023

Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
Revisiting DocRED - Addressing the Overlooked False Negative Problem in Relation Extraction.
CoRR, 2022

Revisiting DocRED - Addressing the False Negative Problem in Relation Extraction.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

Domain Generalization for Text Classification with Memory-Based Supervised Contrastive Learning.
Proceedings of the 29th International Conference on Computational Linguistics, 2022

Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation.
Findings of the Association for Computational Linguistics: ACL 2022, 2022

2021
Swarm-Based 4D Path Planning For Drone Operations in Urban Environments.
IEEE Trans. Veh. Technol., 2021

On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
Feature Adaptation of Pre-Trained Language Models across Languages and Domains for Text Classification.
CoRR, 2020

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020
