Ercong Nie

Orcid: 0000-0003-1453-4460

According to our database, Ercong Nie authored at least 31 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
Beyond Over-Refusal: Scenario-Based Diagnostics and Post-Hoc Mitigation for Exaggerated Refusals in LLMs.
CoRR, October, 2025

Query Expansion in the Age of Pre-trained and Large Language Models: A Comprehensive Survey.
CoRR, September, 2025

A Survey of Long-Document Retrieval in the PLM and LLM Era.
CoRR, September, 2025

Memory-R1: Enhancing Large Language Model Agents to Manage and Utilize Memories via Reinforcement Learning.
CoRR, August, 2025

CoDAE: Adapting Large Language Models for Education via Chain-of-Thought Data Augmentation.
CoRR, August, 2025

Hateful Person or Hateful Model? Investigating the Role of Personas in Hate Speech Detection by Large Language Models.
CoRR, June, 2025

XToM: Exploring the Multilingual Theory of Mind for Large Language Models.
CoRR, June, 2025

LLM in the Loop: Creating the ParaDeHate Dataset for Hate Speech Detoxification.
CoRR, June, 2025

Look Within or Look Beyond? A Theoretical Comparison Between Parameter-Efficient and Full Fine-Tuning.
CoRR, May, 2025

Mechanistic Understanding and Mitigation of Language Confusion in English-Centric Large Language Models.
CoRR, May, 2025

Tracing Multilingual Factual Knowledge Acquisition in Pretraining.
CoRR, May, 2025

XCOMPS: A Multilingual Benchmark of Conceptual Minimal Pairs.
CoRR, February, 2025

Language Model Re-rankers are Steered by Lexical Similarities.
CoRR, February, 2025

Why Lift so Heavy? Slimming Large Language Models by Cutting Off the Layers.
Proceedings of the International Joint Conference on Neural Networks, 2025

BMIKE-53: Investigating Cross-Lingual Knowledge Editing with In-Context Learning.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Large Language Models as Neurolinguistic Subjects: Discrepancy between Performance and Competence.
Proceedings of the Findings of the Association for Computational Linguistics, 2025

Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
Large Language Models as Neurolinguistic Subjects: Identifying Internal Representations for Form and Meaning.
CoRR, 2024

Decomposed Prompting: Unveiling Multilingual Linguistic Structure Knowledge in English-Centric Large Language Models.
CoRR, 2024

Team MGTD4ADL at SemEval-2024 Task 8: Leveraging (Sentence) Transformer Models with Contrastive Learning for Identifying Machine-Generated Text.
Proceedings of the 18th International Workshop on Semantic Evaluation, 2024

A Unified Data Augmentation Framework for Low-Resource Multi-domain Dialogue Generation.
Proceedings of the Machine Learning and Knowledge Discovery in Databases. Research Track, 2024

ToPro: Token-Level Prompt Decomposition for Cross-Lingual Sequence Labeling Tasks.
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, 2024

Decoding Probing: Revealing Internal Linguistic Structures in Neural Language Models Using Minimal Pairs.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network.
Proceedings of the Findings of the Association for Computational Linguistics, 2024

2023
From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL.
CoRR, 2023

Crosslingual Retrieval Augmented In-context Learning for Bangla.
CoRR, 2023

Baby's CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models.
CoRR, 2023

Cross-Lingual Constituency Parsing for Middle High German: A Delexicalized Approach.
Proceedings of the Ancient Language Processing Workshop, 2023

Is Prompt-Based Finetuning Always Better than Vanilla Finetuning? Insights from Cross-Lingual Language Understanding.
Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023), 2023

Unleashing the Multilingual Encoder Potential: Boosting Zero-Shot Performance via Probability Calibration.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Cross-Lingual Retrieval Augmented Prompt for Low-Resource Languages.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023
