Chanhee Lee

ORCID: 0000-0002-6406-5167

Affiliations:
  • NAVER Corporation, Korea
  • Korea University, Department of Computer Science and Engineering, Natural Language Processing and Artificial Intelligence Laboratory, Seoul, Korea


According to our database, Chanhee Lee authored at least 16 papers between 2018 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.


Bibliography

2025
An analysis on language transfer of pre-trained language model with cross-lingual post-training.
Expert Syst. Appl., 2025

2024
Hyper-BTS Dataset: Scalability and Enhanced Analysis of Back TranScription (BTS) for ASR Post-Processing.
Proceedings of the Findings of the Association for Computational Linguistics: EACL 2024, 2024

2023
Towards Reliable and Fluent Large Language Models: Incorporating Feedback Learning Loops in QA Systems.
CoRR, 2023

2022
Analysis of Utterance Embeddings and Clustering Methods Related to Intent Induction for Task-Oriented Dialogue.
CoRR, 2022

Language Chameleon: Transformation analysis between languages using Cross-lingual Post-training based on Pre-trained language models.
CoRR, 2022

Multimodal Frame-Scoring Transformer for Video Summarization.
CoRR, 2022

Plain Template Insertion: Korean-Prompt-Based Engineering for Few-Shot Learners.
IEEE Access, 2022

2021
BTS: Back TranScription for Speech-to-Text Post-Processor using Text-to-Speech-to-Text.
Proceedings of the 8th Workshop on Asian Translation, 2021

2020
Automatic extraction of named entities of cyber threats using a deep Bi-LSTM-CRF network.
Int. J. Mach. Learn. Cybern., 2020

Variational Reward Estimator Bottleneck: Learning Robust Reward Estimator for Multi-Domain Task-Oriented Dialog.
CoRR, 2020

Comparison of the Evaluation Metrics for Neural Grammatical Error Correction With Overcorrection.
IEEE Access, 2020

Ancient Korean Neural Machine Translation.
IEEE Access, 2020

An Effective Domain Adaptive Post-Training Method for BERT in Response Selection.
Proceedings of the 21st Annual Conference of the International Speech Communication Association, 2020

2019
Domain Adaptive Training BERT for Response Selection.
CoRR, 2019

2018
Rich Character-Level Information for Korean Morphological Analysis and Part-of-Speech Tagging.
Proceedings of the 27th International Conference on Computational Linguistics, 2018

Character-Level Feature Extraction with Densely Connected Networks.
Proceedings of the 27th International Conference on Computational Linguistics, 2018
