Seonghyeon Ye

According to our database, Seonghyeon Ye authored at least 16 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.
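
Both figures are shortest-path distances in the co-authorship graph: an Erdős number is the minimum number of co-authorship links separating an author from Paul Erdős, and a Dijkstra number is the analogous distance to Edsger W. Dijkstra. The sketch below shows how such a distance can be computed with a breadth-first search; the function name and the toy graph are hypothetical illustrations, not data from this database.

from collections import deque

def collaborative_distance(coauthors, source, target):
    """Return the minimum number of co-authorship links between source and
    target in an adjacency-list graph {author: set of co-authors}, or None
    if the two authors are not connected."""
    if source == target:
        return 0
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        author, dist = queue.popleft()
        for coauthor in coauthors.get(author, ()):
            if coauthor == target:
                return dist + 1
            if coauthor not in seen:
                seen.add(coauthor)
                queue.append((coauthor, dist + 1))
    return None

# Hypothetical toy graph: the chain A - B - C - D gives distance 3 between A and D.
toy_graph = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B", "D"},
    "D": {"C"},
}
print(collaborative_distance(toy_graph, "A", "D"))  # prints 3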

Bibliography

2024
INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models.
CoRR, 2024

Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models.
CoRR, 2023

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets.
CoRR, 2023

Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis.
CoRR, 2023

In-Context Instruction Learning.
CoRR, 2023

Exploring the Benefits of Training Expert Language Models over Instruction Tuning.
Proceedings of the International Conference on Machine Learning, 2023

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

2022
Retrieval of Soft Prompt Enhances Zero-Shot Task Generalization.
CoRR, 2022

Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts.
Proceedings of the Transfer Learning for Natural Language Processing Workshop, 2022

Towards Continual Knowledge Learning of Language Models.
Proceedings of the Tenth International Conference on Learning Representations, 2022

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2021
Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

Dimensional Emotion Detection from Categorical Emotion.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

