Seonghyeon Ye

According to our database, Seonghyeon Ye authored at least 32 papers between 2021 and 2025.

Bibliography

2025
FLARE: Robot Learning with Implicit World Modeling.
CoRR, May 2025

DreamGen: Unlocking Generalization in Robot Learning through Neural Trajectories.
CoRR, May 2025

GR00T N1: An Open Foundation Model for Generalist Humanoid Robots.
CoRR, March 2025

Magma: A Foundation Model for Multimodal AI Agents.
CoRR, February 2025

The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models.
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, 2025

Latent Action Pretraining from Videos.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Magma: A Foundation Model for Multimodal AI Agents.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

2024
Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis.
Trans. Assoc. Comput. Linguistics, 2024

Bridging the Data Provenance Gap Across Text, Speech and Video.
CoRR, 2024

Consent in Crisis: The Rapid Decline of the AI Data Commons.
CoRR, 2024

Instruction Matters, a Simple yet Effective Task Selection Approach in Instruction Tuning for Specific Tasks.
CoRR, 2024

Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards.
CoRR, 2024

INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models.
CoRR, 2024


How Do Large Language Models Acquire Factual Knowledge During Pretraining?
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Carpe diem: On the Evaluation of World Knowledge in Lifelong Language Models.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Instruction Matters: A Simple yet Effective Task Selection for Optimized Instruction Tuning of Specific Tasks.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Self-Explore: Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards.
Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models.
CoRR, 2023

In-Context Instruction Learning.
CoRR, 2023

Exploring the Benefits of Training Expert Language Models over Instruction Tuning.
Proceedings of the International Conference on Machine Learning, 2023

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

2022
Retrieval of Soft Prompt Enhances Zero-Shot Task Generalization.
CoRR, 2022

Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts.
Proceedings of the Transfer Learning for Natural Language Processing Workshop, 2022

Towards Continual Knowledge Learning of Language Models.
Proceedings of the Tenth International Conference on Learning Representations, 2022

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2021
Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

Dimensional Emotion Detection from Categorical Emotion.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

