Byung-Doh Oh

According to our database, Byung-Doh Oh authored at least 12 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.

Bibliography

2024
Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times.
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, 2024

2023
Transformer-Based LM Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens.
CoRR, 2023

Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Comparison of Structural Parsers and Neural Language Models as Surprisal Estimators.
Frontiers in Artificial Intelligence, 2022

Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?
CoRR, 2022

Entropy- and Distance-Based Predictors From GPT-2 Attention Patterns Predict Reading Times Over and Above GPT-2 Surprisal.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2021
Character-based PCFG Induction for Modeling the Syntactic Acquisition of Morphologically Rich Languages.
Findings of the Association for Computational Linguistics: EMNLP 2021, 2021

Coreference-aware Surprisal Predicts Brain Response.
Findings of the Association for Computational Linguistics: EMNLP 2021, 2021

Surprisal Estimators for Human Reading Times Need Character Models.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

Contributions of Propositional Content and Syntactic Category Information in Sentence Processing.
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, 2021

Team Ohio State at CMCL 2021 Shared Task: Fine-Tuned RoBERTa for Eye-Tracking Data Prediction.
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, 2021
