Yohei Oseki

Orcid: 0000-0002-1189-1588

According to our database, Yohei Oseki authored at least 35 papers between 2019 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
Derivational Probing: Unveiling the Layer-wise Derivation of Syntactic Structures in Neural Language Models.
CoRR, June, 2025

Massive Supervised Fine-tuning Experiments Reveal How Data, Layer, and Training Factors Shape LLM Alignment Quality.
CoRR, June, 2025

Do LLMs Need to Think in One Language? Correlation between Latent Language and Task Performance.
CoRR, May, 2025

Rethinking the Relationship between the Power Law and Hierarchical Structures.
CoRR, May, 2025

How LLMs Learn: Tracing Internal Representations with Sparse Autoencoders.
CoRR, March, 2025

Can Language Models Learn Typologically Implausible Languages?
CoRR, February, 2025

If Attention Serves as a Cognitive Model of Human Memory Retrieval, What is the Plausible Memory Representation?
CoRR, February, 2025

Large Language Models Are Human-Like Internally.
CoRR, February, 2025

If Attention Serves as a Cognitive Model of Human Memory Retrieval, What is the Plausible Memory Representation?
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Developmentally-plausible Working Memory Shapes a Critical Period for Language Acquisition.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
BabyLM Challenge: Exploring the Effect of Variation Sets on Language Model Training Efficiency.
CoRR, 2024

Is Structure Dependence Shaped for Efficient Communication?: A Case Study on Coordination.
CoRR, 2024

LLM-jp: A Cross-organizational Project for the Research and Development of Fully Open Japanese LLMs.
CoRR, 2024

Tree-Planted Transformers: Large Language Models with Implicit Syntactic Supervision.
CoRR, 2024

Psychometric Predictive Power of Large Language Models.
Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2024, 2024

Can Language Models Induce Grammatical Knowledge from Indirect Evidence?
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Targeted Syntactic Evaluation on the Chomsky Hierarchy.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

JCoLA: Japanese Corpus of Linguistic Acceptability.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

Cognitive Information Bottleneck: Extracting Minimal Sufficient Cognitive Language Processing Signals.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

Learning Bidirectional Morphological Inflection like Humans.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

Tree-Planted Transformers: Unidirectional Transformer Language Models with Implicit Syntactic Supervision.
Proceedings of the Findings of the Association for Computational Linguistics, 2024

Emergent Word Order Universals from Cognitively-Motivated Language Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

Modeling Overregularization in Children with Small Language Models.
Proceedings of the Findings of the Association for Computational Linguistics, 2024

2023
JBLiMP: Japanese Benchmark of Linguistic Minimal Pairs.
Proceedings of the Findings of the Association for Computational Linguistics: EACL 2023, 2023

How Much Syntactic Supervision is "Good Enough"?
Proceedings of the Findings of the Association for Computational Linguistics: EACL 2023, 2023

2022
What is the role of the next generation of cognitive robotics?
Adv. Robotics, 2022

Formalizing Argument Structures with Combinatory Categorial Grammar.
Proceedings of the Logic and Engineering of Natural Language Semantics, 2022

Composition, Attention, or Both?
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

Context Limitations Make Neural Language Models More Human-Like.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2021
Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

Effective Batching for Recurrent Neural Network Grammars.
Proceedings of the Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

Lower Perplexity is Not Always Human-Like.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

CMCL 2021 Shared Task on Eye-Tracking Prediction.
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, 2021

2020
Design of BCCWJ-EEG: Balanced Corpus with Human Electroencephalography.
Proceedings of the 12th Language Resources and Evaluation Conference, 2020

2019
Do cross-linguistic patterns of morpheme order reflect a cognitive bias?
Proceedings of the 41st Annual Meeting of the Cognitive Science Society, 2019
