Tyler A. Chang

According to our database, Tyler A. Chang authored at least 17 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Language Model Behavior: A Comprehensive Survey.
Comput. Linguistics, March 2024

Different Tokenization Schemes Lead to Comparable Performance in Spanish Number Agreement.
CoRR, 2024

Detecting Hallucination and Coverage Errors in Retrieval Augmented Generation for Controversial Topics.
CoRR, 2024

A Bit of a Problem: Measurement Disparities in Dataset Sizes Across Languages.
CoRR, 2024

2023
Do Large Language Models Know What Humans Know?
Cogn. Sci., July 2023

When Is Multilinguality a Curse? Language Modeling for 250 High- and Low-Resource Languages.
CoRR, 2023

Crosslingual Structural Priming and the Pre-Training Dynamics of Bilingual Language Models.
CoRR, 2023

Characterizing Learning Curves During Language Model Pre-Training: Learning, Forgetting, and Stability.
CoRR, 2023

Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Characterizing and Measuring Linguistic Dataset Drift.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Word Acquisition in Neural Language Models.
Trans. Assoc. Comput. Linguistics, 2022

The Geometry of Multilingual Language Model Representations.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

Distributional Semantics Still Can't Account for Affordances.
Proceedings of the 44th Annual Meeting of the Cognitive Science Society, 2022

Does Contextual Diversity Hinder Early Word Acquisition?
Proceedings of the 44th Annual Meeting of the Cognitive Science Society, 2022

2021
Co-Scale Conv-Attentional Image Transformers.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
Encodings of Source Syntax: Similarities in NMT Representations Across Target Languages.
Proceedings of the 5th Workshop on Representation Learning for NLP, 2020
