Tal Linzen

Affiliations:
  • New York University, NY, USA


According to our database, Tal Linzen has authored at least 75 papers between 2014 and 2024.

Bibliography

2024
SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser.
CoRR, 2024

Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment.
CoRR, 2024

2023
In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax.
CoRR, 2023

A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models.
CoRR, 2023

The Impact of Depth and Width on Transformer Language Model Generalization.
CoRR, 2023

Do Language Models Refer?
CoRR, 2023

Language Models Can Learn Exceptions to Syntactic Rules.
CoRR, 2023

A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

SLOG: A Structural Generalization Benchmark for Semantic Parsing.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark.
Trans. Assoc. Comput. Linguistics, 2022

Uncontrolled Lexical Exposure Leads to Overestimation of Compositional Generalization in Pretrained Models.
CoRR, 2022

When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

Improving Compositional Generalization with Latent Structure and Data Augmentation.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

The MultiBERTs: BERT Reproductions for Robustness Analysis.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models.
Proceedings of the 26th Conference on Computational Natural Language Learning, 2022

Entailment Semantics Can Be Extracted from an Ideal Language Model.
Proceedings of the 26th Conference on Computational Natural Language Learning, 2022

Characterizing Verbatim Short-Term Memory in Neural Language Models.
Proceedings of the 26th Conference on Computational Natural Language Learning, 2022

Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities.
Proceedings of the 26th Conference on Computational Natural Language Learning, 2022

Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models.
Findings of the Association for Computational Linguistics: ACL 2022, 2022

2021
How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN.
CoRR, 2021

Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks.
CoRR, 2021

Evaluating Groundedness in Dialogue Systems: The BEGIN Benchmark.
CoRR, 2021

Single-Stage Prediction Models Do Not Explain the Magnitude of Syntactic Disambiguation Difficulty.
Cogn. Sci., 2021

Predicting Inductive Biases of Pre-Trained Models.
Proceedings of the 9th International Conference on Learning Representations, 2021

Frequency Effects on Syntactic Rule Learning in Transformers.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

Does Putting a Linguist in the Loop Improve NLU Data Collection?
Findings of the Association for Computational Linguistics: EMNLP 2021, 2021

Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction.
Proceedings of the 25th Conference on Computational Natural Language Learning, 2021

NOPE: A Corpus of Naturally-Occurring Presuppositions in English.
Proceedings of the 25th Conference on Computational Natural Language Learning, 2021

The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation.
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2021

Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks.
Trans. Assoc. Comput. Linguistics, 2020

Syntactic Structure from Deep Learning.
CoRR, 2020

COGS: A Compositional Generalization Challenge Based on Semantic Interpretation.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

Universal linguistic inductive biases via meta-learning.
Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, 2020

Neural Language Models Capture Some, But Not All Agreement Attraction Effects.
Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, 2020

Discovering the Compositional Structure of Vector Representations with Role Learning Networks.
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2020

BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance.
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2020

Cross-Linguistic Syntactic Evaluation of Word Prediction Models.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

Syntactic Data Augmentation Increases Robustness to Inference Heuristics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

2019
Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop.
Nat. Lang. Eng., 2019

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension.
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics, 2019

Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages.
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019

RNNs implicitly implement tensor-product representations.
Proceedings of the 7th International Conference on Learning Representations, 2019

Quantity doesn't buy quality syntax with neural language models.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019

Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models.
Proceedings of the 23rd Conference on Computational Natural Language Learning, 2019

How much harder are hard garden-path sentences than easy ones?
Proceedings of the 41st Annual Meeting of the Cognitive Science Society, 2019

Human few-shot learning of compositional instructions.
Proceedings of the 41st Annual Meeting of the Cognitive Science Society, 2019

Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019

2018
Non-entailed subsequences as a challenge for natural language inference.
CoRR, 2018

Can Entropy Explain Successor Surprisal Effects in Reading?
CoRR, 2018

What can linguistics and deep learning contribute to each other?
CoRR, 2018

Colorless Green Recurrent Networks Dream Hierarchically.
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018

A Neural Model of Adaptation in Reading.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018

Targeted Syntactic Evaluation of Language Models.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018

Modeling garden path effects without explicit hierarchical syntax.
Proceedings of the 40th Annual Meeting of the Cognitive Science Society, 2018

Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks.
Proceedings of the 40th Annual Meeting of the Cognitive Science Society, 2018

Distinct patterns of syntactic agreement errors in recurrent networks and humans.
Proceedings of the 40th Annual Meeting of the Cognitive Science Society, 2018

Phonological (un)certainty weights lexical activation.
Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics, 2018

2017
Comparing Character-level Neural Language Models Using a Lexical Decision Task.
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, 2017

Exploring the Syntactic Abilities of RNNs with Multi-task Learning.
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), 2017

Prediction and uncertainty in an artificial language.
Proceedings of the 39th Annual Meeting of the Cognitive Science Society, 2017

2016
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies.
Trans. Assoc. Comput. Linguistics, 2016

Uncertainty and Expectation in Sentence Processing: Evidence From Subcategorization Distributions.
Cogn. Sci., 2016

Quantificational features in distributional word representations.
Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, 2016

Issues in evaluating semantic spaces using word analogies.
Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, 2016

Evaluating vector space models using human semantic priming results.
Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, 2016

2015
Lexical Preactivation in Basic Linguistic Phrases.
J. Cogn. Neurosci., 2015

A model of rapid phonotactic generalization.
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015

2014
The timecourse of phonotactic learning.
Proceedings of the 36th Annual Meeting of the Cognitive Science Society, 2014

Investigating the role of entropy in sentence processing.
Proceedings of the Fifth Workshop on Cognitive Modeling and Computational Linguistics, 2014
