Tom McCoy

Affiliations:
  • Princeton University, NJ, USA
  • Johns Hopkins University, Baltimore, MD, USA (former)
  • Yale University, New Haven, CT, USA (former)


According to our database, Tom McCoy authored at least 27 papers between 2017 and 2024.

Bibliography

2024
Distilling Symbolic Priors for Concept Learning into Neural Networks.
CoRR, 2024

2023
Deep de Finetti: Recovering Topic Distributions from Large Language Models.
CoRR, 2023

Bayes in the age of intelligent machines.
CoRR, 2023

Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve.
CoRR, 2023

Modeling rapid language learning by distilling Bayesian priors into artificial neural networks.
CoRR, 2023

How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages.
CoRR, 2022

Neurocompositional Computing: From the Central Paradox of Cognition to a New Generation of AI Systems.
AI Mag., 2022

2021
How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN.
CoRR, 2021

2020
Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks.
Trans. Assoc. Comput. Linguistics, 2020

Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis.
Proceedings of the 28th International Conference on Computational Linguistics, 2020

Universal linguistic inductive biases via meta-learning.
Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, 2020

Discovering the Compositional Structure of Vector Representations with Role Learning Networks.
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2020

BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance.
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2020

Syntactic Data Augmentation Increases Robustness to Inference Heuristics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

2019
Probing What Different NLP Tasks Teach Machines about Function Word Comprehension.
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics, 2019

What do you learn from context? Probing for sentence structure in contextualized word representations.
Proceedings of the 7th International Conference on Learning Representations, 2019

RNNs implicitly implement tensor-product representations.
Proceedings of the 7th International Conference on Learning Representations, 2019

Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019

Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019

2018
Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling.
CoRR, 2018

Non-entailed subsequences as a challenge for natural language inference.
CoRR, 2018

Parser combinators for Tigrinya and Oromo morphology.
Proceedings of the Eleventh International Conference on Language Resources and Evaluation, 2018

Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks.
Proceedings of the 40th Annual Meeting of the Cognitive Science Society, 2018

2017
Linguistically Rich Vector Representations of Supertags for TAG Parsing.
Proceedings of the 13th International Workshop on Tree Adjoining Grammars and Related Formalisms, 2017

TAG Parsing with Neural Networks and Vector Representations of Supertags.
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017

