Aaron Mueller

According to our database, Aaron Mueller authored at least 23 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models.
CoRR, 2024

2023
In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax.
CoRR, 2023

Function Vectors in Large Language Models.
CoRR, 2023

Inverse Scaling: When Bigger Isn't Better.
CoRR, 2023

Call for Papers - The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus.
CoRR, 2023

Language model acceptability judgements are not always robust to context.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

Meta-training with Demonstration Retrieval for Efficient Few-shot Learning.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

What Do NLP Researchers Believe? Results of the NLP Community Metasurvey.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Bernice: A Multilingual Pre-trained Encoder for Twitter.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models.
Proceedings of the 26th Conference on Computational Natural Language Learning, 2022

Label Semantic Aware Pre-training for Few-shot Text Classification.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models.
Findings of the Association for Computational Linguistics: ACL 2022, 2022

2021
Demographic Representation and Collective Storytelling in the Me Too Twitter Hashtag Activism Movement.
Proc. ACM Hum. Comput. Interact., 2021

Fine-tuning Encoders for Improved Monolingual and Zero-shot Polylingual Neural Topic Modeling.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021

Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
Decoding Methods for Neural Narrative Generation.
CoRR, 2020

Fine-grained Morphosyntactic Analysis and Generation Tools for More Than One Thousand Languages.
Proceedings of The 12th Language Resources and Evaluation Conference, 2020

An Analysis of Massively Multilingual Neural Machine Translation for Low-Resource Languages.
Proceedings of The 12th Language Resources and Evaluation Conference, 2020

The Johns Hopkins University Bible Corpus: 1600+ Tongues for Typological Exploration.
Proceedings of The 12th Language Resources and Evaluation Conference, 2020

Cross-Linguistic Syntactic Evaluation of Word Prediction Models.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

2019
Quantity doesn't buy quality syntax with neural language models.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019

Modeling Color Terminology Across Thousands of Languages.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019
