Michael Hahn

ORCID: 0000-0003-4828-4834

Affiliations:
  • Saarland University, Department of Language Science and Technology, Saarbrücken, Germany
  • Stanford University, Linguistics Department, CA, USA (former, PhD 2022)
  • University of Edinburgh, School of Informatics, UK (former)
  • University of Tübingen, Germany (former)


According to our database, Michael Hahn authored at least 21 papers between 2011 and 2024.

Bibliography

2024
Why are Sensitive Functions Hard for Transformers?
CoRR, 2024

2023
A Cross-Linguistic Pressure for Uniform Information Density in Word Order.
CoRR, 2023

A Theory of Emergent In-Context Learning as Implicit Structure Induction.
CoRR, 2023

2022
Crosslinguistic word order variation reflects evolutionary pressures of dependency and information locality.
CoRR, 2022

2021
Sensitivity as a Complexity Measure for Sequence Classification Tasks.
Trans. Assoc. Comput. Linguistics, 2021

An Information-Theoretic Characterization of Morphological Fusion.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

2020
Theoretical Limitations of Self-Attention in Neural Sequence Models.
Trans. Assoc. Comput. Linguistics, 2020

RNNs can generate bounded hierarchical languages with optimal memory.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

2019
Tabula nearly rasa: Probing the linguistic knowledge of character-level neural language models trained on unsegmented text.
Trans. Assoc. Comput. Linguistics, 2019

Estimating Predictive Rate-Distortion Curves via Neural Variational Inference.
Entropy, 2019

Character-based Surprisal as a Model of Human Reading in the Presence of Errors.
CoRR, 2019

Character-based Surprisal as a Model of Reading Difficulty in the Presence of Errors.
Proceedings of the 41st Annual Meeting of the Cognitive Science Society, 2019

2018
Modeling Task Effects in Human Reading with Neural Attention.
CoRR, 2018

Wreath Products of Distributive Forest Algebras.
Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, 2018

An Information-Theoretic Explanation of Adjective Ordering Preferences.
Proceedings of the 40th Annual Meeting of the Cognitive Science Society, 2018

2016
Modeling Human Reading with Neural Attention.
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016

2015
Henkin semantics for reasoning with natural language.
J. Lang. Model., 2015

Visibly Counter Languages and the Structure of NC¹.
Proceedings of Mathematical Foundations of Computer Science 2015, 2015

2013
CoMeT: Integrating different levels of linguistic modeling for meaning assessment.
Proceedings of the 7th International Workshop on Semantic Evaluation, 2013

2012
Evaluating the Meaning of Answers to Reading Comprehension Questions: A Semantics-Based Approach.
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, 2012

2011
On deriving semantic representations from dependencies: A Practical approach for evaluating meaning in learner corpora.
Computational Dependency Theory [papers from the International Conference on Dependency Linguistics], 2011

