Stefan L. Frank

ORCID: 0000-0002-7026-711X

According to our database, Stefan L. Frank authored at least 38 papers between 2003 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
An eye-tracking-with-EEG coregistration corpus of narrative sentences.
Lang. Resour. Evaluation, June 2024

2023
Modelling Human Word Learning and Recognition Using Visually Grounded Speech.
Cogn. Comput., January 2023

2022
Unsupervised Text Segmentation Predicts Eye Fixations During Reading.
Frontiers Artif. Intell., 2022

Modelling word learning and recognition using visually grounded speech.
CoRR, 2022

Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge.
CoRR, 2022

Bilingual Sentence Processing: when Models Meet Experiments.
Proceedings of the 44th Annual Meeting of the Cognitive Science Society, 2022

2021
Semantic Sentence Similarity: Size does not Always Matter.
Proceedings of the 22nd Annual Conference of the International Speech Communication Association (Interspeech 2021), Brno, Czechia, 2021

Cross-language structural priming in recurrent neural network language models.
Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, 2021

Human Sentence Processing: Recurrence or Attention?
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, 2021

2020
Comparing Transformers and RNNs on predicting human sentence processing data.
CoRR, 2020

Modeling the Influence of Language Input Statistics on Children's Speech Production.
Cogn. Sci., 2020

2019
Learning semantic sentence representations from visually grounded language without lexical knowledge.
Nat. Lang. Eng., 2019

Language Learning Using Speech to Image Retrieval.
Proceedings of the 20th Annual Conference of the International Speech Communication Association, 2019

The interaction between structure and meaning in sentence comprehension: Recurrent neural networks and reading times.
Proceedings of the 41st Annual Meeting of the Cognitive Science Society, 2019

Comparing Gated and Simple Recurrent Neural Network Architectures as Models of Human Sentence Processing.
Proceedings of the 41st Annual Meeting of the Cognitive Science Society, 2019

2018
Modeling the Structure and Dynamics of Semantic Processing.
Cogn. Sci., 2018

2017
Lexical representation explains cortical entrainment during speech perception.
CoRR, 2017

"He's pregnant": simulating the confusing case of gender pronoun errors in L2.
Proceedings of the 39th Annual Meeting of the Cognitive Science Society, 2017

Non-syntactic Processing Explains Cortical Entrainment During Speech Perception.
Proceedings of the 39th Annual Meeting of the Cognitive Science Society, 2017

Word Embedding Distance Does not Predict Word Reading Time.
Proceedings of the 39th Annual Meeting of the Cognitive Science Society, 2017

Data-Driven Broad-Coverage Grammars for Opinionated Natural Language Generation (ONLG).
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017

2016
Cross-Linguistic Differences in Processing Double-Embedded Relative Clauses: Working-Memory Constraints or Language Statistics?
Cogn. Sci., 2016

From Words to Behaviour via Semantic Networks.
Proceedings of the 38th Annual Meeting of the Cognitive Science Society, 2016

Statistical learning bias predicts second-language reading efficiency.
Proceedings of the 38th Annual Meeting of the Cognitive Science Society, 2016

2014
Reconciling Embodied and Distributional Accounts of Meaning in Language.
Top. Cogn. Sci., 2014

Modelling Reading Times in Bilingual Sentence Comprehension.
Proceedings of the 36th Annual Meeting of the Cognitive Science Society, 2014

2013
Uncertainty Reduction as a Measure of Cognitive Load in Sentence Comprehension.
Top. Cogn. Sci., 2013

Word surprisal predicts N400 amplitude during reading.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, 2013

2012
Lexical surprisal as a general predictor of reading time.
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2012

Early effects of word surprisal on pupil size during reading.
Proceedings of the 34th Annual Meeting of the Cognitive Science Society, 2012

2011
Sentence Comprehension as Mental Simulation: An Information-Theoretic Perspective.
Inf., 2011

Do information-theoretic measures of word-processing difficulty explain psycholinguistic phenomena?
Proceedings of the 33rd Annual Meeting of the Cognitive Science Society, 2011

2010
Sentence-processing in echo state networks: a qualitative analysis by finite state machine extraction.
Connect. Sci., 2010

2007
From Molecule to Metaphor: A Neural Theory of Language, Jerome A. Feldman (University of California, Berkeley); Cambridge, MA: The MIT Press (A Bradford Book), 2006, xx+357 pp; hardbound, ISBN 0-262-06253-4.
Comput. Linguistics, 2007

Automated Abstraction of Dynamic Neural Systems for Natural Language Processing.
Proceedings of the International Joint Conference on Neural Networks, 2007

2006
Learn more by training less: systematicity in sentence processing by recurrent networks.
Connect. Sci., 2006

Strong Systematicity in Sentence Processing by an Echo State Network.
Proceedings of the International Conference on Artificial Neural Networks (ICANN), 2006

2003
Modeling knowledge-based inferences in story comprehension.
Cogn. Sci., 2003

