Andrew K. Lampinen

ORCID: 0000-0002-6988-8437

Affiliations:
  • DeepMind
  • Stanford University, Department of Psychology, CA, USA (former)


According to our database, Andrew K. Lampinen authored at least 33 papers between 2017 and 2023.

Bibliography

2023
SODA: Bottleneck Diffusion Models for Representation Learning.
CoRR, 2023

Evaluating Spatial Understanding of Large Language Models.
CoRR, 2023

Getting aligned on representational alignment.
CoRR, 2023

Improving neural network representations using human similarity judgments.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Passive learning of active causal strategies in agents and language models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Combining Behaviors with the Successor Features Keyboard.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Symbol tuning improves in-context learning in language models.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Know your audience: specializing grounded language models with listener subtraction.
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023

2022
Can language models handle recursively nested grammatical structures? A case study on comparing models and humans.
CoRR, 2022

Transformers generalize differently from information stored in context vs in weights.
CoRR, 2022

Language models show human-like content effects on reasoning.
CoRR, 2022

Know your audience: specializing grounded language models with the game of Dixit.
CoRR, 2022

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models.
CoRR, 2022

Semantic Exploration from Language Abstractions and Pretrained Representations.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Data Distributional Properties Drive Emergent In-Context Learning in Transformers.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Tell me why! Explanations support learning relational and causal structure.
Proceedings of the International Conference on Machine Learning, 2022

Can language models learn from explanations in context?
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

Zipfian Environments for Reinforcement Learning.
Proceedings of the Conference on Lifelong Learning Agents, 2022

2021
Feature-Attending Recurrent Modules for Generalization in Reinforcement Learning.
CoRR, 2021

Symbolic Behaviour in Artificial Intelligence.
CoRR, 2021

Towards mental time travel: a hierarchical memory for reinforcement learning agents.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

2020
Transforming task representations to perform novel tasks.
Proc. Natl. Acad. Sci. USA, 2020

Transforming task representations to allow deep learning models to perform novel tasks.
CoRR, 2020

What shapes feature representations? Exploring datasets, architectures, and training.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Automated curriculum generation through setter-solver interactions.
Proceedings of the 8th International Conference on Learning Representations, 2020

Environmental drivers of systematicity and generalization in a situated agent.
Proceedings of the 8th International Conference on Learning Representations, 2020

2019
Emergent Systematic Generalization in a Situated Agent.
CoRR, 2019

Automated curricula through setter-solver interactions.
CoRR, 2019

Embedded Meta-Learning: Toward more flexible deep-learning models.
CoRR, 2019

An analytic theory of generalization dynamics and transfer learning in deep linear networks.
Proceedings of the 7th International Conference on Learning Representations, 2019

2017
One-shot and few-shot learning of word embeddings.
CoRR, 2017

Improving image generative models with human interactions.
CoRR, 2017

Analogies Emerge from Learning Dynamics in Neural Networks.
Proceedings of the 39th Annual Meeting of the Cognitive Science Society, 2017
