Róbert Csordás

According to our database, Róbert Csordás authored at least 18 papers between 2015 and 2023.

Bibliography

2023
SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention.
CoRR, 2023

Automating Continual Learning.
CoRR, 2023

Mindstorms in Natural Language-Based Societies of Mind.
CoRR, 2023

Topological Neural Discrete Representation Learning à la Kohonen.
CoRR, 2023

Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Approximating Two-Layer Feedforward Networks for Efficient Transformers.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Randomized Positional Encodings Boost Length Generalization of Transformers.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2023

2022
A Modern Self-Referential Weight Matrix That Learns to Modify Itself.
Proceedings of the International Conference on Machine Learning, 2022

The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention.
Proceedings of the International Conference on Machine Learning, 2022

The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization.
Proceedings of the Tenth International Conference on Learning Representations, 2022

CTL++: Evaluating Generalization on Never-Seen Compositional Patterns of Known Functions, and Compatibility of Neural Representations.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2021
Improving Baselines in the Wild.
CoRR, 2021

Going Beyond Linear Transformers with Recurrent Fast Weight Programmers.
Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks.
Proceedings of the 9th International Conference on Learning Representations, 2021

The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

2019
Improving Differentiable Neural Computers Through Memory Masking, De-allocation, and Link Distribution Sharpness Control.
Proceedings of the 7th International Conference on Learning Representations, 2019

2015
Detecting Objects Thrown over Fence in Outdoor Scenes.
Proceedings of VISAPP 2015, 2015
