Yamini Bansal

According to our database, Yamini Bansal authored at least 14 papers between 2018 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models.
CoRR, 2023

On Privileged and Convergent Bases in Neural Network Representations.
CoRR, 2023

The unreasonable effectiveness of few-shot learning for machine translation.
CoRR, 2023

The Unreasonable Effectiveness of Few-shot Learning for Machine Translation.
Proceedings of the International Conference on Machine Learning, 2023

2022
Limitations of the NTK for Understanding Generalization in Deep Learning.
CoRR, 2022

Data Scaling Laws in NMT: The Effect of Noise and Architecture.
CoRR, 2022

Data Scaling Laws in NMT: The Effect of Noise and Architecture.
Proceedings of the International Conference on Machine Learning, 2022

2021
Revisiting Model Stitching to Compare Neural Representations.
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021

For self-supervised learning, Rationality implies generalization, provably.
Proceedings of the 9th International Conference on Learning Representations, 2021

2020
Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling.
CoRR, 2020

Distributional Generalization: A New Kind of Generalization.
CoRR, 2020

Deep Double Descent: Where Bigger Models and More Data Hurt.
Proceedings of the 8th International Conference on Learning Representations, 2020

2018
Minnorm training: an algorithm for training over-parameterized deep neural networks.
CoRR, 2018

On the Information Bottleneck Theory of Deep Learning.
Proceedings of the 6th International Conference on Learning Representations, 2018

