Preetum Nakkiran

According to our database, Preetum Nakkiran authored at least 43 papers between 2014 and 2024.

Bibliography

2024
Loss Minimization Yields Multicalibration for Large Neural Networks.
Proceedings of the 15th Innovations in Theoretical Computer Science Conference, 2024

2023
Perspectives on the State and Future of Deep Learning - 2023.
CoRR, 2023

LiDAR: Sensing Linear Probing Performance in Joint Embedding SSL Architectures.
CoRR, 2023

Vanishing Gradients in Reinforcement Finetuning of Language Models.
CoRR, 2023

What Algorithms can Transformers Learn? A Study in Length Generalization.
CoRR, 2023

Smooth ECE: Principled Reliability Diagrams via Kernel Smoothing.
CoRR, 2023

A Unifying Theory of Distance from Calibration.
Proceedings of the 55th Annual ACM Symposium on Theory of Computing, 2023

When Does Optimizing a Proper Loss Yield Calibration?
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Deconstructing Distributions: A Pointwise Framework of Learning.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Near-Optimal NP-Hardness of Approximating Max k-CSP_R.
Theory Comput., 2022

General Strong Polarization.
J. ACM, 2022

APE: Aligning Pretrained Encoders to Quickly Learn Aligned Multimodal Representations.
CoRR, 2022

The Calibration Generalization Gap.
CoRR, 2022

Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting.
CoRR, 2022

Limitations of the NTK for Understanding Generalization in Deep Learning.
CoRR, 2022

What You See is What You Get: Distributional Generalization for Algorithm Design in Deep Learning.
CoRR, 2022

Limitations of Neural Collapse for Understanding Generalization in Deep Learning.
CoRR, 2022

Benign, Tempered, or Catastrophic: Toward a Refined Taxonomy of Overfitting.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

What You See is What You Get: Principled Deep Learning via Distributional Generalization.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Knowledge Distillation: Bad Models Can Be Good Role Models.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

2021
Turing-Universal Learners with Optimal Scaling Laws.
CoRR, 2021

Revisiting Model Stitching to Compare Neural Representations.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Optimal Regularization can Mitigate Double Descent.
Proceedings of the 9th International Conference on Learning Representations, 2021

The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers.
Proceedings of the 9th International Conference on Learning Representations, 2021

2020
The Deep Bootstrap: Good Online Learners are Good Offline Generalizers.
CoRR, 2020

Distributional Generalization: A New Kind of Generalization.
CoRR, 2020

Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems.
CoRR, 2020

Deep Double Descent: Where Bigger Models and More Data Hurt.
Proceedings of the 8th International Conference on Learning Representations, 2020

2019
More Data Can Hurt for Linear Regression: Sample-wise Double Descent.
CoRR, 2019

SGD on Neural Networks Learns Functions of Increasing Complexity.
CoRR, 2019

Adversarial Robustness May Be at Odds With Simplicity.
CoRR, 2019

SGD on Neural Networks Learns Functions of Increasing Complexity.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Algorithmic Polarization for Hidden Markov Models.
Proceedings of the 10th Innovations in Theoretical Computer Science Conference, 2019

Computational Limitations in Robust Classification and Win-Win Results.
Proceedings of the Conference on Learning Theory, 2019

Tracking the l_2 Norm with Constant Update Time.
Proceedings of Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM), 2019

2018
The Generic Holdout: Preventing False-Discoveries in Adaptive Data Science.
CoRR, 2018

2017
Predicting Positive and Negative Links with Noisy Queries: Theory & Practice.
CoRR, 2017

2016
Optimal systematic distributed storage codes with fast encoding.
Proceedings of the IEEE International Symposium on Information Theory, 2016

Near-Optimal UGC-hardness of Approximating Max k-CSP_R.
Proceedings of Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM), 2016

2015
Compressing deep neural networks using a rank-constrained topology.
Proceedings of INTERSPEECH, 2015

Automatic gain control and multi-style training for robust small-footprint keyword spotting with deep neural networks.
Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015

Having Your Cake and Eating It Too: Jointly Optimal Erasure Codes for I/O, Storage, and Network-bandwidth.
Proceedings of the 13th USENIX Conference on File and Storage Technologies, 2015

2014
Fundamental limits on communication for oblivious updates in storage networks.
Proceedings of the IEEE Global Communications Conference, 2014
