Amit Daniely

Affiliations:
  • The Hebrew University of Jerusalem, Israel


According to our database, Amit Daniely authored at least 59 papers between 2010 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of two.


Bibliography

2024
RedEx: Beyond Fixed Representation Methods via Convex Optimization.
CoRR, 2024

2023
Locally Optimal Descent for Dynamic Stepsize Scheduling.
CoRR, 2023

Efficiently Learning Neural Networks: What Assumptions May Suffice?
CoRR, 2023

Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Most Neural Networks Are Almost Learnable.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Multiclass Boosting: Simple and Intuitive Weak Learning Criteria.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

An Exact Poly-Time Membership-Queries Algorithm for Extracting a Three-Layer ReLU Network.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
On the Sample Complexity of Two-Layer Networks: Lipschitz vs. Element-Wise Lipschitz Activation.
CoRR, 2022

Approximate Description Length, Covering Numbers, and VC Dimension.
CoRR, 2022

Monotone Learning.
Proceedings of the Conference on Learning Theory, London, UK, 2022

2021
An Exact Poly-Time Membership-Queries Algorithm for Extraction a three-Layer ReLU Network.
CoRR, 2021

Asynchronous Stochastic Optimization Robust to Arbitrary Delays.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

From Local Pseudorandom Generators to Hardness of Learning.
Proceedings of the Conference on Learning Theory, 2021

2020
Most ReLU Networks Suffer from ℓ² Adversarial Perturbations.
CoRR, 2020

Memorizing Gaussians with no over-parameterizaion via gradient decent on neural networks.
CoRR, 2020

On the Complexity of Minimizing Convex Finite Sums Without Using the Indices of the Individual Functions.
CoRR, 2020

Hardness of Learning Neural Networks with Natural Weights.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Most ReLU Networks Suffer from ℓ² Adversarial Perturbations.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Learning Parities with Neural Networks.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Neural Networks Learning and Memorization with (almost) no Over-Parameterization.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

The Implicit Bias of Depth: How Incremental Learning Drives Generalization.
Proceedings of the 8th International Conference on Learning Representations, 2020

ID3 Learns Juntas for Smoothed Product Distributions.
Proceedings of the Conference on Learning Theory, 2020

Distribution Free Learning with Local Queries.
Proceedings of the Algorithmic Learning Theory, 2020

2019
On the Optimality of Trees Generated by ID3.
CoRR, 2019

Competitive ratio versus regret minimization: achieving the best of both worlds.
CoRR, 2019

Generalization Bounds for Neural Networks via Approximate Description Length.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Locally Private Learning without Interaction Requires Separation.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Open Problem: Is Margin Sufficient for Non-Interactive Private Distributed Learning?
Proceedings of the Conference on Learning Theory, 2019

Competitive ratio vs regret minimization: achieving the best of both worlds.
Proceedings of the Algorithmic Learning Theory, 2019

Learning Rules-First Classifiers.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

2018
Inapproximability of Truthful Mechanisms via Generalizations of the Vapnik-Chervonenkis Dimension.
SIAM J. Comput., 2018

Learning without Interaction Requires Separation.
CoRR, 2018

Planning and Learning with Stochastic Action Sets.
CoRR, 2018

Learning with Rules.
CoRR, 2018

2017
Random Features for Compositional Kernels.
CoRR, 2017

SGD Learns the Conjugate Kernel Class of the Network.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

Short and Deep: Sketching and Neural Networks.
Proceedings of the 5th International Conference on Learning Representations, 2017

Depth Separation for Neural Networks.
Proceedings of the 30th Conference on Learning Theory, 2017

2016
Behavior-Based Machine-Learning: A Hybrid Approach for Predicting Human Decision Making.
CoRR, 2016

Sketching and Neural Networks.
CoRR, 2016

Complexity theoretic limitations on learning halfspaces.
Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, 2016

Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity.
Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 2016

Complexity Theoretic Limitations on Learning DNF's.
Proceedings of the 29th Conference on Learning Theory, 2016

2015
Multiclass learnability and the ERM principle.
J. Mach. Learn. Res., 2015

Inapproximability of Truthful Mechanisms via Generalizations of the VC Dimension.
Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, 2015

Strongly Adaptive Online Learning.
Proceedings of the 32nd International Conference on Machine Learning, 2015

A PTAS for Agnostically Learning Halfspaces.
Proceedings of The 28th Conference on Learning Theory, 2015

2014
Learning Economic Parameters from Revealed Preferences.
Proceedings of the Web and Internet Economics - 10th International Conference, 2014

From average case complexity to improper learning complexity.
Proceedings of the Symposium on Theory of Computing, 2014

Optimal learners for multiclass problems.
Proceedings of The 27th Conference on Learning Theory, 2014

The Complexity of Learning Halfspaces using Generalized Linear Methods.
Proceedings of The 27th Conference on Learning Theory, 2014

2013
On the practically interesting instances of MAXCUT.
Proceedings of the 30th International Symposium on Theoretical Aspects of Computer Science, 2013

More data speeds up training time in learning halfspaces over sparse vectors.
Proceedings of the Advances in Neural Information Processing Systems 26: Annual Conference on Neural Information Processing Systems 2013, 2013

The price of bandit information in multiclass online classification.
Proceedings of the Conference on Learning Theory, 2013

2012
Tight products and graph expansion.
J. Graph Theory, 2012

The error rate of learning halfspaces using Kernel-SVMs.
CoRR, 2012

Clustering is difficult only when it does not matter.
CoRR, 2012

Multiclass Learning Approaches: A Theoretical Comparison with Implications.
Proceedings of the Advances in Neural Information Processing Systems 25: Annual Conference on Neural Information Processing Systems 2012, 2012

2010
Tight products and Expansion.
CoRR, 2010
