Michal Derezinski

ORCID: 0009-0001-7274-539X

According to our database, Michal Derezinski authored at least 49 papers between 2014 and 2024.

Bibliography

2024
Sharp Analysis of Sketch-and-Project Methods via a Connection to Randomized Singular Value Decomposition.
SIAM J. Math. Data Sci., March 2024

Stochastic Newton Proximal Extragradient Method.
CoRR, 2024

Faster Linear Systems and Matrix Norm Approximation via Multi-level Sketched Preconditioning.
CoRR, 2024

Fine-grained Analysis and Faster Algorithms for Iteratively Solving Linear Systems.
CoRR, 2024

Distributed Least Squares in Small Space via Sketching and Bias Reduction.
CoRR, 2024

Second-order Information Promotes Mini-Batch Robustness in Variance-Reduced Gradients.
CoRR, 2024

HERTA: A High-Efficiency and Rigorous Training Algorithm for Unfolded Graph Neural Networks.
CoRR, 2024

Solving Dense Linear Systems Faster Than via Preconditioning.
Proceedings of the 56th Annual ACM Symposium on Theory of Computing, 2024

Optimal Embedding Dimension for Sparse Subspace Embeddings.
Proceedings of the 56th Annual ACM Symposium on Theory of Computing, 2024

Recent and Upcoming Developments in Randomized Numerical Linear Algebra for Machine Learning.
Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2024

2023
Hessian averaging in stochastic Newton methods achieves superlinear convergence.
Math. Program., 2023

Surrogate-based Autotuning for Randomized Sketching Algorithms in Regression Problems.
CoRR, 2023

Randomized Numerical Linear Algebra: A Perspective on the Field With an Eye to Software.
CoRR, 2023

Algorithmic Gaussianization through Sketching: Converting Data into Sub-gaussian Random Designs.
Proceedings of the Thirty Sixth Annual Conference on Learning Theory, 2023

2022
Unbiased estimators for random design regression.
J. Mach. Learn. Res., 2022

Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches.
CoRR, 2022

Domain Sparsification of Discrete Distributions Using Entropic Independence.
Proceedings of the 13th Innovations in Theoretical Computer Science Conference, 2022

2021
LocalNewton: Reducing Communication Bottleneck for Distributed Learning.
CoRR, 2021

LocalNewton: Reducing communication rounds for distributed learning.
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, 2021

Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Improved Guarantees and a Multiple-descent Curve for Column Subset Selection and the Nyström Method (Extended Abstract).
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021

Sparse sketches with small inversion bias.
Proceedings of the Conference on Learning Theory, 2021

Query complexity of least absolute deviation regression via robust uniform convergence.
Proceedings of the Conference on Learning Theory, 2021

2020
Determinantal Point Processes in Randomized Numerical Linear Algebra.
CoRR, 2020

Improved guarantees and a multiple-descent curve for the Column Subset Selection Problem and the Nyström method.
CoRR, 2020

Exact expressions for double descent and implicit regularization via surrogate random design.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Precise expressions for random projections: Low-rank approximation and randomized Newton.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Sampling from a k-DPP without looking at all items.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Isotropy and Log-Concave Polynomials: Accelerated Sampling and High-Precision Counting of Matroid Bases.
Proceedings of the 61st IEEE Annual Symposium on Foundations of Computer Science, 2020

Convergence Analysis of Block Coordinate Algorithms with Determinantal Sampling.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

Bayesian experimental design using regularized determinantal point processes.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
Convergence Analysis of the Randomized Newton Method with Determinantal Sampling.
CoRR, 2019

Distributed estimation of the inverse Hessian by determinantal averaging.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Exact sampling of determinantal point processes with sublinear time preprocessing.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression.
Proceedings of the Conference on Learning Theory, 2019

Fast determinantal point processes via distortion-free intermediate sampling.
Proceedings of the Conference on Learning Theory, 2019

Correcting the bias in least squares regression with volume-rescaled sampling.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

2018
Volume sampling for linear regression.
PhD thesis, 2018

Reverse Iterative Volume Sampling for Linear Regression.
J. Mach. Learn. Res., 2018

Tail bounds for volume sampled linear regression.
CoRR, 2018

Leveraged volume sampling for linear regression.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Discovering Surprising Documents with Context-Aware Word Representations.
Proceedings of the 23rd International Conference on Intelligent User Interfaces, 2018

Subsampling for Ridge Regression via Regularized Volume Sampling.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018

Batch-Expansion Training: An Efficient Optimization Framework.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018

2017
Batch-Expansion Training: An Efficient Optimization Paradigm for Machine Learning.
CoRR, 2017

Unbiased estimates for linear regression via volume sampling.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

2014
The limits of squared Euclidean distance regularization.
Proceedings of the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, 2014
