Mert Gürbüzbalaban

ORCID: 0000-0002-0575-2450

According to our database, Mert Gürbüzbalaban authored at least 62 papers between 2010 and 2024.

Bibliography

2024
Robust Accelerated Primal-Dual Methods for Computing Saddle Points.
SIAM J. Optim., March, 2024

Differential Privacy of Noisy (S)GD under Heavy-Tailed Perturbations.
CoRR, 2024

2023
Boundary Conditions for Linear Exit Time Gradient Trajectories Around Saddle Points: Analysis and Algorithm.
IEEE Trans. Inf. Theory, April, 2023

Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD than Constant Stepsize.
Trans. Mach. Learn. Res., 2023

Accelerated gradient methods for nonconvex optimization: Escape trajectories from strict saddle points and convergence to local minima.
CoRR, 2023

Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD.
CoRR, 2023

Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions.
Proceedings of the International Conference on Machine Learning, 2023

Algorithmic Stability of Heavy-Tailed Stochastic Gradient Descent on Least Squares.
Proceedings of the International Conference on Algorithmic Learning Theory, 2023

2022
Randomized Gossiping With Effective Resistance Weights: Performance Guarantees and Applications.
IEEE Trans. Control. Netw. Syst., 2022

Differentially Private Accelerated Optimization Algorithms.
SIAM J. Optim., 2022

A Stochastic Subgradient Method for Distributionally Robust Non-convex and Non-smooth Learning.
J. Optim. Theory Appl., 2022

Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks.
J. Mach. Learn. Res., 2022

Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Nonconvex Stochastic Optimization: Nonasymptotic Performance Bounds and Momentum-Based Acceleration.
Oper. Res., 2022

Penalized Langevin and Hamiltonian Monte Carlo Algorithms for Constrained Sampling.
CoRR, 2022

Heavy-Tail Phenomenon in Decentralized SGD.
CoRR, 2022

A Variance-Reduced Stochastic Accelerated Primal Dual Algorithm.
CoRR, 2022

HyLo: A Hybrid Low-Rank Natural Gradient Descent Method.
Proceedings of the SC22: International Conference for High Performance Computing, Networking, Storage and Analysis, 2022

SAPD+: An Accelerated Stochastic Method for Nonconvex-Concave Minimax Problems.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

2021
Why random reshuffling beats stochastic gradient descent.
Math. Program., 2021

Decentralized Stochastic Gradient Langevin Dynamics and Hamiltonian Monte Carlo.
J. Mach. Learn. Res., 2021

TENGraD: Time-Efficient Natural Gradient Descent with Exact Fisher-Block Inversion.
CoRR, 2021

Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

The Heavy-Tail Phenomenon in SGD.
Proceedings of the 38th International Conference on Machine Learning, 2021

Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections.
Proceedings of the 38th International Conference on Machine Learning, 2021

L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method.
Proceedings of the 60th IEEE Conference on Decision and Control (CDC), 2021

Fractional moment-preserving initialization schemes for training deep neural networks.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions.
SIAM J. Optim., 2020

Randomness and permutations in coordinate descent methods.
Math. Program., 2020

A Stochastic Subgradient Method for Distributionally Robust Non-Convex Learning.
CoRR, 2020

Fractional moment-preserving initialization schemes for training fully-connected neural networks.
CoRR, 2020

Breaking Reversibility Accelerates Langevin Dynamics for Non-Convex Optimization.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

ASYNC: A Cloud Engine with Asynchrony and History for Distributed Machine Learning.
Proceedings of the 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2020

Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise.
Proceedings of the 37th International Conference on Machine Learning, 2020

DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
Convergence Rate of Incremental Gradient and Incremental Newton Methods.
SIAM J. Optim., 2019

On the Heavy-Tailed Theory of Stochastic Gradient Descent for Deep Neural Networks.
CoRR, 2019

ASYNC: Asynchronous Machine Learning on Distributed Systems.
CoRR, 2019

First Exit Time Analysis of Stochastic Gradient Descent Under Heavy-Tailed Gradient Noise.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

A Universally Optimal Multistage Accelerated Stochastic Gradient Method.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks.
Proceedings of the 36th International Conference on Machine Learning, 2019

Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances.
Proceedings of the 36th International Conference on Machine Learning, 2019

2018
Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods.
SIAM J. Optim., 2018

Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate.
SIAM J. Optim., 2018

Breaking Reversibility Accelerates Langevin Dynamics for Global Non-Convex Optimization.
CoRR, 2018

Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Stochastic Optimization: Non-Asymptotic Performance Bounds and Momentum-Based Acceleration.
CoRR, 2018

Reducing Communication in Proximal Newton Methods for Sparse Least Squares Problems.
Proceedings of the 47th International Conference on Parallel Processing, 2018

2017
Approximating the Real Structured Stability Radius with Frobenius-Norm Bounded Perturbations.
SIAM J. Matrix Anal. Appl., 2017

On the Convergence Rate of Incremental Aggregated Gradient Algorithms.
SIAM J. Optim., 2017

Polynomial root radius optimization with affine constraints.
Math. Program., 2017

Avoiding Communication in Proximal Methods for Convex Optimization Problems.
CoRR, 2017

When Cyclic Coordinate Descent Outperforms Randomized Coordinate Descent.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

A double incremental aggregated gradient method with linear convergence rate for large-scale optimization.
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017

Decentralized computation of effective resistances and acceleration of consensus algorithms.
Proceedings of the 2017 IEEE Global Conference on Signal and Information Processing, 2017

2016
Global convergence rate of incremental aggregated gradient methods for nonsmooth problems.
Proceedings of the 55th IEEE Conference on Decision and Control, 2016

2015
A globally convergent incremental Newton method.
Math. Program., 2015

2013
Fast Approximation of the H∞ Norm via Optimization over Spectral Value Sets.
SIAM J. Matrix Anal. Appl., 2013

2012
Explicit Solutions for Root Optimization of a Polynomial Family With One Affine Constraint.
IEEE Trans. Autom. Control., 2012

Some Regularity Results for the Pseudospectral Abscissa and Pseudospectral Radius of a Matrix.
SIAM J. Optim., 2012

2010
Explicit solutions for root optimization of a polynomial family.
Proceedings of the 49th IEEE Conference on Decision and Control, 2010
