Adrien B. Taylor

ORCID: 0000-0003-2509-1765

According to our database, Adrien B. Taylor authored at least 33 papers between 2017 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization.
SIAM J. Optim., September 2023

An optimal gradient method for smooth strongly convex minimization.
Math. Program., May 2023

Principled analyses and design of first-order methods with inexact proximal operators.
Math. Program., 2023

Counter-Examples in First-Order Optimization: A Constructive Approach.
IEEE Control. Syst. Lett., 2023

Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity.
Proceedings of the International Conference on Machine Learning, 2023

On Fundamental Proof Structures in First-Order Optimization.
Proceedings of the 62nd IEEE Conference on Decision and Control, 2023

2022
A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives.
Open J. Math. Optim., March 2022

Convergence of a Constrained Vector Extrapolation Scheme.
SIAM J. Math. Data Sci., 2022

Optimal complexity and certification of Bregman first-order methods.
Math. Program., 2022

On the oracle complexity of smooth strongly convex minimization.
J. Complex., 2022

PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python.
CoRR, 2022

PROX-QP: Yet another Quadratic Programming Solver for Robotics and beyond.
Proceedings of Robotics: Science and Systems XVIII, New York City, NY, USA, June 27, 2022

Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities.
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022

Fast Stochastic Composite Minimization and an Accelerated Frank-Wolfe Algorithm under Parallelization.
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022

Super-Acceleration with Cyclical Step-sizes.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
Acceleration Methods.
Found. Trends Optim., 2021

A Continuized View on Nesterov Acceleration for Stochastic Gradient Descent and Randomized Gossip.
CoRR, 2021

A Continuized View on Nesterov Acceleration.
CoRR, 2021

An optimal gradient method for smooth (possibly strongly) convex minimization.
CoRR, 2021

Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms.
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021

2020
Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection.
SIAM J. Optim., 2020

Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation.
SIAM J. Optim., 2020

Efficient first-order methods for convex minimization: a constructive approach.
Math. Program., 2020

Convergence of Constrained Anderson Acceleration.
CoRR, 2020

Complexity Guarantees for Polyak Steps with Momentum.
Proceedings of the Conference on Learning Theory, 2020

2019
Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions.
Proceedings of the Conference on Learning Theory, 2019

2018
Exact Worst-Case Convergence Rates of the Proximal Gradient Method for Composite Convex Minimization.
J. Optim. Theory Appl., 2018

Lyapunov Functions for First-Order Methods: Tight Automated Convergence Guarantees.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
Convex interpolation and performance estimation of first-order methods for convex optimization.
PhD thesis, 2017

Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization.
SIAM J. Optim., 2017

On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions.
Optim. Lett., 2017

Smooth strongly convex interpolation and exact worst-case performance of first-order methods.
Math. Program., 2017

Performance estimation toolbox (PESTO): Automated worst-case analysis of first-order optimization methods.
Proceedings of the 56th IEEE Annual Conference on Decision and Control, 2017

