Kimon Antonakopoulos

Orcid: 0000-0002-2931-4420

According to our database, Kimon Antonakopoulos authored at least 23 papers between 2019 and 2025.

Bibliography

2025
Generalized Gradient Norm Clipping & Non-Euclidean (L0, L1)-Smoothness.
CoRR, June, 2025

Layer-wise Quantization for Quantized Optimistic Dual Averaging.
CoRR, May, 2025

Multi-Step Alignment as Markov Games: An Optimistic Online Gradient Descent Approach with Convergence Guarantees.
CoRR, February, 2025

Training Deep Learning Models with Norm-Constrained LMOs.
CoRR, February, 2025

2024
On the Generalization of Stochastic Gradient Descent with Momentum.
J. Mach. Learn. Res., 2024

Improving SAM Requires Rethinking its Optimization Formulation.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Universal Gradient Methods for Stochastic Convex Optimization.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Advancing the Lower Bounds: an Accelerated, Stochastic, Second-order Method with Optimal Adaptation to Inexactness.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
Distributed Extra-gradient with Optimal Complexity and Communication Guarantees.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Routing in an Uncertain World: Adaptivity, Efficiency, and Equilibrium.
CoRR, 2022

Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum Minimization.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

No-regret learning in games with noisy feedback: Faster rates and adaptivity via learning rate separation.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Extra-Newton: A First Approach to Noise-Adaptive Accelerated Second-Order Methods.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

UnderGrad: A Universal Black-Box Optimization Method with Almost Dimension-Free Convergence Rate Guarantees.
Proceedings of the International Conference on Machine Learning, 2022

AdaGrad Avoids Saddle Points.
Proceedings of the International Conference on Machine Learning, 2022

2021
Adaptive first-order methods revisited: Convex optimization without Lipschitz requirements.
CoRR, 2021

Fast Routing under Uncertainty: Adaptive Learning in Congestion Games via Exponential Weights.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Sifting through the noise: Universal first-order methods for stochastic variational inequalities.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Adaptive First-Order Methods Revisited: Convex Minimization without Lipschitz Requirements.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Adaptive Extra-Gradient Methods for Min-Max Optimization and Games.
Proceedings of the 9th International Conference on Learning Representations, 2021

Adaptive Learning in Continuous Games: Optimal Regret Bounds and Convergence to Nash Equilibrium.
Proceedings of the Conference on Learning Theory, 2021

2020
Online and stochastic optimization beyond Lipschitz continuity: A Riemannian approach.
Proceedings of the 8th International Conference on Learning Representations, 2020

2019
An adaptive Mirror-Prox method for variational inequalities with singular operators.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019
