Quoc Tran-Dinh

ORCID: 0000-0002-1077-2579

According to our database, Quoc Tran-Dinh authored at least 60 papers between 2013 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
From Halpern's fixed-point iterations to Nesterov's accelerated interpretations for root-finding problems.
Comput. Optim. Appl., January, 2024

Shuffling Momentum Gradient Algorithm for Convex Optimization.
CoRR, 2024

2023
A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates.
Optim. Methods Softw., January, 2023

2022
New Primal-Dual Algorithms for a Class of Nonsmooth and Nonlinear Convex-Concave Minimax Problems.
SIAM J. Optim., 2022

A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm.
Optim. Lett., 2022

A hybrid stochastic optimization framework for composite nonconvex optimization.
Math. Program., 2022

A New Homotopy Proximal Variable-Metric Framework for Composite Convex Minimization.
Math. Oper. Res., 2022

A Newton Frank-Wolfe method for constrained self-concordant minimization.
J. Glob. Optim., 2022

2021
A Unified Convergence Analysis for Shuffling-Type Gradient Methods.
J. Mach. Learn. Res., 2021

Identifying Heterogeneous Effect Using Latent Supervised Clustering With Adaptive Fusion.
J. Comput. Graph. Stat., 2021

Federated Learning with Randomized Douglas-Rachford Splitting Methods.
CoRR, 2021

A Lyapunov function for the combined system-optimizer dynamics in inexact model predictive control.
Autom., 2021

Minimization of a class of rare event probabilities and buffered probabilities of exceedance.
Ann. Oper. Res., 2021

Improved Complexity Of Trust-Region Optimization For Zeroth-Order Stochastic Oracles with Adaptive Sampling.
Proceedings of the Winter Simulation Conference, 2021

FedDR - Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

SMG: A Shuffling Gradient-Based Method with Momentum.
Proceedings of the 38th International Conference on Machine Learning, 2021

Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates.
SIAM J. Optim., 2020

An adaptive primal-dual framework for nonsmooth convex minimization.
Math. Program. Comput., 2020

An Inexact Interior-Point Lagrangian Decomposition Algorithm with Inexact Oracles.
J. Optim. Theory Appl., 2020

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization.
J. Mach. Learn. Res., 2020

Shuffling Gradient-Based Methods with Momentum.
CoRR, 2020

Convergence Analysis of Homotopy-SGD for non-convex optimization.
CoRR, 2020

Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise.
CoRR, 2020

Composite convex optimization with global and local inexact oracles.
Comput. Optim. Appl., 2020

Hybrid Variance-Reduced SGD Algorithms For Minimax Problems with Nonconvex-Linear Function.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Stochastic Gauss-Newton Algorithms for Nonconvex Compositional Optimization.
Proceedings of the 37th International Conference on Machine Learning, 2020

Transferring Optimality Across Data Distributions via Homotopy Methods.
Proceedings of the 8th International Conference on Learning Representations, 2020

A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
Sieve-SDP: a simple facial reduction algorithm to preprocess semidefinite programs.
Math. Program. Comput., 2019

Self-concordant inclusions: a unified framework for path-following generalized Newton-type algorithms.
Math. Program., 2019

Generalized self-concordant functions: a recipe for Newton-type methods.
Math. Program., 2019

Robust multicategory support matrix machines.
Math. Program., 2019

Using positive spanning sets to achieve stationarity with the Boosted DC Algorithm.
CoRR, 2019

A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization.
CoRR, 2019

Proximal alternating penalty algorithms for nonsmooth constrained convex optimization.
Comput. Optim. Appl., 2019

Non-stationary Douglas-Rachford and alternating direction method of multipliers: adaptive step-sizes and convergence.
Comput. Optim. Appl., 2019

Contraction Estimates for Abstract Real-Time Algorithms for NMPC.
Proceedings of the 58th IEEE Conference on Decision and Control, 2019

2018
A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization.
SIAM J. Optim., 2018

A Single-Phase, Proximal Path-Following Framework.
Math. Oper. Res., 2018

Non-Ergodic Alternating Proximal Augmented Lagrangian Algorithms with Optimal Rates.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

2017
Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization.
Optim. Lett., 2017

Adaptive smoothing algorithms for nonsmooth composite convex minimization.
Comput. Optim. Appl., 2017

Smooth Primal-Dual Coordinate Descent Algorithms for Nonsmooth Convex Optimization.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

2016
Simplicial Nonnegative Matrix Tri-factorization: Fast Guaranteed Parallel Algorithm.
Proceedings of the Neural Information Processing - 23rd International Conference, 2016

Frank-Wolfe works for non-Lipschitz continuous gradient objectives: Scalable poisson phase retrieval.
Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, 2016

Convex Block-sparse Linear Regression with Expanders - Provably.
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, 2016

2015
Composite self-concordant minimization.
J. Mach. Learn. Res., 2015

Structured Sparsity: Discrete and Convex approaches.
CoRR, 2015

A Universal Primal-Dual Convex Optimization Framework.
Proceedings of the Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, 2015

Composite Convex Minimization Involving Self-concordant-Like Cost Functions.
Proceedings of the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences, 2015

A primal-dual framework for mixtures of regularizers.
Proceedings of the 23rd European Signal Processing Conference, 2015

WASP: Scalable Bayes via barycenters of subset posteriors.
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, 2015

2014
Convexity in Source Separation: Models, geometry, and algorithms.
IEEE Signal Process. Mag., 2014

An Inexact Proximal Path-Following Algorithm for Constrained Convex Minimization.
SIAM J. Optim., 2014

Computational Complexity of Inexact Gradient Augmented Lagrangian Methods: Application to Constrained MPC.
SIAM J. Control. Optim., 2014

Constrained convex minimization via model-based excessive gap.
Proceedings of the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, 2014

Barrier smoothing for nonsmooth convex minimization.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2014

Scalable Sparse Covariance Estimation via Self-Concordance.
Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014

2013
A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions.
Proceedings of the 30th International Conference on Machine Learning, 2013
