Amir Daneshmand

According to our database, Amir Daneshmand authored at least 12 papers between 2014 and 2022.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2022
Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation.
SIAM J. Optim., 2022

2021
Newton Method over Networks is Fast up to the Statistical Precision.
Proceedings of the 38th International Conference on Machine Learning, 2021

An Accelerated Second-Order Method for Distributed Stochastic Optimization.
Proceedings of the 60th IEEE Conference on Decision and Control (CDC), 2021

2020
Second-Order Guarantees of Distributed Gradient Algorithms.
SIAM J. Optim., 2020

2019
Decentralized Dictionary Learning Over Time-Varying Digraphs.
J. Mach. Learn. Res., 2019

Convergence Rate of Distributed Optimization Algorithms Based on Gradient Tracking.
CoRR, 2019

2018
Second-order Guarantees of Gradient Algorithms over Networks.
Proceedings of the 56th Annual Allerton Conference on Communication, Control, and Computing, 2018

2017
D2L: Decentralized dictionary learning over dynamic networks.
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017

2016
Distributed dictionary learning.
Proceedings of the 50th Asilomar Conference on Signals, Systems and Computers, 2016

2015
Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization.
IEEE Trans. Signal Process., 2015

2014
Hybrid Random/Deterministic Parallel Algorithms for Nonconvex Big Data Optimization.
CoRR, 2014

Flexible selective parallel algorithms for big data optimization.
Proceedings of the 48th Asilomar Conference on Signals, Systems and Computers, 2014
