Zachary Charles

ORCID: 0000-0001-8997-874X

Affiliations:
  • Google Research
  • University of Wisconsin-Madison, Department of Electrical and Computer Engineering, Madison, WI, USA (PhD 2017)
  • University of Pennsylvania, Department of Mathematics, Philadelphia, PA, USA


According to our database, Zachary Charles authored at least 41 papers between 2013 and 2024.

Bibliography

2024
Leveraging Function Space Aggregation for Federated Learning at Scale.
Trans. Mach. Learn. Res., 2024

Fine-Tuning Large Language Models with User-Level Differential Privacy.
CoRR, 2024

FAX: Scalable and Differentiable Federated Primitives in JAX.
CoRR, 2024

2023
Convergence of Gradient Descent with Linearly Correlated Noise and Applications to Differentially Private Learning.
CoRR, 2023

Federated Automatic Differentiation.
CoRR, 2023

Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy.
Advances in Neural Information Processing Systems 36 (NeurIPS), 2023

Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning.
Advances in Neural Information Processing Systems 36 (NeurIPS), 2023

A Rate-Distortion View on Model Updates.
Proceedings of the First Tiny Papers Track at ICLR 2023, 2023

2022
Federated Select: A Primitive for Communication- and Memory-Efficient Federated Learning.
CoRR, 2022

Motley: Benchmarking Heterogeneity and Personalization in Federated Learning.
CoRR, 2022

Optimizing the Communication-Accuracy Trade-off in Federated Learning with Rate-Distortion Theory.
CoRR, 2022

Does Federated Dropout actually work?
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022

Iterated Vector Fields and Conservatism, with Applications to Federated Learning.
Proceedings of the International Conference on Algorithmic Learning Theory, 2022

2021
Advances and Open Problems in Federated Learning.
Found. Trends Mach. Learn., 2021

A Field Guide to Federated Optimization.
CoRR, 2021

Local Adaptivity in Federated Learning: Convergence and Consistency.
CoRR, 2021

On Large-Cohort Training for Federated Learning.
Advances in Neural Information Processing Systems 34 (NeurIPS), 2021

Adaptive Federated Optimization.
Proceedings of the 9th International Conference on Learning Representations, 2021

Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
On the Outsized Importance of Learning Rates in Local Update Methods.
CoRR, 2020

2019
Advances and Open Problems in Federated Learning.
CoRR, 2019

Improving the convergence of SGD through adaptive batch sizes.
CoRR, 2019

Convergence and Margin of Adversarial Training on Separable Data.
CoRR, 2019

ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding.
CoRR, 2019

DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation.
Advances in Neural Information Processing Systems 32 (NeurIPS), 2019

Does Data Augmentation Lead to Positive Margin?
Proceedings of the 36th International Conference on Machine Learning, 2019

A Geometric Perspective on the Transferability of Adversarial Directions.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

2018
Generating random factored ideals in number fields.
Math. Comput., 2018

Exploiting algebraic structure in global optimization and the Belgian chocolate problem.
J. Glob. Optim., 2018

Gradient Coding via the Stochastic Block Model.
CoRR, 2018

DRACO: Robust Distributed Training via Redundant Gradients.
CoRR, 2018

ATOMO: Communication-efficient Learning via Atomic Sparsification.
Advances in Neural Information Processing Systems 31 (NeurIPS), 2018

Gradient Coding Using the Stochastic Block Model.
Proceedings of the 2018 IEEE International Symposium on Information Theory, 2018

Distributions of the Number of Solutions to the Network Power Flow Equations.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2018

DRACO: Byzantine-resilient Distributed Training via Redundant Gradients.
Proceedings of the 35th International Conference on Machine Learning, 2018

Stability and Generalization of Learning Algorithms that Converge to Global Optima.
Proceedings of the 35th International Conference on Machine Learning, 2018

Sparse Subspace Clustering with Missing and Corrupted Data.
Proceedings of the 2018 IEEE Data Science Workshop, 2018

2017
Approximate Gradient Coding via Sparse Random Graphs.
CoRR, 2017

Efficiently finding all power flow solutions to tree networks.
Proceedings of the 55th Annual Allerton Conference on Communication, Control, and Computing, 2017

2013
Nonpositive Eigenvalues of Hollow, Symmetric, Nonnegative Matrices.
SIAM J. Matrix Anal. Appl., 2013

Nonpositive eigenvalues of the adjacency matrix and lower bounds for Laplacian eigenvalues.
Discret. Math., 2013
