Dan Alistarh

ORCID: 0000-0003-3650-940X

Affiliations:
  • IST Austria, Klosterneuburg, Austria
  • MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, USA (former)


According to our database, Dan Alistarh authored at least 187 papers between 2008 and 2024.

Bibliography

2024
Extreme Compression of Large Language Models via Additive Quantization.
CoRR, 2024

RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation.
CoRR, 2024

2023
Distributed Computing Column 86: The Environmental Cost of Our Conferences.
SIGACT News, December, 2023

Distributed Computing Column 87 Recent Advances in Multi-Pass Graph Streaming Lower Bounds.
SIGACT News, September, 2023

The splay-list: a distribution-adaptive concurrent skip-list.
Distributed Comput., September, 2023

Why Extension-Based Proofs Fail.
SIAM J. Comput., August, 2023

A Brief Summary of PODC 2022.
SIGACT News, March, 2023

Distributed Computing Column 86: A Summary of PODC 2022.
SIGACT News, March, 2023

Wait-free approximate agreement on graphs.
Theor. Comput. Sci., February, 2023

CQS: A Formally-Verified Framework for Fair and Abortable Synchronization.
Proc. ACM Program. Lang., 2023

How to Prune Your Language Model: Recovering Accuracy on the "Sparsity May Cry" Benchmark.
CoRR, 2023

ELSA: Partial Weight Freezing for Overhead-Free Sparse Network Deployment.
CoRR, 2023

AsGrad: A Sharp Unified Analysis of Asynchronous-SGD Algorithms.
CoRR, 2023

QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models.
CoRR, 2023

Towards End-to-end 4-Bit Inference on Generative Large Language Models.
CoRR, 2023

Sparse Fine-tuning for Inference Acceleration of Large Language Models.
CoRR, 2023

Efficient Self-Adjusting Search Trees via Lazy Updates.
CoRR, 2023

Wait-free Trees with Asymptotically-Efficient Range Queries.
CoRR, 2023

SPADE: Sparsity-Guided Debugging for Deep Neural Networks.
CoRR, 2023

Scaling Laws for Sparsely-Connected Foundation Models.
CoRR, 2023

Accurate Neural Network Pruning Requires Rethinking Sparse Optimization.
CoRR, 2023

Repeated Game Dynamics in Population Protocols.
CoRR, 2023

QIGen: Generating Efficient Kernels for Quantized Inference on Large Language Models.
CoRR, 2023

Decentralized Learning Dynamics in the Gossip Model.
CoRR, 2023

Error Feedback Can Accurately Compress Preconditioners.
CoRR, 2023

SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression.
CoRR, 2023

Vision Models Can Be Efficiently Specialized via Few-Shot Task-Aware Compression.
CoRR, 2023

SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks.
CoRR, 2023

ZipLM: Hardware-Aware Structured Pruning of Language Models.
CoRR, 2023

Provably-Efficient and Internally-Deterministic Parallel Union-Find.
Proceedings of the 35th ACM Symposium on Parallelism in Algorithms and Architectures, 2023

Fast and Scalable Channels in Kotlin Coroutines.
Proceedings of the 28th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming, 2023

Knowledge Distillation Performs Partial Variance Reduction.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

CAP: Correlation-Aware Pruning for Highly-Accurate Sparse Vision Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

ZipLM: Inference-Aware Structured Pruning of Language Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks at the Edge.
Proceedings of the International Conference on Machine Learning, 2023

Quantized Distributed Training of Large Models with Convergence Guarantees.
Proceedings of the International Conference on Machine Learning, 2023

SparseGPT: Massive Language Models Can be Accurately Pruned in One-Shot.
Proceedings of the International Conference on Machine Learning, 2023

CrAM: A Compression-Aware Minimizer.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

OPTQ: Accurate Quantization for Generative Pre-trained Transformers.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Lincheck: A Practical Framework for Testing Concurrent Data Structures on JVM.
Proceedings of the Computer Aided Verification - 35th International Conference, 2023

2022
Elastic Consistency: A Consistency Criterion for Distributed Optimization.
SIGACT News, 2022

Distributed Computing Column 85 Elastic Consistency: A Consistency Criterion for Distributed Optimization.
SIGACT News, 2022

L-GreCo: An Efficient and General Framework for Layerwise-Adaptive Gradient Compression.
CoRR, 2022

GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers.
CoRR, 2022

oViT: An Accurate Second-Order Pruning Framework for Vision Transformers.
CoRR, 2022

Hybrid Decentralized Optimization: First- and Zeroth-Order Optimizers Can Be Jointly Leveraged For Faster Convergence.
CoRR, 2022

GMP*: Well-Tuned Global Magnitude Pruning Can Outperform Most BERT-Pruning Methods.
CoRR, 2022

CrAM: A Compression-Aware Minimizer.
CoRR, 2022

QuAFL: Federated Averaging Can Be Both Asynchronous and Communication-Efficient.
CoRR, 2022

Scaling the Wild: Decentralizing Hogwild!-style Shared-memory SGD.
CoRR, 2022

Dynamic Averaging Load Balancing on Cycles.
Algorithmica, 2022

Multi-queues can be state-of-the-art priority schedulers.
Proceedings of the PPoPP '22: 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Seoul, Republic of Korea, April 2, 2022

PathCAS: an efficient middle ground for concurrent search data structures.
Proceedings of the PPoPP '22: 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Seoul, Republic of Korea, April 2, 2022

Near-Optimal Leader Election in Population Protocols on Graphs.
Proceedings of the PODC '22: ACM Symposium on Principles of Distributed Computing, Salerno, Italy, July 25, 2022

Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

CGX: adaptive system support for communication-efficient deep learning.
Proceedings of the Middleware '22: 23rd International Middleware Conference, Quebec, QC, Canada, November 7, 2022

SPDY: Accurate Pruning with Speedup Guarantees.
Proceedings of the International Conference on Machine Learning, 2022

The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

How Well Do Sparse ImageNet Models Transfer?
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

2021
Breaking (Global) Barriers in Parallel Stochastic Optimization With Wait-Avoiding Group Averaging.
IEEE Trans. Parallel Distributed Syst., 2021

Distributed Computing Column 84: Perspectives on the Paper "CCS Expressions, Finite State Processes, and Three Problems of Equivalence".
SIGACT News, 2021

Distributed Computing Column 83 Five Ways Not To Fool Yourself: Designing Experiments for Understanding Performance.
SIGACT News, 2021

Distributed Computing Column 82: Distributed Computability: A Few Results Masters Students Should Know.
SIGACT News, 2021

Distributed Computing Column 81: Byzantine Agreement with Less Communication: Recent Advances.
SIGACT News, 2021

NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization.
J. Mach. Learn. Res., 2021

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks.
J. Mach. Learn. Res., 2021

A Formally-Verified Framework for Fair Synchronization in Kotlin Coroutines.
CoRR, 2021

Project CGX: Scalable Deep Learning on Commodity GPUs.
CoRR, 2021

SSSE: Efficiently Erasing Samples from Trained Machine Learning Models.
CoRR, 2021

Efficient Matrix-Free Approximations of Second-Order Information, with Applications to Pruning and Optimization.
CoRR, 2021

Brief Announcement: Fast Graphical Population Protocols.
Proceedings of the 35th International Symposium on Distributed Computing, 2021

Lower Bounds for Shared-Memory Leader Election Under Bounded Write Contention.
Proceedings of the 35th International Symposium on Distributed Computing, 2021

A Scalable Concurrent Algorithm for Dynamic Connectivity.
Proceedings of the SPAA '21: 33rd ACM Symposium on Parallelism in Algorithms and Architectures, 2021

Collecting Coupons is Faster with Friends.
Proceedings of the Structural Information and Communication Complexity, 2021

Comparison Dynamics in Population Protocols.
Proceedings of the PODC '21: ACM Symposium on Principles of Distributed Computing, 2021

Fast Graphical Population Protocols.
Proceedings of the 25th International Conference on Principles of Distributed Systems, 2021

AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Asynchronous Decentralized SGD with Quantized and Local Updates.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Towards Tight Communication Lower Bounds for Distributed Optimisation.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

M-FAC: Efficient Matrix-Free Approximations of Second-Order Information.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Distributed Principal Component Analysis with Limited Communication.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Communication-Efficient Distributed Optimization with Quantized Preconditioners.
Proceedings of the 38th International Conference on Machine Learning, 2021

New Bounds For Distributed Mean Estimation and Variance Reduction.
Proceedings of the 9th International Conference on Learning Representations, 2021

Byzantine-Resilient Non-Convex Stochastic Gradient Descent.
Proceedings of the 9th International Conference on Learning Representations, 2021

Elastic Consistency: A Practical Consistency Model for Distributed Stochastic Gradient Descent.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Compressive Sensing Using Iterative Hard Thresholding With Low Precision Data Representation: Theory and Applications.
IEEE Trans. Signal Process., 2020

Distributed Computing Column 80: Annual Review 2020.
SIGACT News, 2020

Distributed Computing Column 79: Using Round Elimination to Understand Locality.
SIGACT News, 2020

Distributed Computing Column 78: 60 Years of Mastering Concurrent Computing through Sequential Thinking.
SIGACT News, 2020

Distributed Computing Column 77 Consensus Dynamics: An Overview.
SIGACT News, 2020

Improved Communication Lower Bounds for Distributed Optimisation.
CoRR, 2020

Fast General Distributed Transactions with Opacity using Global Time.
CoRR, 2020

Stochastic Gradient Langevin with Delayed Gradients.
CoRR, 2020

Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging.
CoRR, 2020

WoodFisher: Efficient second-order approximations for model compression.
CoRR, 2020

Robust Comparison in Population Protocols.
CoRR, 2020

Relaxed Scheduling for Scalable Belief Propagation.
CoRR, 2020

Distributed Mean Estimation with Optimal Error Bounds.
CoRR, 2020

Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent.
CoRR, 2020

Analysis and Evaluation of Non-Blocking Interpolation Search Trees.
CoRR, 2020

Memory Tagging: Minimalist Synchronization for Scalable Concurrent Data Structures.
Proceedings of the SPAA '20: 32nd ACM Symposium on Parallelism in Algorithms and Architectures, 2020

Taming unbalanced training workloads in deep learning with partial collective operations.
Proceedings of the PPoPP '20: 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2020

Testing concurrency on the JVM with lincheck.
Proceedings of the PPoPP '20: 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2020

Non-blocking interpolation search trees with doubly-logarithmic running time.
Proceedings of the PPoPP '20: 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2020

Brief Announcement: Why Extension-Based Proofs Fail.
Proceedings of the PODC '20: ACM Symposium on Principles of Distributed Computing, 2020

WoodFisher: Efficient Second-Order Approximation for Neural Network Compression.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Adaptive Gradient Quantization for Data-Parallel SGD.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Scalable Belief Propagation via Relaxed Scheduling.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Inducing and Exploiting Activation Sparsity for Fast Inference on Deep Neural Networks.
Proceedings of the 37th International Conference on Machine Learning, 2020

On the Sample Complexity of Adversarial Multi-Source PAC Learning.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
Distributed Computing Column 76: Annual Review 2019.
SIGACT News, 2019

PopSGD: Decentralized Stochastic Gradient Descent in the Population Model.
CoRR, 2019

Performance Prediction for Coarse-Grained Locking.
CoRR, 2019

SysML: The New Frontier of Machine Learning Systems.
CoRR, 2019

Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers.
Proceedings of the 31st ACM on Symposium on Parallelism in Algorithms and Architectures, 2019

SparCML: high-performance sparse communication for machine learning.
Proceedings of the International Conference for High Performance Computing, 2019

Lock-free channels for programming via communicating sequential processes: poster.
Proceedings of the 24th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2019

In Search of the Fastest Concurrent Union-Find Algorithm.
Proceedings of the 23rd International Conference on Principles of Distributed Systems, 2019

Powerset Convolutional Neural Networks.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Distributed Learning over Unreliable Networks.
Proceedings of the 36th International Conference on Machine Learning, 2019

Scalable FIFO Channels for Programming via Communicating Sequential Processes.
Proceedings of the Euro-Par 2019: Parallel Processing, 2019

2018
ThreadScan: Automatic and Scalable Memory Reclamation.
ACM Trans. Parallel Comput., 2018

Recent Algorithmic Advances in Population Protocols.
SIGACT News, 2018

Inherent limitations of hybrid transactional memory.
Distributed Comput., 2018

Communication-efficient randomized consensus.
Distributed Comput., 2018

SparCML: High-Performance Sparse Communication for Machine Learning.
CoRR, 2018

Compressive Sensing with Low Precision Data Representation: Radio Astronomy and Beyond.
CoRR, 2018

DataBright: Towards a Global Exchange for Decentralized Data Ownership and Trusted Computation.
CoRR, 2018

The Transactional Conflict Problem.
Proceedings of the 30th on Symposium on Parallelism in Algorithms and Architectures, 2018

Distributionally Linearizable Data Structures.
Proceedings of the 30th on Symposium on Parallelism in Algorithms and Architectures, 2018

Space-Optimal Majority in Population Protocols.
Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 2018

Fast Quantized Arithmetic on x86: Trading Compute for Data Movement.
Proceedings of the 2018 IEEE International Workshop on Signal Processing Systems, 2018

The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory.
Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing, 2018

Relaxed Schedulers Can Efficiently Parallelize Iterative Algorithms.
Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing, 2018

A Brief Tutorial on Distributed and Concurrent Machine Learning.
Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing, 2018

Session details: Session 1B: Shared Memory Theory.
Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing, 2018

Brief Announcement: Performance Prediction for Coarse-Grained Locking.
Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing, 2018

The Convergence of Sparsified Gradient Methods.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Byzantine Stochastic Gradient Descent.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Model compression via distillation and quantization.
Proceedings of the 6th International Conference on Learning Representations, 2018

Synchronous Multi-GPU Training for Deep Learning with Low-Precision Communications: An Empirical Study.
Proceedings of the 21st International Conference on Extending Database Technology, 2018

Gradient compression for communication-limited convex optimization.
Proceedings of the 57th IEEE Conference on Decision and Control, 2018

2017
Lease/Release: Architectural Support for Scaling Contended Data Structures.
ACM Trans. Parallel Comput., 2017

Time-Space Trade-offs in Population Protocols.
Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, 2017

The Power of Choice in Priority Scheduling.
Proceedings of the ACM Symposium on Principles of Distributed Computing, 2017

QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning.
Proceedings of the 34th International Conference on Machine Learning, 2017

FPGA-Accelerated Dense Linear Machine Learning: A Precision-Convergence Trade-Off.
Proceedings of the 25th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines, 2017

Forkscan: Conservative Memory Reclamation for Modern Operating Systems.
Proceedings of the Twelfth European Conference on Computer Systems, 2017

Robust Detection in Leak-Prone Population Protocols.
Proceedings of the DNA Computing and Molecular Programming - 23rd International Conference, 2017

Towards unlicensed cellular networks in TV white spaces.
Proceedings of the 13th International Conference on emerging Networking EXperiments and Technologies, 2017

2016
Are Lock-Free Concurrent Algorithms Practically Wait-Free?
J. ACM, 2016

ZipML: An End-to-end Bitwise Framework for Dense Generalized Linear Models.
CoRR, 2016

QSGD: Randomized Quantization for Communication-Optimal Stochastic Gradient Descent.
CoRR, 2016

2015
The Renaming Problem: Recent Developments and Open Questions.
Bull. EATCS, 2015

Polylogarithmic-Time Leader Election in Population Protocols Using Polylogarithmic States.
CoRR, 2015

A High-Radix, Low-Latency Optical Switch for Data Centers.
Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, 2015

The SprayList: a scalable relaxed priority queue.
Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2015

Lock-Free Algorithms under Stochastic Schedulers.
Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing, 2015

How To Elect a Leader Faster than a Tournament.
Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing, 2015

Fast and Exact Majority in Population Protocols.
Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing, 2015

Streaming Min-max Hypergraph Partitioning.
Proceedings of the Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, 2015

Polylogarithmic-Time Leader Election in Population Protocols.
Proceedings of the Automata, Languages, and Programming - 42nd International Colloquium, 2015

2014
Tight Bounds for Asynchronous Renaming.
J. ACM, 2014

Dynamic Task Allocation in Asynchronous Shared Memory.
Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, 2014

Balls-into-leaves: sub-logarithmic renaming in synchronous message-passing systems.
Proceedings of the ACM Symposium on Principles of Distributed Computing, 2014

Brief announcement: are lock-free concurrent algorithms practically wait-free?
Proceedings of the ACM Symposium on Principles of Distributed Computing, 2014

The LevelArray: A Fast, Practical Long-Lived Renaming Algorithm.
Proceedings of the IEEE 34th International Conference on Distributed Computing Systems, 2014

StackTrack: an automated transactional approach to concurrent memory reclamation.
Proceedings of the Ninth Eurosys Conference 2014, 2014

Distributed Algorithms.
Proceedings of the Computing Handbook, 2014

2013
Randomized loose renaming in O(log log n) time.
Proceedings of the ACM Symposium on Principles of Distributed Computing, 2013

2012
Randomized versus Deterministic Implementations of Concurrent Data Structures.
PhD thesis, 2012

Generating Fast Indulgent Algorithms.
Theory Comput. Syst., 2012

Of Choices, Failures and Asynchrony: The Many Faces of Set Agreement.
Algorithmica, 2012

On the cost of composing shared-memory algorithms.
Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures, 2012

Early Deciding Synchronous Renaming in O(log f) Rounds or Less.
Proceedings of the Structural Information and Communication Complexity, 2012

How to Allocate Tasks Asynchronously.
Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science, 2012

2011
Sub-logarithmic Test-and-Set against a Weak Adversary.
Proceedings of the Distributed Computing - 25th International Symposium, 2011

Optimal-time adaptive strong renaming, with applications to counting.
Proceedings of the 30th Annual ACM Symposium on Principles of Distributed Computing, 2011

The Complexity of Renaming.
Proceedings of the IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011

2010
Brief Announcement: New Bounds for Partially Synchronous Set Agreement.
Proceedings of the Distributed Computing, 24th International Symposium, 2010

Fast Randomized Test-and-Set and Renaming.
Proceedings of the Distributed Computing, 24th International Symposium, 2010

Securing every bit: authenticated broadcast in radio networks.
Proceedings of the SPAA 2010: 22nd Annual ACM Symposium on Parallelism in Algorithms and Architectures, 2010

How Efficient Can Gossip Be? (On the Cost of Resilient Information Exchange).
Proceedings of the Automata, Languages and Programming, 37th International Colloquium, 2010

2008
How to Solve Consensus in the Smallest Window of Synchrony.
Proceedings of the Distributed Computing, 22nd International Symposium, 2008

