Here is a list of our publications, grouped (roughly) by topic.
Large-Scale Distributed Machine Learning
Asynchronous Decentralized SGD with Quantized and Local Updates
Giorgi Nadiradze, Amirmojtaba Sabour, Peter Davies, Shigang Li, Dan Alistarh.
In Neural Information Processing Systems (NeurIPS 2021).
Distributed Principal Component Analysis with Limited Communication
Foivos Alimisis, Peter Davies, Bart Vandereycken, Dan Alistarh.
In Neural Information Processing Systems (NeurIPS 2021).
Towards Tight Communication Lower Bounds for Distributed Optimization
Janne H. Korhonen, Dan Alistarh.
In Neural Information Processing Systems (NeurIPS 2021).
New Bounds for Distributed Mean Estimation and Variance Reduction
Peter Davies, Vijaykrishna Gurunanthan, Niusha Moshrefi, Saleh Ashkboos, Dan Alistarh.
In the International Conference on Learning Representations (ICLR 2021).
Byzantine-Resilient Non-Convex Stochastic Gradient Descent
Dan Alistarh, Zeyuan Allen-Zhu, Faeze Ebrahimian, Jerry Li.
In the International Conference on Learning Representations (ICLR 2021).
Communication-Efficient Distributed Optimization with Quantized Preconditioners
Foivos Alimisis, Peter Davies, Dan Alistarh.
In the International Conference on Machine Learning (ICML 2021).
Elastic Consistency: A Practical Consistency Model for Distributed SGD
Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh.
In the AAAI Conference on Artificial Intelligence (AAAI 2021).
Taming Unbalanced Training Workloads in Deep Learning with Partial Collectives
with Shigang Li, Tal Ben-Nun, Salvatore Di Girolamo, and Torsten Hoefler
In PPoPP 2020. Shortlisted for the Best Paper Award.
Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging
Shigang Li, Tal Ben-Nun, Dan Alistarh, Salvatore Di Girolamo, Nikoli Dryden, Torsten Hoefler.
Accepted to IEEE Transactions on Parallel and Distributed Systems (TPDS). Preprint: CoRR abs/2005.00124 (2020).
SparCML: High-Performance Sparse Communication for Machine Learning
with Cedric Renggli, Saleh Ashkboos, Mehdi Aghagholzadeh, and Torsten Hoefler
In Supercomputing (SC) 2019.
On the Sample Complexity of Adversarial Multi-Source PAC Learning
Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph H. Lampert.
In the International Conference on Machine Learning (ICML 2020). Preprint: CoRR abs/2002.10384 (2020).
Model Compression
Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste.
In the Journal of Machine Learning Research (JMLR), 2021.
Basis for a Tutorial at the International Conference on Machine Learning (ICML), 2021.
M-FAC: Efficient Matrix-Free Approximations of Second-Order Information
Elias Frantar, Eldar Kurtic, Dan Alistarh.
In Neural Information Processing Systems (NeurIPS 2021).
AC/DC: Alternating Compressed/Decompressed Training of Deep Neural Networks
Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh.
In Neural Information Processing Systems (NeurIPS 2021).
WoodFisher: Efficient Second-order Approximation for Neural Network Compression
Sidak Pal Singh, Dan Alistarh.
In Neural Information Processing Systems (NeurIPS 2020).
Distribution-Adaptive Data Structures
The Splay-List: A Distribution-Adaptive Concurrent Skip-List
Vitaly Aksenov, Dan Alistarh, Alexandra Drozdova, Amirkeivan Mohtashami.
In DISC 2020, pages 3:1-3:18.
Invited to Special Issue of “Distributed Computing” for DISC 2020.
Non-Blocking Concurrent Interpolation Search
with Trevor Brown and Aleksandar Prokopec
In PPoPP 2020.
Best Paper Award.
[full version]
In Search of the Fastest Concurrent Union-Find Algorithm
with Alexander Fedorov and Nikita Koval
In OPODIS 2019.
Best Paper Award.
Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers
with Giorgi Nadiradze and Nikita Koval
In SPAA 2019.