Ok-Topk is a scheme for distributed training with sparse gradients. It integrates a novel sparse allreduce algorithm (with communication volume less than 6k, which is asymptotically optimal) into the decentralized parallel Stochastic Gradient Descent (SGD) optimizer, and its convergence is proven both theoretically and empirically.
☆27, updated Dec 10, 2022
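The gradient sparsification that Ok-Topk builds on can be illustrated with a minimal NumPy sketch: each worker keeps only the k largest-magnitude gradient entries and communicates them as (index, value) pairs. This is a generic top-k illustration with hypothetical function names, not the Ok-Topk sparse allreduce itself.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a gradient; zero the rest.
    Returns the sparse representation as (indices, values)."""
    flat = grad.ravel()
    # argpartition selects the k largest |g_i| without a full sort
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def densify(idx, vals, shape):
    """Rebuild a dense gradient from the sparse (indices, values) pair."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = vals
    return out.reshape(shape)

grad = np.array([0.1, -2.0, 0.03, 1.5, -0.2, 0.7])
idx, vals = topk_sparsify(grad, 2)
dense = densify(idx, vals, grad.shape)
# only the two largest-magnitude entries (-2.0 and 1.5) survive
```

Exchanging these sparse pairs instead of dense tensors is what shrinks communication volume; the challenge Ok-Topk addresses is doing the allreduce over such sparse data efficiently across workers.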
Alternatives and similar repositories for Ok-Topk
Users interested in Ok-Topk are comparing it to the repositories listed below.
- A Sparse-tensor Communication Framework for Distributed Deep Learning (☆13, updated Nov 1, 2021)
- [PACT'24] GraNNDis: a fast and unified distributed graph neural network (GNN) training framework for both full-batch (full-graph) and min… (☆10, updated Aug 13, 2024)
- RPCNIC: A High-Performance and Reconfigurable PCIe-attached RPC Accelerator [HPCA 2025] (☆13, updated Dec 9, 2024)
- We present a set of all-reduce compatible gradient compression algorithms which significantly reduce the communication overhead while mai… (☆10, updated Nov 14, 2021)
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… (☆15, updated Sep 21, 2023)
- C++/MPI proxies for distributed training of deep neural networks. (☆15, updated Jun 18, 2022)
- ☆14, updated Nov 7, 2025
- ☆68, updated Mar 14, 2023
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 (☆149, updated Oct 29, 2024)
- ☆20, updated Jun 29, 2022
- ☆21, updated Jun 6, 2024
- The source code of the paper "Compressed Federated Learning Based on Adaptive Local Differential Privacy". (☆10, updated Oct 23, 2023)
- ☆10, updated Jun 4, 2021
- Artifacts of the VLDB '22 paper "COMET: A Novel Memory-Efficient Deep Learning Training Framework by Using Error-Bounded Lossy Compression" (☆10, updated Aug 2, 2022)
- The (open-source part of) code to reproduce "BPPSA: Scaling Back-propagation by Parallel Scan Algorithm". (☆13, updated Jun 7, 2021)
- ☆10, updated Apr 29, 2023
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. (☆70, updated Mar 20, 2025)
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch (☆25, updated Nov 26, 2019)
- Understanding Top-k Sparsification in Distributed Deep Learning (☆24, updated Nov 15, 2019)
- GRACE: GRAdient ComprEssion for distributed deep learning (☆139, updated Jul 23, 2024)
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation (☆12, updated Jul 13, 2023)
- Take your first step in writing a compiler. Implemented in Rust. (☆16, updated Apr 17, 2023)
- C++17 implementation of einops for libtorch: clear and reliable tensor manipulations with Einstein-like notation (☆11, updated Oct 16, 2023)
- A computation-parallel deep learning architecture. (☆13, updated Sep 25, 2019)
- Code for reproducing experiments performed for Accordion (☆13, updated Jun 11, 2021)
- ☆19, updated Jun 1, 2025
- A paper list on federated learning, focused on system design (☆13, updated Apr 13, 2022)
- ☆12, updated Dec 26, 2024
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining (☆12, updated Dec 4, 2023)
- ☆15, updated May 13, 2022
- A Rust-based benchmark for BlueField SmartNICs. (☆30, updated Jul 5, 2023)
- The official GitHub repository for the paper "Towards timeout-less transport in commodity datacenter networks". (☆16, updated Oct 12, 2021)
- A parallel programming model for online applications with complex synchronization requirements. (☆16, updated Jun 8, 2022)
- Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? (☆15, updated Mar 24, 2022)
- ☆38, updated Oct 11, 2025
- ☆14, updated Jun 4, 2024
- Switch-based Training Acceleration for Machine Learning (SwitchML) (☆16, updated Apr 13, 2021)
- ☆16, updated Apr 22, 2025
- ☆15, updated Jul 13, 2021