Ok-Topk is a scheme for distributed training with sparse gradients. It integrates a novel sparse allreduce algorithm (with less than 6k communication volume, which is asymptotically optimal) into the decentralized parallel Stochastic Gradient Descent (SGD) optimizer, and its convergence is proven theoretically and validated empirically.
☆27 · Updated Dec 10, 2022
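At the heart of schemes like Ok-Topk is Top-k gradient sparsification: each worker keeps only the k largest-magnitude gradient entries and exchanges them as a sparse (index, value) pair. The sketch below illustrates that selection step only — it is a minimal NumPy illustration with hypothetical names, not Ok-Topk's actual allreduce algorithm:

```python
import numpy as np

def top_k_sparsify(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of a gradient vector.

    Returns (indices, values) -- the sparse representation a worker
    would contribute to a sparse allreduce. Illustrative sketch, not
    the Ok-Topk implementation.
    """
    flat = grad.ravel()
    # argpartition finds the k largest |g_i| in O(n) without a full sort
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

# Toy example: 8-element gradient, keep the top 3 by magnitude
g = np.array([0.1, -2.0, 0.05, 3.0, -0.2, 0.01, 1.5, -0.3])
idx, vals = top_k_sparsify(g, 3)

# Densify for inspection: only the selected entries survive
dense = np.zeros_like(g)
dense[idx] = vals
```

With k much smaller than the gradient dimension, only the selected indices and values travel over the network, which is where the sub-6k communication volume bound applies.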
Alternatives and similar repositories for Ok-Topk
Users that are interested in Ok-Topk are comparing it to the libraries listed below.
- [PACT'24] GraNNDis: a fast and unified distributed graph neural network (GNN) training framework for both full-batch (full-graph) and min… (☆10, updated Aug 13, 2024)
- A set of all-reduce compatible gradient compression algorithms that significantly reduce communication overhead while mai… (☆10, updated Nov 14, 2021)
- C++/MPI proxies for distributed training of deep neural networks (☆15, updated Jun 18, 2022)
- Source code of the paper "Compressed Federated Learning Based on Adaptive Local Differential Privacy" (☆10, updated Oct 23, 2023)
- RPCNIC: A High-Performance and Reconfigurable PCIe-attached RPC Accelerator [HPCA 2025] (☆14, updated Dec 9, 2024)
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models (☆70, updated Mar 20, 2025)
- Artifacts of the VLDB'22 paper "COMET: A Novel Memory-Efficient Deep Learning Training Framework by Using Error-Bounded Lossy Compression" (☆10, updated Aug 2, 2022)
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 (☆149, updated Oct 29, 2024)
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… (☆15, updated Sep 21, 2023)
- A computation-parallel deep learning architecture (☆13, updated Sep 25, 2019)
- Magicube: a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) for deep learning on Tensor Cores (☆92, updated Nov 23, 2022)
- C++17 implementation of einops for libtorch: clear and reliable tensor manipulations with Einstein-like notation (☆11, updated Oct 16, 2023)
- GRACE: GRAdient ComprEssion for distributed deep learning (☆139, updated Jul 23, 2024)
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch (☆25, updated Nov 26, 2019)
- NetworkX clone in JavaScript (☆13, updated Oct 17, 2016)
- PyDTNN: Python Distributed Training of Neural Networks (☆14, updated Feb 20, 2026)
- gTop-k S-SGD: a communication-efficient distributed synchronous SGD algorithm for deep learning (☆37, updated Aug 19, 2019)
- Take your first step in writing a compiler, implemented in Rust (☆16, updated Apr 17, 2023)
- Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? (☆15, updated Mar 24, 2022)
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 (☆58, updated Oct 25, 2018)
- [ICDCS 2023] DeAR: accelerating distributed deep learning with fine-grained all-reduce pipelining (☆12, updated Dec 4, 2023)
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" (☆37, updated Oct 3, 2022)
- A paper list on federated learning, focused on system design (☆13, updated Apr 13, 2022)
- [HPCA'24] Smart-Infinity: fast large language model training using near-storage processing on a real system (☆53, updated Jul 21, 2025)
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" (☆32, updated Nov 20, 2020)
- A Rust-based benchmark for BlueField SmartNICs (☆30, updated Jul 5, 2023)
- All exercises from the Private AI Facebook Scholarship (☆13, updated Jul 21, 2019)
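Several of the gradient-compression projects above (e.g. "Sparsified SGD with Memory" and "Deep Gradient Compression") rely on error feedback: whatever the compressor drops is accumulated locally and added back to the next gradient, so no gradient mass is permanently lost. A minimal single-worker sketch of that idea — the class and parameter names are illustrative, not taken from any of these repositories:

```python
import numpy as np

class ErrorFeedbackSGD:
    """Error-feedback loop: the residual dropped by top-k compression
    is remembered and added back before the next compression step.
    Illustrative sketch, not any repository's actual implementation."""

    def __init__(self, dim: int, k: int, lr: float = 0.1):
        self.memory = np.zeros(dim)  # accumulated compression error
        self.k = k
        self.lr = lr

    def step(self, param: np.ndarray, grad: np.ndarray) -> np.ndarray:
        corrected = grad + self.memory          # add back past residual
        # top-k compression of the corrected gradient
        idx = np.argpartition(np.abs(corrected), -self.k)[-self.k:]
        sparse = np.zeros_like(corrected)
        sparse[idx] = corrected[idx]
        self.memory = corrected - sparse        # remember what was dropped
        return param - self.lr * sparse         # apply the sparse update

# One step on a 4-dim parameter, keeping only the top-1 coordinate
opt = ErrorFeedbackSGD(dim=4, k=1)
w = np.zeros(4)
w = opt.step(w, np.array([1.0, 0.5, -0.2, 0.1]))
```

After this step only the largest coordinate is updated, while the smaller entries survive in `opt.memory` and will influence later steps — the mechanism the error-feedback convergence analyses in those papers hinge on.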