Ok-Topk is a scheme for distributed training with sparse gradients. It integrates a novel sparse allreduce algorithm (less than 6k communication volume, which is asymptotically optimal) with the decentralized parallel Stochastic Gradient Descent (SGD) optimizer; its convergence is proven theoretically and validated empirically.
☆27 · Dec 10, 2022 · Updated 3 years ago
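The core idea behind top-k gradient sparsification, as used by Ok-Topk, is that each worker transmits only the k largest-magnitude gradient entries instead of the full dense gradient. A minimal plain-Python sketch (an illustration with hypothetical function names, not the repository's actual implementation):

```python
import heapq

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a flat gradient.

    Returns a dict {index: value}: the sparse message a worker would
    exchange instead of the full dense gradient.
    """
    idx = heapq.nlargest(k, range(len(grad)), key=lambda i: abs(grad[i]))
    return {i: grad[i] for i in idx}

def densify(sparse, n):
    """Rebuild a dense gradient (zeros elsewhere) from the sparse form."""
    out = [0.0] * n
    for i, v in sparse.items():
        out[i] = v
    return out

g = [0.1, -2.0, 0.05, 3.0, -0.2, 0.5]
msg = topk_sparsify(g, k=2)        # keeps the two largest-|g| entries
dense = densify(msg, len(g))       # [0.0, -2.0, 0.0, 3.0, 0.0, 0.0]
```

Transmitting `msg` costs on the order of 2k numbers (indices plus values) per worker; the point of Ok-Topk's sparse allreduce is to keep the *aggregate* volume below 6k even after combining contributions from all workers.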
Alternatives and similar repositories for Ok-Topk
Users that are interested in Ok-Topk are comparing it to the libraries listed below.
- A Sparse-tensor Communication Framework for Distributed Deep Learning · ☆13 · Nov 1, 2021 · Updated 4 years ago
- [PACT'24] GraNNDis: A fast and unified distributed graph neural network (GNN) training framework for both full-batch (full-graph) and min… · ☆10 · Aug 13, 2024 · Updated last year
- C++/MPI proxies for distributed training of deep neural networks. · ☆15 · Jun 18, 2022 · Updated 3 years ago
- RPCNIC: A High-Performance and Reconfigurable PCIe-attached RPC Accelerator [HPCA2025] · ☆14 · Dec 9, 2024 · Updated last year
- Artifacts of VLDB'22 paper "COMET: A Novel Memory-Efficient Deep Learning Training Framework by Using Error-Bounded Lossy Compression" · ☆10 · Aug 2, 2022 · Updated 3 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 · ☆150 · Oct 29, 2024 · Updated last year
- Understanding Top-k Sparsification in Distributed Deep Learning · ☆24 · Nov 15, 2019 · Updated 6 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… · ☆15 · Sep 21, 2023 · Updated 2 years ago
- A computation-parallel deep learning architecture. · ☆13 · Sep 25, 2019 · Updated 6 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. · ☆92 · Nov 23, 2022 · Updated 3 years ago
- C++17 implementation of einops for libtorch: clear and reliable tensor manipulations with Einstein-like notation · ☆11 · Oct 16, 2023 · Updated 2 years ago
- GRACE: GRAdient ComprEssion for distributed deep learning · ☆139 · Jul 23, 2024 · Updated last year
- ☆14 · Jun 4, 2024 · Updated last year
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch · ☆25 · Nov 26, 2019 · Updated 6 years ago
- ☆14 · Nov 7, 2025 · Updated 5 months ago
- ☆12 · Dec 26, 2024 · Updated last year
- ☆69 · Mar 14, 2023 · Updated 3 years ago
- ☆10 · Jun 4, 2021 · Updated 4 years ago
- Code for reproducing experiments performed for Accordion · ☆13 · Jun 11, 2021 · Updated 4 years ago
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation · ☆12 · Jul 13, 2023 · Updated 2 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning · ☆37 · Aug 19, 2019 · Updated 6 years ago
- Take your first step in writing a compiler. Implemented in Rust. · ☆16 · Apr 17, 2023 · Updated 2 years ago
- Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? · ☆15 · Mar 24, 2022 · Updated 4 years ago
- ☆20 · Jun 29, 2022 · Updated 3 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 · ☆58 · Oct 25, 2018 · Updated 7 years ago
- ☆21 · Aug 13, 2024 · Updated last year
- Sample Codes using NVSHMEM on Multi-GPU · ☆30 · Jan 22, 2023 · Updated 3 years ago
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining · ☆12 · Dec 4, 2023 · Updated 2 years ago
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" · ☆37 · Oct 3, 2022 · Updated 3 years ago
- ☆15 · Jul 13, 2021 · Updated 4 years ago
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System · ☆53 · Jul 21, 2025 · Updated 8 months ago
- ☆13 · Nov 25, 2022 · Updated 3 years ago
- ☆10 · Apr 29, 2023 · Updated 2 years ago
- ☆22 · Jun 1, 2025 · Updated 10 months ago
- A Rust-based benchmark for BlueField SmartNICs. · ☆30 · Jul 5, 2023 · Updated 2 years ago
- ☆10 · Jan 31, 2022 · Updated 4 years ago
- All exercises from the Private AI Facebook Scholarship · ☆13 · Jul 21, 2019 · Updated 6 years ago
- Simple example wire app · ☆14 · Oct 14, 2021 · Updated 4 years ago
- [ICML 2025] Efficiently Serving Large Multimodal Models Using EPD Disaggregation · ☆23 · May 29, 2025 · Updated 10 months ago
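Several repositories in the list above (e.g., gTop-k S-SGD and Sparsified SGD with Memory) build on top-k sparsification with error feedback: entries that are dropped from one step's message are accumulated locally and folded back into the next step's gradient. A minimal sketch of one such step, with hypothetical names not taken from any of the listed codebases:

```python
import heapq

def topk_with_error_feedback(grad, memory, k):
    """One step of top-k sparsification with error feedback.

    Entries that are not transmitted are stored in `memory` and added
    back into the next step's gradient, so no update is lost forever.
    """
    corrected = [g + m for g, m in zip(grad, memory)]
    kept = set(heapq.nlargest(k, range(len(corrected)),
                              key=lambda i: abs(corrected[i])))
    sent = [v if i in kept else 0.0 for i, v in enumerate(corrected)]
    residual = [0.0 if i in kept else v for i, v in enumerate(corrected)]
    return sent, residual

grad = [1.0, 0.2, -0.3]
sent, mem = topk_with_error_feedback(grad, [0.0, 0.0, 0.0], k=1)
# sent transmits only the largest-magnitude entry;
# the untransmitted entries stay in local memory for the next step
```

The residual mechanism is what makes aggressive sparsification (k far smaller than the gradient size) converge in practice: small but persistent gradient components eventually accumulate enough magnitude in `memory` to be selected and transmitted.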