ParCIS / Ok-Topk
Ok-Topk is a scheme for distributed training with sparse gradients. It integrates a novel sparse allreduce algorithm (with less than 6k communication volume, which is asymptotically optimal) into a decentralized parallel Stochastic Gradient Descent (SGD) optimizer, and its convergence is proven both theoretically and empirically.
☆26 · Updated 2 years ago
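For intuition, here is a minimal NumPy sketch of the plain Top-k sparsification primitive that schemes like Ok-Topk build on: each worker keeps only its k largest-magnitude gradient entries before communicating. This is not the Ok-Topk sparse allreduce itself (which additionally bounds per-node communication below 6k values); all function names here are hypothetical, not from the Ok-Topk codebase.

```python
# Illustrative sketch of Top-k gradient sparsification (not Ok-Topk's
# actual allreduce). Worker communication is simulated in one process.
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int):
    """Return indices and values of the k largest-magnitude entries."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def dense_from_sparse(idx, vals, n):
    """Scatter a sparse (indices, values) piece back into a dense vector."""
    out = np.zeros(n)
    out[idx] = vals
    return out

rng = np.random.default_rng(0)
n, k, workers = 1000, 10, 4
grads = [rng.standard_normal(n) for _ in range(workers)]

# Each worker sparsifies locally; summing the sparse pieces mimics the
# effect of a sparse allreduce across all workers.
reduced = sum(dense_from_sparse(*topk_sparsify(g, k), n) for g in grads)
avg_grad = reduced / workers  # sparse averaged gradient for the SGD update
```

Naively exchanging every worker's sparse piece (e.g., via allgather) costs O(Pk) values per node for P workers; the point of Ok-Topk's sparse allreduce is to keep that volume below 6k.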
Alternatives and similar repositories for Ok-Topk
Users interested in Ok-Topk are comparing it to the libraries listed below.
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 2 years ago
- Cupcake: A Compression Scheduler for Scalable Communication-Efficient Distributed Training (MLSys '23) ☆9 · Updated last year
- ☆37 · Updated this week
- ☆14 · Updated 3 years ago
- Artifact for PPoPP22 QGTC: Accelerating Quantized GNN via GPU Tensor Core. ☆29 · Updated 3 years ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆52 · Updated last year
- ☆49 · Updated 6 months ago
- ☆19 · Updated 3 years ago
- ☆25 · Updated last year
- Artifacts for our SIGCOMM'22 paper Muri ☆42 · Updated last year
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression ☆19 · Updated 10 months ago
- ☆8 · Updated 3 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆36 · Updated 5 years ago
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆39 · Updated last year
- ☆9 · Updated 2 years ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆35 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆49 · Updated 7 months ago
- ☆23 · Updated 2 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆52 · Updated 10 months ago
- LLM serving cluster simulator ☆106 · Updated last year
- ☆40 · Updated 4 years ago
- [ASPLOS'23] Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression ☆6 · Updated 10 months ago
- ☆22 · Updated last year
- ☆50 · Updated 2 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆37 · Updated 5 years ago
- ☆16 · Updated last year
- Herald: Accelerating Neural Recommendation Training with Embedding Scheduling (NSDI 2024) ☆22 · Updated last year
- GRACE - GRAdient ComprEssion for distributed deep learning ☆140 · Updated 11 months ago