[ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining
☆12 · Dec 4, 2023 · Updated 2 years ago
Alternatives and similar repositories for dear_pytorch
Users interested in dear_pytorch are comparing it to the repositories listed below.
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Apr 28, 2023 · Updated 2 years ago
- Layer-wise Sparsification of Distributed Deep Learning ☆10 · Jul 6, 2020 · Updated 5 years ago
- Source code of ICLR 2020 submission "Zeno++: Robust Fully Asynchronous SGD" ☆14 · Feb 2, 2020 · Updated 6 years ago
- A Sparse-tensor Communication Framework for Distributed Deep Learning ☆13 · Nov 1, 2021 · Updated 4 years ago
- A computation-parallel deep learning architecture ☆13 · Sep 25, 2019 · Updated 6 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Sep 21, 2023 · Updated 2 years ago
- ☆17 · May 10, 2024 · Updated last year
- GRACE - GRAdient ComprEssion for distributed deep learning ☆139 · Jul 23, 2024 · Updated last year
- PyTorch distributed backend extension with compression support ☆17 · Mar 24, 2025 · Updated 11 months ago
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression ☆20 · Jul 30, 2024 · Updated last year
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆15 · Jun 8, 2023 · Updated 2 years ago
- High-performance NCCL plugin for Bagua ☆15 · Sep 15, 2021 · Updated 4 years ago
- Official PyTorch implementation of "DBS: Dynamic Batch Size for Distributed Deep Neural Network Training" ☆23 · Sep 30, 2021 · Updated 4 years ago
- Utilities for PyTorch distributed ☆25 · Feb 27, 2025 · Updated last year
- A Linux kernel module implementing support for CCP congestion control algorithms ☆23 · Sep 17, 2025 · Updated 5 months ago
- ☆68 · Mar 14, 2023 · Updated 2 years ago
- gossip: Efficient Communication Primitives for Multi-GPU Systems ☆62 · Jul 1, 2022 · Updated 3 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆27 · Dec 10, 2022 · Updated 3 years ago
- ☆33 · Mar 31, 2021 · Updated 4 years ago
- Ancestral Gumbel-Top-k Sampling ☆25 · Apr 11, 2020 · Updated 5 years ago
- Prefix-Aware Attention for LLM Decoding ☆29 · Jan 23, 2026 · Updated last month
- Primus-SaFE (Stability and Fault Endurance) ☆52 · Updated this week
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆36 · Mar 1, 2023 · Updated 3 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆36 · May 29, 2020 · Updated 5 years ago
- netbeacon: monitor your network capture, NIDS, or network analysis process ☆19 · Oct 26, 2013 · Updated 12 years ago
- Code accompanying the NeurIPS 2019 paper "AutoAssist: A Framework to Accelerate Training of Deep Neural Networks" ☆14 · Oct 3, 2022 · Updated 3 years ago
- Automatic stabilizing and auto-piloting system for an RC flying wing ☆14 · Mar 3, 2016 · Updated 10 years ago
- Directed masked autoencoders ☆14 · Feb 20, 2026 · Updated 2 weeks ago
- PSTensor provides a way to hack the memory management of tensors in TensorFlow and PyTorch by defining your own C++ tensor class ☆10 · Feb 10, 2022 · Updated 4 years ago
- Distributed, replicated, protocol-generic key-value store in async Rust for SMR protocols research ☆17 · Updated this week
- How to plot for papers, slides, demos, etc. ☆10 · Apr 7, 2022 · Updated 3 years ago
- For our ISSTA '23 paper "ACETest: Automated Constraint Extraction for Testing Deep Learning Operators" ☆13 · Mar 30, 2024 · Updated last year
- Peking University Convex Optimization course taught by Professor Wen Zaiwen ☆11 · Jan 11, 2018 · Updated 8 years ago
- A Java port of Jieba 0.39, supporting all of the original Jieba's core features ☆12 · Feb 14, 2019 · Updated 7 years ago
- 🕹 Implementation for the Compiler Engineering course (Spring 2020) at Peking University, adapted from the UCLA CS 132 project ☆10 · Jun 21, 2020 · Updated 5 years ago
- ☆11 · Oct 21, 2023 · Updated 2 years ago
- Proposal for the next generation of course-oriented IR ☆10 · Dec 24, 2021 · Updated 4 years ago
- FPGA-based HyperLogLog Accelerator ☆12 · Jul 13, 2020 · Updated 5 years ago
- ☆11 · Apr 3, 2023 · Updated 2 years ago