zhuangwang93 / Espresso
Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '23)
☆15 · Updated Sep 21, 2023
Alternatives and similar repositories for Espresso
Users interested in Espresso are comparing it to the libraries listed below.
- Official implementation for the paper Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapp… ☆14 · Updated Nov 17, 2025
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Updated Apr 28, 2023
- ☆10 · Updated Jun 4, 2021
- Layer-wise Sparsification of Distributed Deep Learning ☆10 · Updated Jul 6, 2020
- We present a set of all-reduce compatible gradient compression algorithms which significantly reduce the communication overhead while mai… ☆10 · Updated Nov 14, 2021
- A Sparse-tensor Communication Framework for Distributed Deep Learning ☆13 · Updated Nov 1, 2021
- Code for reproducing experiments performed for Accordion ☆13 · Updated Jun 11, 2021
- Paper list on federated learning, focused on system design ☆13 · Updated Apr 13, 2022
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆27 · Updated Dec 10, 2022
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining ☆12 · Updated Dec 4, 2023
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression ☆14 · Updated Oct 27, 2023
- Optimizing data-intensive systems in disaggregated data centers ☆13 · Updated Jun 13, 2022
- ☆68 · Updated Mar 14, 2023
- Official GitHub repository for the paper "Towards timeout-less transport in commodity datacenter networks". ☆16 · Updated Oct 12, 2021
- GRACE - GRAdient ComprEssion for distributed deep learning ☆139 · Updated Jul 23, 2024
- FTPipe and related pipeline model parallelism research ☆44 · Updated May 16, 2023
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression ☆20 · Updated Jul 30, 2024
- ☆21 · Updated Apr 2, 2023
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24) ☆20 · Updated Feb 23, 2024
- QStack, a high-concurrency and low-latency user-level TCP stack for multicore systems, which can handle TCP concurrent connections in 10 m… ☆20 · Updated Jul 27, 2023
- Switch ML Application ☆200 · Updated Jul 15, 2022
- Herald: Accelerating Neural Recommendation Training with Embedding Scheduling (NSDI 2024) ☆23 · Updated May 9, 2024
- https://rs3lab.github.io/SynCord/ ☆26 · Updated Nov 23, 2022
- ☆20 · Updated Jun 29, 2022
- ☆53 · Updated Oct 14, 2023
- ☆26 · Updated Aug 31, 2023
- ☆27 · Updated Mar 2, 2023
- ☆24 · Updated Jul 7, 2024
- ☆25 · Updated Jan 29, 2019
- Hermit: Low-Latency, High-Throughput, and Transparent Remote Memory via Feedback-Directed Asynchrony ☆34 · Updated May 29, 2024
- A Hybrid Framework to Build High-performance Adaptive Neural Networks for Kernel Datapath ☆28 · Updated May 15, 2023
- Scaling Up Memory Disaggregated Applications with SMART ☆34 · Updated Apr 23, 2024
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters ☆44 · Updated Nov 4, 2022
- ☆36 · Updated Jan 21, 2021
- ☆33 · Updated Mar 31, 2021
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models ☆70 · Updated Mar 20, 2025
- ☆85 · Updated Dec 13, 2021
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆36 · Updated Mar 1, 2023
- netbeacon - monitoring your network capture, NIDS, or network analysis process ☆19 · Updated Oct 26, 2013