SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training
☆36 · Mar 1, 2023 · Updated 3 years ago
Alternatives and similar repositories for SHADE
Users interested in SHADE are comparing it to the repositories listed below.
- ☆38 · Jan 15, 2021 · Updated 5 years ago
- ☆23 · Oct 31, 2023 · Updated 2 years ago
- ☆23 · Jun 21, 2023 · Updated 2 years ago
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining · ☆12 · Dec 4, 2023 · Updated 2 years ago
- ☆42 · Jun 13, 2025 · Updated 9 months ago
- Artifacts of VLDB'22 paper "COMET: A Novel Memory-Efficient Deep Learning Training Framework by Using Error-Bounded Lossy Compression" · ☆10 · Aug 2, 2022 · Updated 3 years ago
- Near-optimal Prefetching System · ☆33 · Nov 17, 2021 · Updated 4 years ago
- ☆56 · Jan 25, 2021 · Updated 5 years ago
- [MSST '24] SAS-Cache: A Semantic-Aware Secondary Cache for LSM-based Key-Value Stores · ☆12 · Jun 3, 2024 · Updated last year
- ☆21 · Aug 13, 2024 · Updated last year
- Repository for FAST'23 paper GL-Cache: Group-level Learning for Efficient and High-Performance Caching · ☆51 · May 12, 2023 · Updated 2 years ago
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching · ☆41 · Jul 10, 2024 · Updated last year
- ☆13 · May 9, 2023 · Updated 2 years ago
- ☆31 · May 31, 2023 · Updated 2 years ago
- ☆52 · Dec 13, 2022 · Updated 3 years ago
- HDF5 Cache VOL connector for caching data on fast storage layers and moving data asynchronously to the parallel file system to hide I/O o… · ☆21 · Feb 10, 2026 · Updated last month
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training · ☆24 · Mar 1, 2024 · Updated 2 years ago
- ☆31 · Feb 22, 2024 · Updated 2 years ago
- [MSST '24] Prophet: Optimizing LSM-Based Key-Value Store on ZNS SSDs with File Lifetime Prediction and Compaction Compensation · ☆15 · Apr 20, 2024 · Updated last year
- STREAMer: Benchmarking remote volatile and non-volatile memory bandwidth · ☆17 · Aug 21, 2023 · Updated 2 years ago
- A memcomparable serialization format · ☆24 · May 16, 2023 · Updated 2 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… · ☆15 · Sep 21, 2023 · Updated 2 years ago
- ☆13 · Apr 7, 2025 · Updated 11 months ago
- "JABAS: Joint Adaptive Batching and Automatic Scaling for DNN Training on Heterogeneous GPUs" (EuroSys '25) · ☆16 · Apr 7, 2025 · Updated 11 months ago
- Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB) · ☆44 · Jul 1, 2023 · Updated 2 years ago
- The design and algorithms used in LeCaR are described in this USENIX HotStorage'18 paper and talk slides: https://www.usenix.org/conferen… · ☆28 · Jun 4, 2020 · Updated 5 years ago
- Course assignments for the MIT 6.824 open course · ☆12 · Jun 21, 2021 · Updated 4 years ago
- Herald: Accelerating Neural Recommendation Training with Embedding Scheduling (NSDI 2024) · ☆23 · May 9, 2024 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces · ☆58 · Aug 21, 2024 · Updated last year
- ☆20 · Jun 3, 2023 · Updated 2 years ago
- ☆96 · Jan 22, 2026 · Updated 2 months ago
- GeminiFS: A Companion File System for GPUs · ☆72 · Feb 18, 2025 · Updated last year
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… · ☆40 · Mar 17, 2024 · Updated 2 years ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines · ☆19 · Dec 8, 2023 · Updated 2 years ago
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] · ☆47 · Nov 24, 2022 · Updated 3 years ago
- [ICLR 2022] "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication" by Cheng Wan, Y… · ☆34 · Mar 15, 2023 · Updated 3 years ago
- OpenEmbedding is an open-source framework for TensorFlow distributed training acceleration · ☆33 · Apr 13, 2023 · Updated 2 years ago
- An implementation of online data mixing for the Pile dataset, based on the GPT-NeoX library · ☆13 · Jan 9, 2024 · Updated 2 years ago
- Benchmark implementation of CosmoFlow in TensorFlow Keras · ☆22 · Feb 7, 2024 · Updated 2 years ago