lsds / Crossbow
Crossbow: A Multi-GPU Deep Learning System for Training with Small Batch Sizes
☆55 · Updated 2 years ago
Alternatives and similar repositories for Crossbow:
Users interested in Crossbow are comparing it to the libraries listed below.
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆126 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- ☆14 · Updated 4 years ago
- Analyze network performance in distributed training ☆18 · Updated 4 years ago
- Fine-grained GPU sharing primitives ☆141 · Updated 5 years ago
- gossip: Efficient Communication Primitives for Multi-GPU Systems ☆59 · Updated 2 years ago
- BytePS examples (Vision, NLP, GAN, etc.) ☆19 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆37 · Updated 4 years ago
- ☆22 · Updated 5 years ago
- Cocytus: an efficient and available in-memory K/V store built on hybrid erasure coding and replication ☆30 · Updated 9 years ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆32 · Updated 2 years ago
- My paper/code reading notes in Chinese ☆46 · Updated 11 months ago
- Model-less Inference Serving ☆88 · Updated last year
- ☆35 · Updated 4 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Updated 4 years ago
- ☆82 · Updated 2 years ago
- Tensorflow is a computational library using data flow graphs for scalable machine learning, and Tensorflow-RDMA is the implementation ov… ☆58 · Updated 2 years ago
- Repository for SysML19 Artifacts Evaluation ☆54 · Updated 6 years ago
- A Generic Resource-Aware Hyperparameter Tuning Execution Engine ☆15 · Updated 3 years ago
- ☆16 · Updated 2 years ago
- ☆24 · Updated last year
- ☆44 · Updated 3 years ago
- Dorylus: Affordable, Scalable, and Accurate GNN Training ☆77 · Updated 3 years ago
- Code for "Heterogenity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020☆128Updated 9 months ago
- High-performance RDMA-based distributed feature collection component for training GNN models on EXTREMELY large graphs ☆52 · Updated 2 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated last year
- ☆21 · Updated 6 years ago
- Virtual Memory Abstraction for Serverless Architectures ☆48 · Updated 3 years ago
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs. ☆62 · Updated 2 years ago