ParCIS / Chimera
Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines.
☆59 · Updated last year
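For context, Chimera's core idea is to run two pipelines over the same stages in opposite directions, so each worker alternates between a stage of the "down" pipeline and a stage of the "up" pipeline and the two schedules fill each other's idle slots. A minimal bubble-count sketch, assuming D stages and D micro-batches (the setting analyzed in the paper); the helper functions are illustrative and not part of the ParCIS/Chimera code base:

```python
# Toy bubble accounting for synchronous pipeline schedules, assuming
# D pipeline stages, D micro-batches, and unit-time forward/backward
# slots. Helper names are illustrative, not from the Chimera code base.

def unidirectional_bubbles(depth: int) -> int:
    # GPipe/1F1B-style schedules idle for (depth - 1) slots while the
    # pipeline fills and another (depth - 1) while it drains.
    return 2 * (depth - 1)

def bidirectional_bubbles(depth: int) -> int:
    # With two opposite-direction pipelines, work from one pipeline
    # occupies the other's fill/drain gaps, cutting bubbles to roughly
    # (depth - 2): about the paper's reported "up to 50% fewer".
    return depth - 2

for d in (4, 8, 16):
    print(f"stages={d:2d}  unidirectional={unidirectional_bubbles(d):2d}  "
          f"bidirectional={bidirectional_bubbles(d):2d}")
```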
Alternatives and similar repositories for Chimera:
Users interested in Chimera are comparing it to the libraries listed below.
- ☆72 · Updated 3 years ago
- nnScaler: Compiling DNN models for Parallel Training ☆97 · Updated 2 weeks ago
- ☆75 · Updated 2 years ago
- ☆79 · Updated 3 months ago
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆147 · Updated 4 months ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆17 · Updated last year
- ☆50 · Updated 8 months ago
- LLM serving cluster simulator ☆92 · Updated 10 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆142 · Updated 2 years ago
- ☆83 · Updated 3 months ago
- A resilient distributed training framework ☆88 · Updated 10 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆74 · Updated 4 years ago
- High-performance Transformer implementation in C++. ☆103 · Updated last month
- Compiler for Dynamic Neural Networks ☆45 · Updated last year
- ☆135 · Updated 7 months ago
- ☆79 · Updated 2 years ago
- An experimental parallel training platform ☆54 · Updated 11 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆78 · Updated 3 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆82 · Updated last year
- Synthesizer for optimal collective communication algorithms ☆104 · Updated 10 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Parallelism ☆52 · Updated 7 months ago
- ☆47 · Updated 2 months ago
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆60 · Updated 8 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆98 · Updated 2 months ago
- Microsoft Collective Communication Library ☆62 · Updated 3 months ago
- A repository for storing personal notes and annotated papers from daily research. ☆110 · Updated this week
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) for deep learning on Tensor Cores. ☆86 · Updated 2 years ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆21 · Updated 9 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆201 · Updated last year