Compiler for Dynamic Neural Networks
☆45, updated Nov 13, 2023
Alternatives and similar repositories for brainstorm
Users interested in brainstorm are comparing it to the libraries listed below.
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines (☆19, updated Dec 8, 2023)
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable (☆210, updated Sep 21, 2024)
- An experimental parallel training platform (☆56, updated Mar 25, 2024)
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) (☆94, updated Jul 14, 2023)
- Cavs: An Efficient Runtime System for Dynamic Neural Networks (☆15, updated Sep 18, 2020)
- ATC '23 AE (☆46, updated May 11, 2023)
- FGNN's artifact evaluation (EuroSys 2022) (☆18, updated Apr 25, 2022)
- Official implementation for the paper Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapp… (☆14, updated Nov 17, 2025)
- HeliosArtifact (☆22, updated Sep 27, 2022)
- ☆17, updated May 10, 2024
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 (☆137, updated Jul 25, 2024)
- ☆84, updated Dec 2, 2022
- Inference framework for MoE layers based on TensorRT with Python binding (☆41, updated May 31, 2021)
- ☆18, updated Apr 21, 2024
- PyTorch library for cost-effective, fast, and easy serving of MoE models (☆284, updated Feb 26, 2026)
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS '24) (☆56, updated May 29, 2024)
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling (☆43, updated May 29, 2022)
- DietCode code release (☆65, updated Jul 21, 2022)
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] (☆47, updated Nov 24, 2022)
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling (☆68, updated May 1, 2024)
- TiledKernel is a code-generation library based on macro kernels and a memory-hierarchy graph data structure (☆19, updated May 12, 2024)
- Deferred Continuous Batching in Resource-Efficient Large Language Model Serving (EuroMLSys 2024) (☆19, updated May 28, 2024)
- A Streaming-Native Serving Engine for TTS/STS Models (☆56, updated Feb 22, 2026)
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… (☆104, updated Dec 24, 2022)
- Artifacts for our ASPLOS '23 paper ElasticFlow (☆55, updated May 10, 2024)
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup (☆35, updated Jan 9, 2023)
- Prefix-Aware Attention for LLM Decoding (☆29, updated Jan 23, 2026)
- An interference-aware scheduler for fine-grained GPU sharing (☆159, updated Nov 26, 2025)
- Boost hardware utilization for ML training workloads via inter-model horizontal fusion (☆32, updated May 15, 2024)
- Kubernetes Scheduler for Deep Learning (☆264, updated May 22, 2022)
- FTPipe and related pipeline model parallelism research (☆44, updated May 16, 2023)
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning (☆10, updated Apr 28, 2023)
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces (☆58, updated Aug 21, 2024)
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" (☆64, updated Jun 5, 2024)
- ☆38, updated Jun 27, 2025
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning (☆25, updated May 12, 2025)
- ☆23, updated Oct 31, 2023
- ☆146, updated Dec 19, 2025
- Tutel MoE: Optimized Mixture-of-Experts Library, Support GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4☆969Dec 21, 2025Updated 2 months ago