flexflow / flexflow-serve
FlexFlow Serve: Low-Latency, High-Performance LLM Serving
☆48 · Updated last week
Alternatives and similar repositories for flexflow-serve
Users interested in flexflow-serve are comparing it to the libraries listed below.
- A lightweight design for computation-communication overlap. ☆154 · Updated last month
- High performance Transformer implementation in C++. ☆128 · Updated 6 months ago
- ☆65 · Updated last year
- Stateful LLM Serving ☆77 · Updated 4 months ago
- ☆109 · Updated 8 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆114 · Updated this week
- ☆60 · Updated 3 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆103 · Updated 2 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆170 · Updated 10 months ago
- Artifact of the OSDI '24 paper “Llumnix: Dynamic Scheduling for Large Language Model Serving” ☆62 · Updated last year
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆56 · Updated last week
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆183 · Updated 6 months ago
- DeeperGEMM: crazy optimized version ☆70 · Updated 2 months ago
- ☆81 · Updated 4 months ago
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆215 · Updated 3 weeks ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance.⚡️ ☆91 · Updated 2 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆216 · Updated last year
- ☆89 · Updated 2 months ago
- ☆96 · Updated 10 months ago
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport ☆57 · Updated 2 months ago
- DeepSeek-V3/R1 inference performance simulator ☆156 · Updated 4 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆79 · Updated 8 months ago
- kvcached: Elastic KV cache for dynamic GPU sharing and efficient multi-LLM inference. ☆33 · Updated this week
- ☆101 · Updated 7 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆125 · Updated last year
- [DAC'25] Official implementation of “HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference” ☆62 · Updated last month
- ☆150 · Updated last year
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆36 · Updated this week
- Perplexity GPU Kernels ☆413 · Updated 2 weeks ago
- ☆50 · Updated 2 months ago