FlexFlow Serve: Low-Latency, High-Performance LLM Serving
☆77 · Sep 15, 2025 · Updated 6 months ago
Alternatives and similar repositories for flexflow-serve
Users interested in flexflow-serve are comparing it to the libraries listed below.
- ☆13 · Jan 7, 2025 · Updated last year
- Development repository for integrating FlexFlow (A distributed deep learning framework that supports flexible parallelization strategies)… ☆29 · Oct 12, 2021 · Updated 4 years ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,870 · Mar 25, 2026 · Updated 2 weeks ago
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆50 · Updated this week
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆67 · Oct 2, 2025 · Updated 6 months ago
- ☆34 · Jun 22, 2024 · Updated last year
- Prefix-Aware Attention for LLM Decoding ☆35 · Mar 31, 2026 · Updated last week
- A throughput-oriented high-performance serving framework for LLMs ☆953 · Mar 29, 2026 · Updated last week
- An Attention Superoptimizer ☆22 · Jan 20, 2025 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆470 · May 30, 2025 · Updated 10 months ago
- A scalable and robust tree-based speculative decoding algorithm (see the generic draft-and-verify sketch after this list) ☆376 · Jan 28, 2025 · Updated last year
- Compression for Foundation Models ☆35 · Jul 21, 2025 · Updated 8 months ago
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆84 · Dec 7, 2025 · Updated 4 months ago
- High performance Transformer implementation in C++. ☆154 · Jan 18, 2025 · Updated last year
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆53 · Aug 6, 2025 · Updated 8 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Feb 27, 2025 · Updated last year
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆320 · Jun 10, 2025 · Updated 10 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Feb 22, 2024 · Updated 2 years ago
- ☆132 · Nov 11, 2024 · Updated last year
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention] (see the token-trie prefix-cache sketch after this list) ☆56 · Mar 5, 2025 · Updated last year
- ☆170 · Jul 15, 2025 · Updated 8 months ago
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆24 · Oct 5, 2024 · Updated last year
- Multiple GEMM operators built with CUTLASS to support LLM inference. ☆20 · Aug 3, 2025 · Updated 8 months ago
- ☆12 · Mar 27, 2026 · Updated last week
- Efficient and easy multi-instance LLM serving ☆541 · Mar 12, 2026 · Updated 3 weeks ago
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆2,177 · Apr 2, 2026 · Updated last week
- ☆12 · Oct 16, 2022 · Updated 3 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆94 · Jul 14, 2023 · Updated 2 years ago
- LLM Inference on consumer devices ☆131 · Mar 17, 2025 · Updated last year
- Stateful LLM Serving ☆97 · Mar 11, 2025 · Updated last year
- ☆12 · Dec 1, 2023 · Updated 2 years ago
- Zplot demos ☆21 · Nov 22, 2021 · Updated 4 years ago
- ☆19 · Jun 17, 2022 · Updated 3 years ago
- ☆98 · Mar 26, 2025 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Jun 11, 2025 · Updated 9 months ago
- ☆20 · Jun 9, 2025 · Updated 10 months ago
- ☆47 · Mar 15, 2025 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆113 · Sep 10, 2024 · Updated last year
- TLLM_QMM strips the quantized-kernel implementation out of Nvidia's TensorRT-LLM, removing the NVInfer dependency, and exposes an easy-to-use Pyt… ☆16 · Jul 5, 2024 · Updated last year
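
For readers comparing the speculative decoding entries above, here is a minimal draft-and-verify sketch of speculative sampling. It is purely illustrative: the toy models, vocabulary, and `speculative_step` helper are hypothetical stand-ins, not the API of the tree-based repository listed above (which organizes draft tokens as a tree rather than the single chain shown here).

```python
# Generic speculative decoding sketch (illustrative only; not any listed
# repo's API). "Models" are callables returning a next-token distribution.
import random

VOCAB = list(range(8))

def toy_dist(seed_tokens):
    # Deterministic pseudo-distribution derived from the context,
    # standing in for a real target model's next-token probabilities.
    rng = random.Random(hash(tuple(seed_tokens)) & 0xFFFF)
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def draft_dist(seed_tokens):
    # A deliberately "cheaper" model: the target mixed with uniform
    # noise, standing in for a small draft model.
    t = toy_dist(seed_tokens)
    return [0.5 * p + 0.5 / len(VOCAB) for p in t]

def sample(dist, rng):
    return rng.choices(VOCAB, weights=dist, k=1)[0]

def speculative_step(ctx, draft, target, k, rng):
    """Draft k tokens cheaply, then verify against the target model
    using the standard accept/reject rule of speculative sampling."""
    proposal, d_probs, cur = [], [], list(ctx)
    for _ in range(k):
        dist = draft(cur)
        tok = sample(dist, rng)
        proposal.append(tok)
        d_probs.append(dist[tok])
        cur.append(tok)
    accepted, cur = [], list(ctx)
    for tok, q in zip(proposal, d_probs):
        p = target(cur)[tok]
        if rng.random() < min(1.0, p / q):  # accept with prob min(1, p/q)
            accepted.append(tok)
            cur.append(tok)
        else:
            # On rejection, resample from the residual distribution.
            t_dist, d_dist = target(cur), draft(cur)
            residual = [max(t - d, 0.0) for t, d in zip(t_dist, d_dist)]
            norm = sum(residual) or 1.0
            accepted.append(sample([r / norm for r in residual], rng))
            return accepted
    # All drafts accepted: emit one bonus token from the target model.
    accepted.append(sample(target(cur), rng))
    return accepted

rng = random.Random(0)
print(speculative_step([1, 2, 3], draft_dist, toy_dist, k=4, rng=rng))
```

Because every emitted token is either accepted under the `min(1, p/q)` rule or resampled from the residual, the output distribution matches sampling from the target model directly; the speedup comes from verifying several draft tokens per target-model pass.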
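Similarly, for the prefix-caching entries above (Marconi and the prefix-aware attention work), the core idea of reusing KV state for shared prompt prefixes can be sketched as a token trie. Everything here (`PrefixCache`, the `kv` placeholder payload) is a hypothetical illustration under that assumption, not the actual design of any listed repository.

```python
# Minimal token-trie prefix cache sketch (illustrative only). Each trie
# node stands in for a cached KV block, so a new request reuses the
# longest previously computed prefix instead of recomputing it.
class TrieNode:
    __slots__ = ("children", "kv")
    def __init__(self):
        self.children = {}  # token id -> TrieNode
        self.kv = None      # placeholder for the cached KV block

class PrefixCache:
    def __init__(self):
        self.root = TrieNode()

    def longest_prefix(self, tokens):
        """Return how many leading tokens already have cached KV."""
        node, matched = self.root, 0
        for tok in tokens:
            nxt = node.children.get(tok)
            if nxt is None or nxt.kv is None:
                break
            node, matched = nxt, matched + 1
        return matched

    def insert(self, tokens):
        """Record that KV for every prefix of `tokens` is now cached."""
        node = self.root
        for tok in tokens:
            node = node.children.setdefault(tok, TrieNode())
            node.kv = node.kv or f"kv@{tok}"  # stand-in payload

cache = PrefixCache()
cache.insert([10, 11, 12, 13])             # first request fills the cache
print(cache.longest_prefix([10, 11, 99]))  # -> 2: reuse two cached blocks
```

A real serving system would attach actual KV tensors (and eviction metadata) to the nodes; the trie merely decides where cached computation ends and fresh prefill must begin.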