FlexFlow Serve: Low-Latency, High-Performance LLM Serving
☆83 · Updated Sep 15, 2025
Alternatives and similar repositories for flexflow-serve
Users interested in flexflow-serve are comparing it to the libraries listed below.
- ☆13 · Updated Jan 7, 2025
- Development repository for integrating FlexFlow (a distributed deep learning framework that supports flexible parallelization strategies)… (☆29 · Updated Oct 12, 2021)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (☆1,873 · Updated this week)
- ☆34 · Updated Jun 22, 2024
- Vortex: A Flexible and Efficient Sparse Attention Framework (☆52 · Updated this week)
- A throughput-oriented, high-performance serving framework for LLMs (☆954 · Updated Mar 29, 2026)
- Prefix-Aware Attention for LLM Decoding (☆35 · Updated Mar 31, 2026)
- An Attention Superoptimizer (☆22 · Updated Jan 20, 2025)
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆480 · Updated May 30, 2025)
- A scalable and robust tree-based speculative decoding algorithm (☆377 · Updated Jan 28, 2025)
- Compression for Foundation Models (☆35 · Updated Jul 21, 2025)
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference (☆87 · Updated Dec 7, 2025)
- High-performance Transformer implementation in C++ (☆154 · Updated Jan 18, 2025)
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention (☆53 · Updated Aug 6, 2025)
- Performance of the C++ interfaces of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios (☆46 · Updated Feb 27, 2025)
- SpotServe: Serving Generative Large Language Models on Preemptible Instances (☆134 · Updated Feb 22, 2024)
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of …) (☆327 · Updated Jun 10, 2025)
- ☆132 · Updated Nov 11, 2024
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention] (☆56 · Updated Mar 5, 2025)
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models (☆24 · Updated Oct 5, 2024)
- Multiple GEMM operators built with CUTLASS to support LLM inference (☆20 · Updated Aug 3, 2025)
- ☆178 · Updated Jul 15, 2025
- Efficient and easy multi-instance LLM serving (☆547 · Updated Mar 12, 2026)
- ☆12 · Updated Oct 16, 2022
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel (☆2,218 · Updated Apr 19, 2026)
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) (☆94 · Updated Jul 14, 2023)
- LLM inference on consumer devices (☆130 · Updated Mar 17, 2025)
- Stateful LLM Serving (☆99 · Updated Mar 11, 2025)
- ☆12 · Updated Dec 1, 2023
- ☆98 · Updated Mar 26, 2025
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference (☆46 · Updated Jun 11, 2025)
- ☆17 · Updated May 10, 2024
- ☆21 · Updated Jun 9, 2025
- ☆47 · Updated Mar 15, 2025
- Standalone Flash Attention v2 kernel without a libtorch dependency (☆113 · Updated Sep 10, 2024)
- TLLM_QMM strips the quantized-kernel implementation out of Nvidia's TensorRT-LLM, removing the NVInfer dependency, and exposes an easy-to-use Pyt… (☆16 · Updated Jul 5, 2024)
- Simulator for LLM inference on an abstract 3D AIMC-based accelerator (☆28 · Updated Sep 18, 2025)
- OneFlow Serving (☆20 · Updated Apr 10, 2025)
- ☆66 · Updated Dec 3, 2024