Stateful LLM Serving
☆96, updated Mar 11, 2025
Alternatives and similar repositories for preble
Users who are interested in preble are comparing it to the libraries listed below.
- A low-latency & high-throughput serving engine for LLMs (☆480, updated Jan 8, 2026)
- ☆13, updated Jan 7, 2025
- Disaggregated serving system for Large Language Models (LLMs) (☆777, updated Apr 6, 2025)
- ☆131, updated Nov 11, 2024
- Efficient and easy multi-instance LLM serving (☆527, updated Sep 3, 2025)
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆463, updated May 30, 2025)
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of …) (☆314, updated Jun 10, 2025)
- [ICML 2025] Efficiently Serving Large Multimodal Models Using EPD Disaggregation (☆22, updated May 29, 2025)
- A tool for cross-checking Verilog compilers (☆14, updated Apr 16, 2025)
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …" (☆36, updated Aug 29, 2025)
- The Artifact Evaluation Version of SOSP Paper #19 (☆52, updated Aug 19, 2024)
- ☆47, updated Jun 27, 2024
- ☆34, updated Jun 22, 2024
- ☆150, updated Oct 9, 2024
- MESMERIC: A Software-based NVM Emulator Supporting Read/Write Asymmetric Latencies (☆10, updated Oct 1, 2020)
- LLM serving cluster simulator (☆135, updated Apr 25, 2024)
- https://rs3lab.github.io/SynCord/ (☆26, updated Nov 23, 2022)
- A large-scale simulation framework for LLM inference (☆539, updated Jul 25, 2025)
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" (☆77, updated Oct 15, 2025)
- A Triton-only attention backend for vLLM (☆24, updated Feb 11, 2026)
- LLM Serving Performance Evaluation Harness (☆83, updated Feb 25, 2025)
- PilotFish harvests the free GPU cycles of cloud gaming with deep learning training (☆14, updated Jul 2, 2022)
- An Open-Source SCAlable Interface for ISA Extensions for RISC-V Processors. New Version: (☆17, updated Feb 29, 2024)
- Large Language Model (LLM) Systems Paper List (☆1,836, updated Feb 8, 2026)
- ☆164, updated Jul 15, 2025
- In-Memory Key-Value Store Live Migration with NetMigrate (☆18, updated Jun 22, 2024)
- ☆19, updated May 4, 2023
- ☆84, updated Jan 22, 2026
- ☆71, updated Mar 26, 2025
- A throughput-oriented high-performance serving framework for LLMs (☆946, updated Oct 29, 2025)
- SpotServe: Serving Generative Large Language Models on Preemptible Instances (☆135, updated Feb 22, 2024)
- ☆36, updated Jan 21, 2021
- NVIDIA Inference Xfer Library (NIXL) (☆890, updated Feb 20, 2026)
- Modular and structured prompt caching for low-latency LLM inference (☆109, updated Nov 9, 2024)
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving (☆74, updated Sep 15, 2025)
- PyTorch library for cost-effective, fast and easy serving of MoE models (☆284, updated this week)
- The driver for LMCache core to run in vLLM (☆60, updated Feb 4, 2025)
- Self-host LLMs with LMDeploy and BentoML (☆22, updated Dec 26, 2025)
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (☆120, updated Mar 13, 2024)