Stateful LLM Serving
☆97, updated Mar 11, 2025
Alternatives and similar repositories for preble
Users interested in preble are comparing it to the repositories listed below.
- A low-latency & high-throughput serving engine for LLMs (☆490, updated Jan 8, 2026)
- ☆34, updated Jun 22, 2024
- Disaggregated serving system for Large Language Models (LLMs) (☆798, updated Apr 6, 2025)
- Efficient and easy multi-instance LLM serving (☆541, updated Mar 12, 2026)
- ☆47, updated Jun 27, 2024
- ☆132, updated Nov 11, 2024
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …" (☆37, updated Aug 29, 2025)
- [ICML 2025] Efficiently Serving Large Multimodal Models Using EPD Disaggregation (☆23, updated May 29, 2025)
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … (☆320, updated Jun 10, 2025)
- ☆13, updated Jan 7, 2025
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆470, updated May 30, 2025)
- A large-scale simulation framework for LLM inference (☆581, updated Jul 25, 2025)
- ☆156, updated Oct 9, 2024
- LLM serving cluster simulator (☆144, updated Apr 25, 2024)
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" (☆81, updated Oct 15, 2025)
- LLM Serving Performance Evaluation Harness (☆84, updated Feb 25, 2025)
- SpotServe: Serving Generative Large Language Models on Preemptible Instances (☆134, updated Feb 22, 2024)
- An Open-Source SCAlable Interface for ISA Extensions for RISC-V Processors. New Version: (☆17, updated Feb 29, 2024)
- ☆99, updated Jan 22, 2026
- ☆170, updated Jul 15, 2025
- Large Language Model (LLM) Systems Paper List (☆1,902, updated Mar 24, 2026)
- Modular and structured prompt caching for low-latency LLM inference (☆110, updated Nov 9, 2024)
- MESMERIC: A Software-based NVM Emulator Supporting Read/Write Asymmetric Latencies (☆10, updated Oct 1, 2020)
- A throughput-oriented high-performance serving framework for LLMs (☆953, updated Mar 29, 2026)
- ☆19, updated May 4, 2023
- The Artifact Evaluation Version of SOSP Paper #19 (☆54, updated Aug 19, 2024)
- Self-host LLMs with LMDeploy and BentoML (☆22, updated Dec 26, 2025)
- Materials for learning SGLang (☆792, updated Jan 5, 2026)
- NVIDIA Inference Xfer Library (NIXL) (☆963, updated Apr 2, 2026)
- PilotFish harvests the free GPU cycles of cloud gaming for deep learning training (☆14, updated Jul 2, 2022)
- Efficient LLM Inference over Long Sequences (☆392, updated Jun 25, 2025)
- tensorflow fork with Salus integration (☆12, updated Jan 7, 2022)
- Analyze the inference of Large Language Models (LLMs), covering aspects like computation, storage, transmission, and hardware roofline mod… (☆633, updated Sep 11, 2024)
- ☆72, updated Mar 26, 2025
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (☆118, updated Mar 13, 2024)
- A tool for cross-checking Verilog compilers (☆15, updated Apr 16, 2025)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI (☆5,039, updated this week)
- ☆15, updated Apr 13, 2024
- Curated collection of papers in machine learning systems (☆533, updated Feb 7, 2026)