Stateful LLM Serving
☆99 · Mar 11, 2025 · Updated last year
Alternatives and similar repositories for preble
Users interested in preble are comparing it to the libraries listed below.
- A low-latency & high-throughput serving engine for LLMs ☆496 · Jan 8, 2026 · Updated 3 months ago
- ☆34 · Jun 22, 2024 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs). ☆804 · Apr 6, 2025 · Updated last year
- Efficient and easy multi-instance LLM serving ☆547 · Mar 12, 2026 · Updated last month
- ☆47 · Jun 27, 2024 · Updated last year
- ☆132 · Nov 11, 2024 · Updated last year
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …" ☆38 · Aug 29, 2025 · Updated 8 months ago
- [ICML 2025] Efficiently Serving Large Multimodal Models Using EPD Disaggregation ☆24 · May 29, 2025 · Updated 11 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆327 · Jun 10, 2025 · Updated 10 months ago
- ☆13 · Jan 7, 2025 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆480 · May 30, 2025 · Updated 10 months ago
- Accurate, large-scale, and extensible simulator for LLM inference systems ☆595 · Jul 25, 2025 · Updated 9 months ago
- ☆158 · Oct 9, 2024 · Updated last year
- LLM serving cluster simulator ☆150 · Apr 25, 2024 · Updated 2 years ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆83 · Oct 15, 2025 · Updated 6 months ago
- LLM Serving Performance Evaluation Harness ☆85 · Feb 25, 2025 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Feb 22, 2024 · Updated 2 years ago
- An Open-Source SCAlable Interface for ISA Extensions for RISC-V Processors. New Version: ☆17 · Feb 29, 2024 · Updated 2 years ago
- ☆104 · Apr 23, 2026 · Updated last week
- Large Language Model (LLM) Systems Paper List ☆1,942 · Apr 17, 2026 · Updated last week
- ☆178 · Jul 15, 2025 · Updated 9 months ago
- Modular and structured prompt caching for low-latency LLM inference ☆112 · Nov 9, 2024 · Updated last year
- MESMERIC: A Software-based NVM Emulator Supporting Read/Write Asymmetric Latencies ☆10 · Oct 1, 2020 · Updated 5 years ago
- A throughput-oriented high-performance serving framework for LLMs ☆954 · Mar 29, 2026 · Updated last month
- ☆19 · May 4, 2023 · Updated 2 years ago
- The Artifact Evaluation Version of SOSP Paper #19 ☆54 · Aug 19, 2024 · Updated last year
- Self-host LLMs with LMDeploy and BentoML ☆22 · Dec 26, 2025 · Updated 4 months ago
- Materials for learning SGLang ☆806 · Jan 5, 2026 · Updated 3 months ago
- PilotFish harvests the free GPU cycles of cloud gaming with deep learning training ☆14 · Jul 2, 2022 · Updated 3 years ago
- Efficient LLM Inference over Long Sequences ☆394 · Jun 25, 2025 · Updated 10 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆1,003 · Updated this week
- tensorflow fork with Salus integration ☆12 · Jan 7, 2022 · Updated 4 years ago
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆636 · Sep 11, 2024 · Updated last year
- ☆72 · Mar 26, 2025 · Updated last year
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Mar 13, 2024 · Updated 2 years ago
- A tool for cross-checking Verilog compilers ☆15 · Apr 16, 2025 · Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆5,186 · Updated this week
- ☆15 · Apr 13, 2024 · Updated 2 years ago
- Curated collection of papers in machine learning systems ☆542 · Feb 7, 2026 · Updated 2 months ago