A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of vLLM).
☆314, updated Jun 10, 2025
Alternatives and similar repositories for swiftLLM
Users interested in swiftLLM are comparing it to the libraries listed below.
- Disaggregated serving system for Large Language Models (LLMs) (☆777, updated Apr 6, 2025)
- High-performance Transformer implementation in C++ (☆152, updated Jan 18, 2025)
- NEO is an LLM inference engine built to alleviate the GPU memory crisis via CPU offloading (☆84, updated Jun 16, 2025)
- ☆131, updated Nov 11, 2024
- A low-latency & high-throughput serving engine for LLMs (☆480, updated Jan 8, 2026)
- A throughput-oriented high-performance serving framework for LLMs (☆946, updated Oct 29, 2025)
- Efficient and easy multi-instance LLM serving (☆527, updated Sep 3, 2025)
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆464, updated May 30, 2025)
- A large-scale simulation framework for LLM inference (☆539, updated Jul 25, 2025)
- An Attention Superoptimizer (☆22, updated Jan 20, 2025)
- Stateful LLM Serving (☆96, updated Mar 11, 2025)
- A ChatGPT (GPT-3.5) & GPT-4 workload trace for optimizing LLM serving systems (☆241, updated Feb 1, 2026)
- Latency and Memory Analysis of Transformer Models for Training and Inference (☆477, updated Apr 19, 2025); see the KV-cache sizing sketch after this list
- PyTorch library for cost-effective, fast and easy serving of MoE models (☆284, updated this week)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆374, updated Jul 10, 2025)
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration (☆260, updated Nov 18, 2024)
- Papers and accompanying code for AI systems (☆348, updated Feb 10, 2026)
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding (☆277, updated Aug 31, 2024)
- An implementation of Flash Attention using CuTe (☆101, updated Dec 17, 2024)
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS'25] (☆12, updated Nov 8, 2024)
- Large Language Model (LLM) Systems Paper List (☆1,836, updated Feb 8, 2026)
- LLM Serving Performance Evaluation Harness (☆83, updated Feb 25, 2025)
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable (☆210, updated Sep 21, 2024)
- The Next-gen Language & Compiler Powering Efficient Hardware Design (☆36, updated Jan 16, 2025)
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… (☆282, updated Mar 6, 2025)
- ☆118, updated May 19, 2025
- A Triton-only attention backend for vLLM (☆24, updated Feb 11, 2026)
- ☆65, updated Apr 26, 2025
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (☆120, updated Mar 13, 2024); see the roofline sketch after this list
- FlashInfer: Kernel Library for LLM Serving (☆5,009, updated Feb 23, 2026)
- Distributed compiler based on Triton for parallel systems (☆1,361, updated Feb 13, 2026)
- Code & examples for "CUDA - From Correctness to Performance" (☆123, updated Oct 24, 2024)
- ☆87, updated Oct 17, 2025
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗) (☆661, updated this week)
- An auxiliary project analyzing the characteristics of KV in DiT Attention (☆33, updated Nov 29, 2024)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆336, updated Jul 2, 2024)
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs (☆123, updated Jul 4, 2025)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI (☆4,843, updated this week)
- Tile-based language built for AI computation across all scales (☆138, updated this week)
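
For context on what the memory-analysis and KV-cache-compression projects above are fighting, here is a minimal back-of-the-envelope KV-cache sizing sketch. The model shape is an assumption (roughly Llama-2-7B-like: 32 layers, 32 KV heads, head dim 128, FP16), and `kv_cache_bytes` is a hypothetical helper, not any listed project's API:

```python
# Back-of-the-envelope KV-cache size estimate (assumed model shape,
# roughly Llama-2-7B-like; not taken from any repo above).

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, bytes_per_elem: int = 2) -> int:
    """KV cache = 2 (K and V) * layers * kv_heads * head_dim
    * seq_len * batch * bytes per element (2 for FP16)."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

gb = kv_cache_bytes(layers=32, kv_heads=32, head_dim=128,
                    seq_len=4096, batch=16) / 1e9
print(f"~{gb:.1f} GB of KV cache")  # ~34 GB at batch 16, 4K context
```

At batch 16 and a 4K context this is roughly 34 GB of KV cache on top of about 14 GB of FP16 weights, a large share of an 80 GB GPU, which is why paging, offloading, and compression feature so prominently in the list above.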
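And here is a minimal sketch of the Roofline Model as applied to LLM inference: attainable throughput is min(peak compute, memory bandwidth × arithmetic intensity). The hardware numbers are assumptions (roughly A100-80GB-class: 312 TFLOP/s FP16 peak, 2 TB/s HBM), and `attainable_flops` is a hypothetical function for illustration:

```python
# Roofline sketch with assumed A100-80GB-class numbers.
PEAK_FLOPS = 312e12  # FP16 tensor-core peak, FLOP/s (assumed)
PEAK_BW = 2.0e12     # HBM bandwidth, bytes/s (assumed)

def attainable_flops(arithmetic_intensity: float) -> float:
    """Roofline: attainable FLOP/s = min(peak compute, bandwidth * AI)."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)

# Batch-1 FP16 decode over a weight matrix is roughly a GEMV:
# ~2*N FLOPs per 2*N weight bytes read, i.e. AI ~= 1 FLOP/byte.
# Batching B requests reuses each weight read B times, raising AI ~B x.
for batch in (1, 8, 64, 512):
    ai = batch * 1.0
    print(f"batch={batch:4d}: AI={ai:7.1f} FLOP/B, "
          f"{attainable_flops(ai)/1e12:6.1f} TFLOP/s attainable")
```

With these numbers the ridge point sits at 156 FLOP/byte, so batch-1 decode runs far below peak compute in the memory-bound region, which is why the throughput-oriented serving engines above batch so aggressively.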