High-performance Transformer implementation in C++.
☆152 · updated Jan 18, 2025
Alternatives and similar repositories for SwiftTransformer
Users interested in SwiftTransformer are comparing it to the libraries listed below.
- Disaggregated serving system for Large Language Models (LLMs). — ☆777 · updated Apr 6, 2025
- ☆131 · updated Nov 11, 2024
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … — ☆314 · updated Jun 10, 2025
- A low-latency & high-throughput serving engine for LLMs — ☆480 · updated Jan 8, 2026
- A throughput-oriented high-performance serving framework for LLMs — ☆946 · updated Oct 29, 2025
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable — ☆210 · updated Sep 21, 2024
- Dynamic Memory Management for Serving LLMs without PagedAttention — ☆463 · updated May 30, 2025
- KV cache store for distributed LLM inference — ☆392 · updated Nov 13, 2025
- Efficient and easy multi-instance LLM serving — ☆527 · updated Sep 3, 2025
- A large-scale simulation framework for LLM inference — ☆539 · updated Jul 25, 2025
- An easy-to-understand TensorOp Matmul tutorial — ☆410 · updated Feb 11, 2026
- This is the implementation repository of our OSDI'23 paper: SMART: A High-Performance Adaptive Radix Tree for Disaggregated Memory. — ☆63 · updated Oct 28, 2024
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … — ☆193 · updated Jan 28, 2025
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. — ☆46 · updated Jun 11, 2025
- ☆150 · updated Oct 9, 2024
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … — ☆273 · updated Aug 6, 2025
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. — ☆1,261 · updated Aug 28, 2025
- ☆13 · updated Jan 7, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… — ☆816 · updated Mar 6, 2025
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning — ☆57 · updated Mar 26, 2024
- MSCCL++: A GPU-driven communication stack for scalable AI applications — ☆469 · updated Feb 21, 2026
- ☆152 · updated Jan 9, 2025
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" — ☆64 · updated Jun 5, 2024
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) — ☆174 · updated Jul 10, 2024
- ☆97 · updated Mar 26, 2025
- Automatic resource configuration for serverless workflows. — ☆21 · updated Mar 24, 2024
- A lightweight design for computation-communication overlap. — ☆221 · updated Jan 20, 2026
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a minimal sketch of the roofline calculation follows this list). — ☆120 · updated Mar 13, 2024
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… — ☆3,901 · updated Feb 20, 2026
- FlashInfer: Kernel Library for LLM Serving — ☆5,009 · updated this week
- A model serving framework for various research and production scenarios. Seamlessly built upon the PyTorch and HuggingFace ecosystem. — ☆23 · updated Oct 11, 2024
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling — ☆55 · updated Jan 12, 2026
- KV cache compression for high-throughput LLM inference — ☆154 · updated Feb 5, 2025
- Nsight Compute In Docker — ☆13 · updated Dec 21, 2023
- Implementation of the logging layer of our SOSP '23 paper Halfmoon — ☆11 · updated Jul 28, 2023
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference — ☆372 · updated Jul 10, 2025
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving — ☆74 · updated Sep 15, 2025
- ☆78 · updated May 4, 2021
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" — ☆77 · updated Oct 15, 2025
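
For the roofline comparison item above, here is a minimal sketch of the calculation such a tool performs. The hardware peaks and the decode-GEMV cost model below are illustrative assumptions for this example, not values taken from the repository.

```python
# Hedged sketch: classic roofline estimate for one step of LLM decode.
# All hardware numbers are illustrative assumptions, not measured values.

PEAK_FLOPS = 312e12  # assumed dense FP16 peak, ~312 TFLOP/s
PEAK_BW = 2.0e12     # assumed HBM bandwidth, ~2.0 TB/s

def attainable_flops(arithmetic_intensity: float) -> float:
    """Roofline: min(compute peak, intensity * bandwidth peak)."""
    return min(PEAK_FLOPS, arithmetic_intensity * PEAK_BW)

# Decode-stage GEMV through a d x d FP16 weight matrix at batch size b:
# ~2*b*d*d FLOPs while streaming ~2*d*d bytes of weights, so the
# arithmetic intensity is roughly b FLOP/byte -- memory-bound at small b.
d, b = 4096, 8
flops = 2 * b * d * d
bytes_moved = 2 * d * d          # FP16 weights dominate at small batch
intensity = flops / bytes_moved  # ~= b

print(f"intensity ~ {intensity:.1f} FLOP/B, "
      f"attainable ~ {attainable_flops(intensity) / 1e12:.1f} TFLOP/s "
      f"of {PEAK_FLOPS / 1e12:.0f} TFLOP/s peak")
```

With these assumed numbers the decode step attains only ~16 of 312 TFLOP/s, which is why roofline comparisons across hardware platforms hinge on memory bandwidth rather than compute peak for small-batch LLM decoding.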