ArcticInference: vLLM plugin for high-throughput, low-latency inference
☆403, updated Feb 24, 2026
Alternatives and similar repositories for ArcticInference
Users interested in ArcticInference are comparing it to the libraries listed below.
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) (☆276, updated this week)
- Efficient Long-context Language Model Training by Core Attention Disaggregation (☆91, updated Feb 23, 2026)
- FlashInfer: Kernel Library for LLM Serving (☆5,057, updated this week)
- A throughput-oriented high-performance serving framework for LLMs (☆947, updated Oct 29, 2025)
- NVIDIA Inference Xfer Library (NIXL) (☆898, updated this week)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆817, updated Mar 6, 2025)
- ☆20, updated Jun 9, 2025
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens; see the INT4 weight-packing sketch after this list (☆1,025, updated Sep 4, 2024)
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25) (☆2,201, updated Feb 20, 2026)
- Distributed Compiler based on Triton for Parallel Systems (☆1,371, updated Feb 13, 2026)
- Supercharge Your LLM with the Fastest KV Cache Layer; see the paged KV cache sketch after this list (☆7,272, updated this week)
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs (☆880, updated this week)
- ☆71, updated Mar 26, 2025
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] (☆65, updated Oct 2, 2025)
- Collection of kernels written in the Triton language (☆178, updated Jan 27, 2026)
- Disaggregated serving system for Large Language Models (LLMs) (☆777, updated Apr 6, 2025)
- Perplexity GPU Kernels (☆567, updated Nov 7, 2025)
- Persistent dense GEMM for Hopper in `CuTeDSL` (☆15, updated Aug 9, 2025)
- ☆13, updated Jan 7, 2025
- A fast communication-overlapping library for tensor/expert parallelism on GPUs (☆1,261, updated Aug 28, 2025)
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆143, updated Dec 4, 2024)
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM; see the speculative sampling sketch after this list (☆249, updated this week)
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel (☆2,145, updated Feb 23, 2026)
- Official Implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton (☆42, updated Feb 13, 2025)
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels (☆87, updated this week)
- A selective knowledge distillation algorithm for efficient speculative decoders (☆36, updated Nov 27, 2025)
- ☆15, updated Feb 24, 2026
- Expert Specialization MoE Solution based on CUTLASS (☆27, updated Jan 19, 2026)
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (☆2,787, updated this week)
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding (☆93, updated Dec 2, 2025)
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) (☆369, updated Apr 22, 2025)
- Materials for learning SGLang (☆766, updated Jan 5, 2026)
- A Datacenter Scale Distributed Inference Serving Framework (☆6,154, updated this week)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI (☆4,843, updated this week)
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… (☆271, updated Feb 20, 2026)
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… (☆380, updated Feb 25, 2026)
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling (☆51, updated Jul 15, 2025)
- Tile primitives for speedy kernels (☆3,202, updated Feb 24, 2026)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference; see the page-selection sketch after this list (☆374, updated Jul 10, 2025)
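Several entries above (EAGLE, FR-Spec, SAM-Decoding, Spec-Bench, and the unified speculative decoding library) build on the same accept/reject verification rule from speculative sampling. Below is a minimal Python/NumPy sketch of that rule, not any of these repositories' actual code; `draft_tokens`, `q_probs`, and `p_probs` are assumed inputs standing in for the draft proposals and the draft/target model distributions.

```python
import numpy as np

def speculative_verify(draft_tokens, q_probs, p_probs, rng):
    """Accept/reject draft tokens given target probs p and draft probs q.

    draft_tokens: list[int], the k tokens proposed by the draft model
    q_probs: (k, V) draft-model distributions the tokens were sampled from
    p_probs: (k+1, V) target-model distributions at the same positions
    Returns the accepted prefix plus one corrected (or bonus) token.
    """
    accepted = []
    for i, tok in enumerate(draft_tokens):
        p, q = p_probs[i], q_probs[i]
        # Accept with probability min(1, p[tok] / q[tok]).
        if rng.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)
        else:
            # On rejection, resample from the residual max(p - q, 0),
            # which keeps the output distribution exactly p.
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(len(p), p=residual)))
            return accepted
    # All drafts accepted: sample one bonus token from the target model.
    accepted.append(int(rng.choice(p_probs.shape[1], p=p_probs[len(draft_tokens)])))
    return accepted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V, k = 8, 3
    q = rng.dirichlet(np.ones(V), size=k)        # toy draft distributions
    p = rng.dirichlet(np.ones(V), size=k + 1)    # toy target distributions
    drafts = [int(rng.choice(V, p=q[i])) for i in range(k)]
    print(speculative_verify(drafts, q, p, rng))
```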
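The FP16xINT4 kernel entry (Marlin-style) relies on weights stored two 4-bit values per byte with per-group scales, dequantized on the fly inside the GEMM. The following NumPy sketch shows that packing scheme under an assumed symmetric per-group quantizer with group size 128; real kernels use fused, register-friendly layouts rather than this row-major one.

```python
import numpy as np

GROUP = 128  # assumed quantization group size

def quantize_int4(w):
    """Symmetric per-group INT4 quantization of a 1-D fp16 weight vector."""
    w = w.reshape(-1, GROUP)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # int4 range is [-8, 7]
    scale[scale == 0] = 1.0                             # guard all-zero groups
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    u = (q + 8).astype(np.uint8)                        # offset to [0, 15]
    packed = u[:, 0::2] | (u[:, 1::2] << 4)             # two nibbles per byte
    return packed, scale.astype(np.float16)

def dequantize_int4(packed, scale):
    """Unpack nibbles and rescale; the kernel fuses this into the GEMM."""
    lo = (packed & 0x0F).astype(np.int8) - 8
    hi = (packed >> 4).astype(np.int8) - 8
    q = np.empty((packed.shape[0], packed.shape[1] * 2), dtype=np.int8)
    q[:, 0::2], q[:, 1::2] = lo, hi
    return (q.astype(np.float16) * scale).reshape(-1)

w = np.random.randn(2 * GROUP).astype(np.float16)
packed, scale = quantize_int4(w)
print(np.abs(w - dequantize_int4(packed, scale)).max())  # within one quant step
```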
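KV cache layers such as LMCache, and serving kernels like FlashInfer, operate over block-structured KV storage in the PagedAttention style: a per-sequence block table maps logical token positions to physical cache blocks. Here is a minimal sketch of that bookkeeping; `PagedKVCache`, `BLOCK`, and the layout are illustrative assumptions, not any project's API.

```python
import numpy as np

BLOCK = 16  # tokens per KV block (assumed)

class PagedKVCache:
    """Minimal block-table KV cache: logical positions -> physical blocks."""

    def __init__(self, num_blocks, num_heads, head_dim):
        # kv[b, 0] holds keys for block b, kv[b, 1] holds values.
        self.kv = np.zeros((num_blocks, 2, BLOCK, num_heads, head_dim), np.float16)
        self.free = list(range(num_blocks))
        self.block_table = {}  # seq_id -> list of physical block ids

    def append(self, seq_id, pos, k, v):
        """Write one token's K/V, allocating a new block at block boundaries."""
        table = self.block_table.setdefault(seq_id, [])
        if pos % BLOCK == 0:
            table.append(self.free.pop())  # grab a free physical block
        blk = table[pos // BLOCK]
        self.kv[blk, 0, pos % BLOCK] = k
        self.kv[blk, 1, pos % BLOCK] = v

    def keys(self, seq_id, length):
        """Gather the first `length` key vectors of a sequence, page by page."""
        table = self.block_table[seq_id]
        return self.kv[table, 0].reshape(-1, *self.kv.shape[3:])[:length]
```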
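Quest's query-aware sparsity keeps per-page elementwise min/max statistics of the keys, scores each page by an upper bound on the query-key dot product, and attends only to the top-k pages. This simplified NumPy sketch shows the selection step; the page size and k are chosen arbitrarily for illustration.

```python
import numpy as np

PAGE = 16   # tokens per KV page (assumed)
TOP_K = 4   # pages to keep per query (assumed)

def select_pages(query, keys):
    """Query-aware page selection in the style of Quest.

    query: (d,) current query vector
    keys:  (n_pages, PAGE, d) paged key cache
    Returns indices of the pages with the largest attention upper bound.
    """
    k_min = keys.min(axis=1)   # (n_pages, d) per-page elementwise min
    k_max = keys.max(axis=1)   # (n_pages, d) per-page elementwise max
    # Upper bound on q.k for any token in the page: per channel, take
    # whichever extreme maximizes the product given the query's sign.
    bound = np.maximum(query * k_min, query * k_max).sum(axis=1)
    return np.argsort(bound)[-TOP_K:][::-1]

rng = np.random.default_rng(0)
keys = rng.standard_normal((32, PAGE, 64))
print(select_pages(rng.standard_normal(64), keys))  # the 4 most promising pages
```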