vllm-project / recipes
Common recipes to run vLLM
☆214 · Updated last week
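The recipes collected here are end-to-end examples of serving and running inference with vLLM. For orientation, below is a minimal offline-inference sketch using vLLM's public Python API; the model name and sampling settings are placeholder assumptions for illustration, not taken from any specific recipe in the repo.

```python
# Minimal vLLM offline-inference sketch (illustrative; model name and
# sampling settings are placeholders, not taken from the recipes repo).
from vllm import LLM, SamplingParams

# Load any Hugging Face-compatible model; a small model keeps the demo light.
llm = LLM(model="facebook/opt-125m")

# Nucleus sampling with a short generation budget.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() takes a list of prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["What is speculative decoding?"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```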
Alternatives and similar repositories for recipes
Users interested in recipes are comparing it to the libraries listed below.
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆299 · Updated this week
- vLLM performance dashboard ☆37 · Updated last year
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM (see the hedged speculative decoding sketch after this list) ☆65 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆229 · Updated this week
- Efficient LLM Inference over Long Sequences ☆390 · Updated 4 months ago
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆820 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆204 · Updated this week
- vLLM Router ☆50 · Updated last year
- ☆56 · Updated 11 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆241 · Updated this week
- ☆205 · Updated 6 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆83 · Updated 2 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆460 · Updated last week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆628 · Updated this week
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆131 · Updated last month
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA, and HPU ☆701 · Updated this week
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆203 · Updated 5 months ago
- ☆264 · Updated last week
- KV cache compression for high-throughput LLM inference ☆143 · Updated 9 months ago
- The driver for LMCache core to run in vLLM ☆56 · Updated 9 months ago
- Materials for learning SGLang ☆636 · Updated 2 weeks ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆130 · Updated 11 months ago
- ☆144 · Updated 4 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆700 · Updated this week
- ☆121 · Updated last year
- Inference server benchmarking tool ☆128 · Updated last month
- ☆309 · Updated last week
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆307 · Updated last week
- ☆97 · Updated 7 months ago
- PyTorch Distributed-native training library for LLMs/VLMs with out-of-the-box Hugging Face support ☆167 · Updated this week
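Several entries above revolve around speculative decoding in vLLM. As a rough illustration of what that looks like from the user side, here is a hedged sketch of enabling a draft model via `speculative_config`; the exact argument name and keys have varied across vLLM versions, and both model names are placeholder assumptions, so verify against the docs for your installed release.

```python
# Hedged sketch: draft-model speculative decoding in vLLM.
# The speculative_config keys below follow recent vLLM releases, but names
# differ between versions -- check your installed vLLM before relying on them.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # target model (placeholder)
    speculative_config={
        "model": "meta-llama/Llama-3.2-1B-Instruct",  # small draft model (placeholder)
        "num_speculative_tokens": 5,  # tokens the draft proposes per verification step
    },
)

outputs = llm.generate(
    ["Explain KV-cache reuse in one sentence."],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```

Because drafted tokens are verified against the target model before acceptance, speculative decoding preserves the target model's output distribution: the draft model choice affects speed, not quality.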