vllm-project / recipes
Common recipes to run vLLM
☆245 · Updated last week
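As context for the list below, here is a minimal sketch of the kind of offline-inference recipe the repo collects, using the public vLLM Python API. The model id is an arbitrary placeholder, not taken from the recipes repo:

```python
# Minimal offline-generation sketch with the vLLM Python API.
# The model id below is a small placeholder; real recipes target specific models.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any Hugging Face-compatible model id
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The key idea behind speculative decoding is"], params)
for out in outputs:
    print(out.outputs[0].text)
```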
Alternatives and similar repositories for recipes
Users interested in recipes are comparing it to the libraries listed below
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM (a toy sketch of the draft-and-verify loop these projects share follows the list) ☆132 · Updated last week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆327 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆254 · Updated last week
- A framework for efficient inference with omni-modality models ☆466 · Updated this week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆730 · Updated this week
- Efficient LLM Inference over Long Sequences ☆392 · Updated 5 months ago
- vLLM performance dashboard ☆38 · Updated last year
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆851 · Updated last week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆691 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆234 · Updated this week
- ☆205 · Updated 6 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆210 · Updated 2 weeks ago
- ☆317 · Updated last week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆498 · Updated last week
- ☆317 · Updated this week
- vLLM Router ☆51 · Updated last year
- KV cache compression for high-throughput LLM inference ☆145 · Updated 9 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆130 · Updated 2 months ago
- PyTorch Distributed-native training library for LLMs/VLMs with out-of-the-box Hugging Face support ☆187 · Updated last week
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated last week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆347 · Updated 7 months ago
- The driver for LMCache core to run in vLLM ☆58 · Updated 10 months ago
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆209 · Updated 6 months ago
- Load compute kernels from the Hub ☆337 · Updated last week
- Inference server benchmarking tool ☆130 · Updated 2 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆132 · Updated last year
- vLLM adapter for a TGIS-compatible gRPC server. ☆45 · Updated this week
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆205 · Updated last month
- HuggingFace conversion and training library for Megatron-based models ☆228 · Updated this week
- An early research-stage MoE load balancer based on linear programming ☆415 · Updated 2 weeks ago
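Several of the entries above (the speculative decoding library, SpecForge-style trainers, LayerSkip, and the throughput-latency work) build on the same draft-and-verify loop: a cheap draft model proposes several tokens, and the target model checks them in one pass, accepting the longest agreeing prefix. Below is a toy, model-free sketch of that loop; the two "models" are invented stand-in functions, not real LLMs:

```python
# Toy illustration of greedy speculative decoding: a cheap draft proposes
# k tokens, the target verifies them and accepts the longest matching prefix,
# substituting its own token at the first mismatch. Stand-in heuristics only.

def draft_next(seq):   # cheap draft model (invented heuristic)
    return (seq[-1] + 1) % 50

def target_next(seq):  # expensive target model (invented heuristic)
    return (seq[-1] + 1) % 50 if seq[-1] % 7 else 0

def speculative_step(seq, k=4):
    # 1) Draft proposes k tokens autoregressively (cheap to run).
    proposal, cur = [], list(seq)
    for _ in range(k):
        t = draft_next(cur)
        proposal.append(t)
        cur.append(t)
    # 2) Target verifies the proposals (in real systems, in one parallel
    #    forward pass) and keeps the longest prefix it agrees with.
    accepted, cur = [], list(seq)
    for t in proposal:
        expected = target_next(cur)
        if expected == t:
            accepted.append(t)
            cur.append(t)
        else:
            accepted.append(expected)  # target's own token replaces the miss
            break
    return seq + accepted

seq = [1]
for _ in range(5):
    seq = speculative_step(seq)
print(seq)
```

When the draft and target agree often, each step emits several tokens for roughly the cost of one target pass, which is the speedup these projects chase.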