vllm-project / recipes
Common recipes to run vLLM
☆283 · Updated last week
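Since the repo is a collection of recipes for running vLLM, here is a minimal sketch of the simplest kind of recipe it covers: offline batch inference through vLLM's Python `LLM` API. The model name is an illustrative assumption, not one taken from the repo.

```python
# Minimal offline-inference sketch using vLLM's public Python API.
from vllm import LLM, SamplingParams

# Any Hugging Face model that vLLM supports; this one is just an example.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=64)

# Generate completions for a batch of prompts in one call.
outputs = llm.generate(["What does speculative decoding do?"], params)
for out in outputs:
    print(out.outputs[0].text)
```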
Alternatives and similar repositories for recipes
Users interested in recipes are comparing it to the libraries listed below.
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆354 · Updated this week
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆160 · Updated last week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆871 · Updated this week
- vLLM performance dashboard ☆39 · Updated last year
- A modern web interface for managing and interacting with vLLM servers (www.github.com/vllm-project/vllm). Supports both GPU and CPU modes… (see the client-side serving sketch after this list) ☆172 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆261 · Updated this week
- Efficient LLM Inference over Long Sequences ☆394 · Updated 5 months ago
- PyTorch Distributed-native training library for LLMs/VLMs with out-of-the-box Hugging Face support ☆209 · Updated last week
- A framework for efficient model inference with omni-modality models ☆1,335 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆249 · Updated 2 weeks ago
- An expert-parallel load balancer for MoE models based on linear programming, at an early research stage. ☆469 · Updated last month
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆726 · Updated 3 weeks ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆765 · Updated this week
- Utils for Unsloth https://github.com/unslothai/unsloth ☆183 · Updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 months ago
- ☆273 · Updated this week
- ☆610 · Updated last week
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference Time Scaling ☆463 · Updated 7 months ago
- The driver for LMCache core to run in vLLM ☆59 · Updated 10 months ago
- vLLM Router ☆52 · Updated last year
- [NeurIPS 2025] A simple extension to vLLM that helps you speed up reasoning models without training. ☆215 · Updated 6 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆557 · Updated last week
- ☆321 · Updated this week
- Community-maintained hardware plugin for vLLM on Apple Silicon ☆62 · Updated this week
- Advanced quantization toolkit for LLMs and VLMs. Support for WOQ, MXFP4, NVFP4, GGUF, Adaptive Schemes and seamless integration with Tra… ☆775 · Updated this week
- vLLM adapter for a TGIS-compatible gRPC server. ☆45 · Updated this week
- KV cache compression for high-throughput LLM inference ☆148 · Updated 10 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆349 · Updated 7 months ago
- LLM KV cache compression made easy ☆729 · Updated last week
- ☆56 · Updated last year
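Several of the listed projects (the web interface, the benchmarking tools) talk to a running vLLM server rather than to the in-process API. A hedged client-side sketch, assuming a server started locally with `vllm serve <model>` on the default port 8000 and queried through its OpenAI-compatible endpoint:

```python
# Sketch of querying a local vLLM server via its OpenAI-compatible API.
# The base URL, port, and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # must match the model being served
    messages=[{"role": "user", "content": "Summarize what vLLM does."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, the same client code works against any of the serving setups above by changing only the base URL and model name.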