LLM-inference-router / vllm-router
vLLM Router (☆26, updated last year)
Alternatives and similar repositories for vllm-router:
Users interested in vllm-router are comparing it to the libraries listed below.
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang (☆47, updated 5 months ago)
- Benchmark suite for LLMs from Fireworks.ai (☆70, updated 2 months ago)
- KV cache compression for high-throughput LLM inference (☆126, updated 2 months ago)
- vLLM performance dashboard (☆27, updated 11 months ago)
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models (☆131, updated 10 months ago)
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆112, updated 4 months ago)
- A toolkit for fine-tuning, inferencing, and evaluating GreenBitAI's LLMs (☆82, updated last month)
- ☆78 (updated 3 weeks ago)
- ☆53 (updated 10 months ago)
- ☆49 (updated 4 months ago)
- The driver for LMCache core to run in vLLM (☆36, updated 2 months ago)
- Self-host LLMs with LMDeploy and BentoML (☆18, updated last month)
- ☆185 (updated 6 months ago)
- Data preparation code for the CrystalCoder 7B LLM (☆44, updated 11 months ago)
- ☆241 (updated this week)
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving (☆65, updated last year)
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (☆93, updated last year)
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray (☆125, updated 2 weeks ago)
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to ONNX/ONNX Runtime (☆165, updated 2 weeks ago)
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] (☆36, updated last year)
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs (☆87, updated this week)
- Train, tune, and run inference with the Bamba model (☆88, updated 3 months ago)
- ☆45 (updated 9 months ago)
- ☆190 (updated last week)
- ☆54 (updated 6 months ago)
- An NVIDIA AI Workbench example project that demonstrates an end-to-end model development workflow using Llamafactory (☆54, updated 6 months ago)
- ☆43 (updated 2 weeks ago)
- ☆37 (updated 6 months ago)
- ☆74 (updated 4 months ago)
- Repo hosting code and materials related to speeding up LLM inference using token merging (☆36, updated 11 months ago)