LLM-inference-router / vllm-router
vLLM Router
☆18 · Updated 10 months ago
Alternatives and similar repositories for vllm-router:
Users interested in vllm-router are comparing it to the libraries listed below.
- Data preparation code for the CrystalCoder 7B LLM ☆44 · Updated 8 months ago
- Train, tune, and run inference with the Bamba model ☆80 · Updated 2 weeks ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆105 · Updated last month
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆36 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆128 · Updated 7 months ago
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs ☆80 · Updated last week
- vLLM performance dashboard ☆20 · Updated 9 months ago
- Repo hosting code and materials on speeding up LLM inference via token merging ☆34 · Updated 9 months ago
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs" ☆13 · Updated 4 months ago
- FuseAI Project ☆80 · Updated this week
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, plus easy export to ONNX/ONNX Runtime ☆155 · Updated last week
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆31 · Updated 2 months ago
- Self-host LLMs with LMDeploy and BentoML ☆17 · Updated last month
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆145 · Updated 7 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆139 · Updated 4 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆64 · Updated last month
- Repository for sparse fine-tuning of LLMs via a modified version of MosaicML's llmfoundry ☆40 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆38 · Updated 11 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 7 months ago
- A pipeline for LLM knowledge distillation ☆85 · Updated this week
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆114 · Updated 7 months ago
- LLM Serving Performance Evaluation Harness ☆66 · Updated 5 months ago
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆76 · Updated 2 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 4 months ago