vllm-project / ci-infra
This repo hosts code for vLLM CI & Performance Benchmark infrastructure.
☆26 · Updated this week
Alternatives and similar repositories for ci-infra
Users interested in ci-infra are comparing it to the libraries listed below.
- PyTorch Distributed native training library for LLMs/VLMs with OOTB Hugging Face support ☆187 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆321 · Updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆130 · Updated 2 months ago
- LM engine is a library for pretraining/finetuning LLMs ☆77 · Updated last week
- Perplexity open source garden for inference technology ☆274 · Updated last week
- Common recipes to run vLLM ☆245 · Updated this week
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆132 · Updated last week
- vLLM adapter for a TGIS-compatible gRPC server ☆45 · Updated this week
- An early research-stage MoE load balancer based on linear programming ☆415 · Updated last week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆234 · Updated this week
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated this week
- A collection of all available inference solutions for LLMs ☆92 · Updated 8 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆132 · Updated 11 months ago
- ☆267 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆61 · Updated last week
- Pipeline parallelism for the minimalist ☆37 · Updated 3 months ago
- vLLM performance dashboard ☆38 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆262 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 8 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated 2 months ago
- ☆42 · Updated this week
- KV cache compression for high-throughput LLM inference ☆143 · Updated 9 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆347 · Updated 6 months ago
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training ☆207 · Updated 6 months ago
- HuggingFace conversion and training library for Megatron-based models ☆228 · Updated this week
- PyTorch-native post-training at scale ☆546 · Updated last week
- ☆317 · Updated this week
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆201 · Updated last week
- Salesforce AI Research's open diffusion language model ☆54 · Updated last month
- torchcomms: a modern PyTorch communications API ☆295 · Updated this week