Snowflake-Labs / vllm
☆16 · Updated 3 weeks ago
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below:
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated 3 weeks ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated 3 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- ☆53 · Updated last year
- ☆47 · Updated last year
- A collection of reproducible inference engine benchmarks ☆38 · Updated 7 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆160 · Updated this week
- ☆31 · Updated last year
- Easy and Efficient Quantization for Transformers ☆203 · Updated 5 months ago
- LM engine is a library for pretraining/finetuning LLMs ☆77 · Updated this week
- ☆52 · Updated last year
- A Lossless Compression Library for AI pipelines ☆289 · Updated 5 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last week
- Cray-LM unified training and inference stack. ☆22 · Updated 10 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆257 · Updated this week
- Simple and efficient DeepSeek V3 SFT using pipeline parallelism and expert parallelism, with both FP8 and BF16 training ☆101 · Updated 4 months ago
- vLLM adapter for a TGIS-compatible gRPC server. ☆45 · Updated this week
- ☆21 · Updated 9 months ago
- Train, tune, and run inference with the Bamba model ☆137 · Updated 6 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated this week
- Make Triton easier ☆49 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆27 · Updated 2 years ago
- ☆48 · Updated last year
- ☆112 · Updated 3 weeks ago
- Google TPU optimizations for transformers models ☆125 · Updated 10 months ago
- Simple high-throughput inference library ☆152 · Updated 7 months ago
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated 10 months ago
- PyTorch Distributed native training library for LLMs/VLMs with out-of-the-box Hugging Face support ☆202 · Updated last week