Snowflake-Labs / vllm
☆16, updated 2 months ago
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below.
- Benchmark suite for LLMs from Fireworks.ai (☆83, updated last week)
- A collection of reproducible inference engine benchmarks (☆37, updated 7 months ago)
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM (☆70, updated this week)
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) (☆252, updated this week)
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry (☆42, updated last year)
- Cray-LM unified training and inference stack (☆22, updated 9 months ago)
- The backend behind the LLM-Perf Leaderboard (☆11, updated last year)
- PyTorch Distributed native training library for LLMs/VLMs with OOTB Hugging Face support (☆179, updated this week)
- ☆52, updated last year
- ☆48, updated last year
- vLLM adapter for a TGIS-compatible gRPC server (☆44, updated this week)
- Make Triton easier (☆48, updated last year)
- Simple high-throughput inference library (☆149, updated 6 months ago)
- Google TPU optimizations for transformers models (☆122, updated 10 months ago)
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs (☆93, updated this week); see the usage sketch after this list
- ☆47, updated last year
- ☆21, updated 8 months ago
- ☆31, updated last year
- Experiments with inference on Llama (☆103, updated last year)
- Code repository for the paper "AdANNS: A Framework for Adaptive Semantic Search" (☆65, updated 2 years ago)
- IBM development fork of https://github.com/huggingface/text-generation-inference (☆62, updated 2 months ago)
- ArcticInference: vLLM plugin for high-throughput, low-latency inference (☆300, updated this week)
- DPO, but faster 🚀 (☆46, updated 11 months ago)
- ☆121, updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs (☆62, updated last month)
- Example ML projects that use the Determined library (☆32, updated last year)
- Easy and Efficient Quantization for Transformers (☆202, updated 5 months ago)
- RWKV-7: Surpassing GPT (☆100, updated last year)
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] (☆60, updated last year)
- LM engine is a library for pretraining/finetuning LLMs (☆77, updated this week)
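
Several of the entries above are plugins, adapters, or benchmarks built around vLLM's offline inference API. As a point of reference, here is a minimal sketch of that API, assuming a recent vLLM release is installed and using a small placeholder model name (`facebook/opt-125m`); any Hugging Face-compatible model could be substituted.

```python
# Minimal vLLM offline-inference sketch (assumes `pip install vllm` and a supported GPU).
from vllm import LLM, SamplingParams

# Load the model once; vLLM manages KV-cache memory internally.
llm = LLM(model="facebook/opt-125m")  # placeholder model name

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# A batch of prompts is processed with continuous batching for throughput.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```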