Snowflake-Labs / vllm
☆15 · Updated last week
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below.
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆210 · Updated this week
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆83 · Updated last week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated 4 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] (see the prepacking sketch after this list) ☆60 · Updated 11 months ago
- vLLM adapter for a TGIS-compatible gRPC server. ☆39 · Updated this week
- ☆31 · Updated 10 months ago
- A collection of reproducible inference engine benchmarks ☆32 · Updated 4 months ago
- Experiments with inference on Llama ☆104 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated 11 months ago
- ☆48 · Updated last year
- ☆92 · Updated 3 weeks ago
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated 2 years ago
- XTR: Rethinking the Role of Token Retrieval in Multi-Vector Retrieval ☆55 · Updated last year
- Code repository for the paper "AdANNS: A Framework for Adaptive Semantic Search" ☆65 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging (see the token-merging sketch after this list) ☆36 · Updated last month
- ☆49 · Updated 7 months ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers (see the TGI call sketch after this list). ☆33 · Updated last week
- Cray-LM unified training and inference stack. ☆22 · Updated 7 months ago
- The backend behind the LLM-Perf Leaderboard ☆10 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆60 · Updated 2 weeks ago
- Make Triton easier ☆47 · Updated last year
- ☆37 · Updated 2 weeks ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (see the speculative-decoding sketch after this list) ☆124 · Updated 9 months ago
- ☆56 · Updated 2 months ago
- ☆50 · Updated 10 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity (see the 2:4 pruning sketch after this list) ☆82 · Updated last year
- ☆46 · Updated last year
- ☆21 · Updated 6 months ago
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. ☆17 · Updated 3 weeks ago
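
A minimal sketch of the prepacking idea behind the AISTATS entry above, not the paper's actual code: instead of padding every prompt to the longest length in the batch, greedily pack several short prompts into one row and restart position ids at each prompt boundary. The `prepack` helper and the first-fit strategy here are illustrative assumptions:

```python
# Hypothetical helper, not the paper's API: greedy first-fit packing of
# tokenized prompts into rows of at most `max_len` tokens, with per-prompt
# position ids that restart at 0 at every prompt boundary.
def prepack(prompts, max_len):
    rows, positions, spans = [], [], []
    for toks in sorted(prompts, key=len, reverse=True):
        for row, pos, sp in zip(rows, positions, spans):
            if len(row) + len(toks) <= max_len:
                sp.append((len(row), len(row) + len(toks)))
                pos.extend(range(len(toks)))   # positions restart per prompt
                row.extend(toks)
                break
        else:                                  # no row had room: open a new one
            rows.append(list(toks))
            positions.append(list(range(len(toks))))
            spans.append([(0, len(toks))])
    return rows, positions, spans

rows, positions, spans = prepack([[1, 2, 3], [4, 5], [6, 7, 8, 9, 10]], max_len=8)
print(rows)    # [[6, 7, 8, 9, 10, 1, 2, 3], [4, 5]]
print(spans)   # [[(0, 5), (5, 8)], [(0, 2)]]
```

The returned spans are what a real implementation would use to build a block-diagonal attention mask, so packed prompts cannot attend to one another.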
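Likewise, a toy illustration of token merging for the repo referenced above, under the simplifying assumption that only the most cosine-similar *adjacent* pair is merged per step (real methods such as bipartite soft matching are more involved):

```python
import numpy as np

def merge_once(x):
    """Merge the most cosine-similar adjacent pair of token embeddings into their mean."""
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = (xn[:-1] * xn[1:]).sum(axis=1)   # cosine similarity of neighbouring tokens
    i = int(np.argmax(sims))                # best pair is (i, i + 1)
    merged = (x[i] + x[i + 1]) / 2
    return np.vstack([x[:i], merged[None, :], x[i + 2:]])

x = np.random.default_rng(0).normal(size=(6, 8))   # 6 toy "tokens" of dimension 8
print(merge_once(x).shape)                          # (5, 8): one token fewer
```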
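For the TGI/TEI wrapper entry, the essence is a thin HTTP client. The sketch below targets TGI's documented `/generate` REST endpoint; the `tgi_generate` helper name and defaults are ours, not the wrapper's actual API:

```python
import requests

def tgi_generate(base_url: str, prompt: str, max_new_tokens: int = 32) -> str:
    """POST a prompt to a running TGI server's /generate endpoint."""
    resp = requests.post(
        f"{base_url}/generate",
        json={"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["generated_text"]

# e.g. tgi_generate("http://localhost:8080", "What does 2:4 sparsity mean?")
```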
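The speculative-decoding entry builds on a draft-then-verify loop. Here is a greedy toy version with stand-in `draft` and `target` callables (both hypothetical; real systems verify proposals against the target model's token distribution rather than a single greedy token):

```python
def speculative_decode(prefix, draft, target, k=4, new_tokens=8):
    """Draft-then-verify loop: the cheap draft proposes k tokens, the target
    keeps the longest agreeing prefix, then always contributes one token itself."""
    out = list(prefix)
    goal = len(out) + new_tokens
    while len(out) < goal:
        proposal = []
        for _ in range(k):                       # 1) draft proposes k tokens
            proposal.append(draft(out + proposal))
        accepted = 0
        for i in range(k):                       # 2) target verifies them greedily
            if target(out + proposal[:i]) == proposal[i]:
                accepted += 1
            else:
                break
        out += proposal[:accepted]
        out.append(target(out))                  # 3) one guaranteed target token
    return out[:goal]

# Toy stand-ins: both models continue the sequence with "last token + 1".
step = lambda s: s[-1] + 1
print(speculative_decode([0], draft=step, target=step))
# [0, 1, 2, 3, 4, 5, 6, 7, 8]
```

When the draft agrees with the target, each loop iteration emits up to k + 1 tokens for roughly one target-model pass, which is where the throughput gain comes from.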
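Finally, the 2:4 sparsity entry refers to the structured pattern NVIDIA's sparse tensor cores accelerate: at most two nonzeros in every group of four weights. A minimal magnitude-based pruner, illustrative only and unrelated to the repo's actual kernels:

```python
import numpy as np

def prune_2_4(w):
    """Zero the two smallest-magnitude entries in every contiguous group of four."""
    groups = w.reshape(-1, 4).copy()
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]   # indices of the two smallest
    np.put_along_axis(groups, drop, 0.0, axis=1)
    return groups.reshape(w.shape)

w = np.array([0.9, -0.1, 0.05, 0.7, 0.2, -0.8, 0.3, 0.01], dtype=np.float32)
print(prune_2_4(w))   # [ 0.9   0.    0.    0.7   0.   -0.8   0.3   0.  ]
```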