vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
30,423 stars · Updated this week
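Since the listing gives only the one-line description, here is a minimal offline-inference sketch following vLLM's documented quickstart; the model name `facebook/opt-125m`, the prompt, and the sampling settings are illustrative choices, not part of the listing.

```python
# Minimal offline inference with vLLM, per its documented quickstart.
# Model name, prompt, and sampling settings are illustrative.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# LLM loads the model weights and manages KV-cache memory internally.
llm = LLM(model="facebook/opt-125m")

# generate() batches the prompts for high-throughput decoding.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)
```

vLLM also ships an OpenAI-compatible HTTP server (`vllm serve <model>`) for online serving, which is the other common entry point besides the offline `LLM` class shown above.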

Related projects

Alternatives and complementary repositories for vllm