vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
26,822 stars · Updated this week
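
For context, a minimal sketch of offline text generation with vLLM's Python API; the model name and sampling settings here are illustrative, not prescriptive:

```python
# Minimal offline-inference sketch using vLLM's Python API.
# The model below is a small illustrative choice; most Hugging Face
# causal LMs supported by vLLM would work the same way.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

llm = LLM(model="facebook/opt-125m")  # loads weights and builds the engine
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```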

Related projects: