EmbeddedLLM / vllm

vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
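For context, here is a minimal sketch of vLLM's offline-inference Python API; the model name is illustrative, and any Hugging Face causal LM supported by vLLM would work:

```python
from vllm import LLM, SamplingParams

# Prompts and decoding settings for a quick smoke test.
prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Load a small model (weights are downloaded on first run).
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in one batched call.
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```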

Alternatives and similar repositories for vllm

Users interested in vllm are comparing it to the libraries listed below.
