aws-neuron / upstreaming-to-vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
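This fork tracks the upstream vLLM project, whose Python API centers on the `LLM` and `SamplingParams` classes. Below is a minimal offline-inference sketch assuming the standard upstream API; the model name and sampling settings are placeholders, and any Neuron-specific device or configuration options in this fork may differ.

```python
# Minimal offline-inference sketch using the standard vLLM API.
# Model name and sampling settings are placeholders; Neuron-specific
# options in the aws-neuron fork may differ.
from vllm import LLM, SamplingParams

prompts = ["Explain what a KV cache is in one sentence."]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a model and run batched generation.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```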

Alternatives and similar repositories for upstreaming-to-vllm:

Users interested in upstreaming-to-vllm are comparing it to the libraries listed below.