erkkimon / vllama

vllama is an open source hybrid server that combines Ollama's seamless model management with vLLM's lightning-fast GPU inference, delivering a drop-in OpenAI-compatible API for optimized performance.
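Because vllama advertises a drop-in OpenAI-compatible API, existing OpenAI-style client code should work by pointing it at the local server. A minimal sketch using only the Python standard library, assuming the server listens at `http://localhost:8000/v1` and serves a model named `llama3` (the base URL, port, and model name are placeholder assumptions, not documented defaults):

```python
import json
import urllib.request

# Hypothetical endpoint and model name -- adjust to your vllama setup.
BASE_URL = "http://localhost:8000/v1"
MODEL = "llama3"


def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completion request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str) -> str:
    """POST the payload to the OpenAI-compatible endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible servers return choices[].message.content
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires a running vllama server at BASE_URL.
    print(chat("Hello!"))
```

The same request shape works with the official `openai` Python client by setting its `base_url` to the local server, which is the usual way such OpenAI-compatible backends are consumed.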
62 stars · Nov 21, 2025 · Updated 2 months ago

Alternatives and similar repositories for vllama

Users interested in vllama are comparing it to the libraries listed below.

