erkkimon / vllama
vllama is an open-source hybrid server that combines Ollama's seamless model management with vLLM's fast GPU inference, exposing a drop-in OpenAI-compatible API.
68 · Nov 21, 2025 · Updated 4 months ago
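Since vllama advertises a drop-in OpenAI-compatible API, a client would talk to it like any OpenAI-style endpoint. The sketch below builds a standard chat-completion request payload; the base URL, port, and model name are assumptions for illustration, not taken from vllama's documentation.

```python
import json

# Assumed endpoint: vllama listening locally and serving the standard
# OpenAI /v1/chat/completions route. Port and model name are hypothetical.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("llama3", "Hello!")

# Sending it requires a running server, e.g.:
# import urllib.request
# req = urllib.request.Request(
#     f"{BASE_URL}/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# resp = json.load(urllib.request.urlopen(req))
print(json.dumps(payload))
```

Because the API follows the OpenAI wire format, existing OpenAI client libraries should work by pointing their base URL at the vllama server.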
