erkkimon / vllama
vllama is an open source hybrid server that combines Ollama's seamless model management with vLLM's lightning-fast GPU inference, delivering a drop-in OpenAI-compatible API for optimized performance.
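Since vllama exposes an OpenAI-compatible API, an existing OpenAI-style client should be able to point at it unchanged. The sketch below builds a standard chat-completion payload with only the standard library; the endpoint URL and model name are assumptions for illustration, not confirmed vllama defaults.

```python
import json

# Hypothetical local endpoint; vllama's actual host/port may differ.
VLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# "llama3" is a placeholder model name for this sketch.
payload = build_chat_request("llama3", "Hello!")
print(json.dumps(payload))
# Sending it would be a plain HTTP POST of this JSON to VLLAMA_URL
# with a "Content-Type: application/json" header.
```

Because the request shape follows the OpenAI chat-completions convention, swapping an app from a hosted OpenAI endpoint to vllama should only require changing the base URL.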
65 · Nov 21, 2025 · Updated 3 months ago

Alternatives and similar repositories for vllama

Users interested in vllama are comparing it to the libraries listed below.
