akx / ollama-dl
Download models from the Ollama library, without Ollama
☆119 · Updated last year
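For context, ollama-dl's approach is to fetch model data from the public Ollama registry directly, with no Ollama client installed. Below is a minimal sketch of that idea, assuming the registry at registry.ollama.ai follows the OCI distribution API (a manifest lists layer blobs, one of which holds the model weights). The function names and the `library/` default namespace here are illustrative, not ollama-dl's actual code:

```python
# Hedged sketch: pulling a model manifest from the Ollama registry without
# the Ollama client, assuming an OCI-distribution-style API. Names below
# (manifest_url, blob_url, list_layers) are illustrative, not ollama-dl's API.
import json
import urllib.request

REGISTRY = "https://registry.ollama.ai"


def _qualify(model: str) -> str:
    """Official models live under the 'library/' namespace (assumption)."""
    return model if "/" in model else f"library/{model}"


def manifest_url(model: str, tag: str = "latest") -> str:
    """Build the manifest URL for a model and tag."""
    return f"{REGISTRY}/v2/{_qualify(model)}/manifests/{tag}"


def blob_url(model: str, digest: str) -> str:
    """Build the URL for a layer blob (e.g. the weights) by content digest."""
    return f"{REGISTRY}/v2/{_qualify(model)}/blobs/{digest}"


def list_layers(model: str, tag: str = "latest"):
    """Fetch the manifest and return (media_type, digest, size) per layer."""
    req = urllib.request.Request(
        manifest_url(model, tag),
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    with urllib.request.urlopen(req) as resp:
        manifest = json.load(resp)
    return [(l["mediaType"], l["digest"], l["size"]) for l in manifest["layers"]]
```

Because blobs are addressed by digest, a downloader built this way can verify each file's checksum after fetching and resume or skip layers it already has.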
Alternatives and similar repositories for ollama-dl
Users interested in ollama-dl are comparing it to the libraries listed below.
- Tool to download models from Hugging Face Hub and convert them to GGML/GGUF for llama.cpp ☆167 · Updated 8 months ago
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint ☆183 · Updated 11 months ago
- LLM benchmark for throughput via Ollama (local LLMs) ☆321 · Updated 2 weeks ago
- Review/check GGUF files and estimate their memory usage and maximum tokens per second ☆227 · Updated last week
- LM inference server implementation based on *.cpp ☆294 · Updated last month
- Nginx proxy server in a Docker container to authenticate and proxy requests to Ollama from the public Internet via a Cloudflare Tunnel ☆156 · Updated 4 months ago
- Aggregates compute from spare GPU capacity ☆184 · Updated last week
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI ☆130 · Updated 2 years ago
- ☆108 · Updated 4 months ago
- Smart proxy for LLM APIs that enables model-specific parameter control, automatic mode switching (like Qwen3's /think and /no_think), and… ☆50 · Updated 7 months ago
- Dagger functions to import Hugging Face GGUF models into a local Ollama instance and optionally push them to ollama.com ☆118 · Updated last year
- Link your Ollama models to LM Studio ☆150 · Updated last year
- Wraps any OpenAI API interface as Responses with MCP support so it supports Codex, adding any missing stateful features. Ollama and vLLM… ☆139 · Updated 2 months ago
- Run multiple resource-heavy large models (LMs) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆87 · Updated this week
- LLM inference in C/C++ ☆104 · Updated last month
- A simple-to-use Ollama autocompletion engine with exposed options and streaming functionality ☆140 · Updated 9 months ago
- ☆210 · Updated last week
- Benchmark your local LLMs ☆51 · Updated last year
- An OpenAI-API-compatible API for chat with image input and questions about the images (i.e. multimodal) ☆267 · Updated 10 months ago
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights ☆64 · Updated 2 years ago
- On-device LLM inference powered by X-Bit quantization ☆274 · Updated this week
- Code execution utilities for Open WebUI & Ollama ☆313 · Updated last year
- An endpoint server for efficiently serving quantized open-source LLMs for code ☆58 · Updated 2 years ago
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI endpoints ☆273 · Updated 2 weeks ago
- Create Linux commands from natural language, in the shell ☆121 · Updated 4 months ago
- A proxy server for multiple Ollama instances with key security ☆561 · Updated 2 months ago
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends ☆187 · Updated 3 weeks ago
- Something similar to Apple Intelligence? ☆59 · Updated last year
- A curated collection of models ready to use with LocalAI ☆268 · Updated last year
- Serving LLMs in the HF Transformers format via a PyFlask API ☆72 · Updated last year