akx / ollama-dl
Download models from the Ollama library, without Ollama
☆100 · Updated 10 months ago
Alternatives and similar repositories for ollama-dl
Users interested in ollama-dl are comparing it to the libraries listed below.
- Tool to download models from the Hugging Face Hub and convert them to GGML/GGUF for llama.cpp ☆160 · Updated 5 months ago
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint. ☆183 · Updated 8 months ago
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆295 · Updated last month
- Review/check GGUF files and estimate the memory usage and maximum tokens per second. ☆207 · Updated last month
- Lightweight inference server for OpenVINO ☆212 · Updated this week
- ☆209 · Updated 3 weeks ago
- An OpenAI API compatible API for chat with image input and questions about the images, aka multimodal. ☆260 · Updated 6 months ago
- LM inference server implementation based on *.cpp. ☆276 · Updated last month
- LLM inference in C/C++ ☆102 · Updated last month
- Dagger functions to import Hugging Face GGUF models into a local Ollama instance and optionally push them to ollama.com. ☆118 · Updated last year
- A platform to self-host AI on easy mode ☆171 · Updated this week
- Smart proxy for LLM APIs that enables model-specific parameter control, automatic mode switching (like Qwen3's /think and /no_think), and… ☆50 · Updated 4 months ago
- ☆101 · Updated last month
- A proxy server for multiple Ollama instances with key security ☆498 · Updated last week
- Automatically quant GGUF models ☆203 · Updated last week
- Wraps any OpenAI API interface as a Responses API with MCP support so it works with Codex, adding any missing stateful features. Ollama and Vllm… ☆92 · Updated 3 months ago
- Code execution utilities for Open WebUI & Ollama ☆297 · Updated 10 months ago
- Link your Ollama models to LM Studio ☆143 · Updated last year
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆82 · Updated 2 weeks ago
- Nginx proxy server in a Docker container to authenticate & proxy requests to Ollama from the public Internet via Cloudflare Tunnel ☆141 · Updated 3 weeks ago
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆80 · Updated last year
- This small API downloads and exposes access to NeuML's txtai-wikipedia and full Wikipedia datasets, taking in a query and returning full … ☆100 · Updated last month
- llmbasedos - Local-First OS Where Your AI Agents Wake Up and Work ☆279 · Updated last month
- Export and back up Ollama models into GGUF and Modelfile ☆82 · Updated last year
- A simple-to-use Ollama autocompletion engine with options exposed and streaming functionality ☆136 · Updated 5 months ago
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆162 · Updated 2 months ago
- Something similar to Apple Intelligence? ☆61 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated last year
- An OpenAI API compatible moderations server for checking whether text is potentially harmful. ☆10 · Updated last year
- Web UI for ExLlamaV2 ☆512 · Updated 7 months ago