akx / ollama-dl
Download models from the Ollama library, without Ollama
☆115 · Updated last year
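For context on what the tool automates: the Ollama library is served from an OCI-style registry, so a model can in principle be fetched without the Ollama client by requesting its manifest and then downloading each layer blob. The following is a minimal sketch of that URL scheme only; the endpoint layout is assumed from the OCI distribution spec, not taken from this repository's code, and the real tool additionally handles layer naming, resume, and output files:

```python
# Illustrative sketch (assumption): registry.ollama.ai follows the OCI
# distribution API, with manifests under /v2/<namespace>/<model>/manifests/<tag>
# and layer blobs under /v2/<namespace>/<model>/blobs/<digest>.

REGISTRY = "https://registry.ollama.ai"


def manifest_url(model: str, tag: str = "latest", namespace: str = "library") -> str:
    """Manifest endpoint for a model tag, per the OCI distribution layout."""
    return f"{REGISTRY}/v2/{namespace}/{model}/manifests/{tag}"


def blob_url(model: str, digest: str, namespace: str = "library") -> str:
    """Blob (layer) endpoint; the digest comes from the manifest's layer list."""
    return f"{REGISTRY}/v2/{namespace}/{model}/blobs/{digest}"


print(manifest_url("llama3"))
```

Fetching the manifest (a JSON document) yields a list of layers with digests; each layer is then downloaded from its blob URL.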
Alternatives and similar repositories for ollama-dl
Users interested in ollama-dl are comparing it to the repositories listed below.
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint. ☆183 · Updated 10 months ago
- Tool to download models from Huggingface Hub and convert them to GGML/GGUF for llama.cpp. ☆162 · Updated 7 months ago
- Review/check GGUF files and estimate the memory usage and maximum tokens per second. ☆220 · Updated 3 months ago
- LLM benchmark for throughput via Ollama (local LLMs). ☆313 · Updated 3 months ago
- A platform to self-host AI on easy mode. ☆178 · Updated 2 weeks ago
- LLM inference in C/C++. ☆103 · Updated last week
- Code execution utilities for Open WebUI & Ollama. ☆309 · Updated last year
- LM inference server implementation based on *.cpp. ☆293 · Updated last week
- An OpenAI-API-compatible API for chat with image input and questions about the images, i.e. multimodal. ☆266 · Updated 9 months ago
- Smart proxy for LLM APIs that enables model-specific parameter control, automatic mode switching (like Qwen3's /think and /no_think), and… ☆51 · Updated 6 months ago
- A proxy server for multiple Ollama instances with key security. ☆540 · Updated 3 weeks ago
- Dagger functions to import Hugging Face GGUF models into a local Ollama instance and optionally push them to ollama.com. ☆119 · Updated last year
- Something similar to Apple Intelligence? ☆61 · Updated last year
- ☆108 · Updated 3 months ago
- Link your Ollama models to LM Studio. ☆146 · Updated last year
- Create Linux commands from natural language, in the shell. ☆116 · Updated 3 months ago
- ☆209 · Updated 2 months ago
- Wraps any OpenAI API interface as Responses with MCP support so it supports Codex, adding any missing stateful features. Ollama and Vllm… ☆137 · Updated last month
- Run multiple resource-heavy large models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆84 · Updated last week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago
- No-messing-around sh client for llama.cpp's server. ☆30 · Updated last year
- Nginx proxy server in a Docker container to authenticate and proxy requests to Ollama from the public internet via Cloudflare Tunnel. ☆152 · Updated 3 months ago
- Automatically quant GGUF models. ☆217 · Updated last month
- An OpenAI-API-compatible moderations server for checking whether text is potentially harmful. ☆10 · Updated last year
- RetroChat is a powerful command-line interface for interacting with various AI language models. It provides a seamless experience for eng… ☆84 · Updated 4 months ago
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model di… ☆119 · Updated last week
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆99 · Updated 5 months ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI endpoints. ☆254 · Updated this week
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆130 · Updated 2 years ago
- Replace OpenAI with Llama.cpp automagically. ☆324 · Updated last year