akx / ollama-dl
Download models from the Ollama library, without Ollama
☆117 · Updated last year
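For context, downloading a model without the Ollama client amounts to talking to the model registry directly. Below is a minimal sketch, assuming registry.ollama.ai exposes a standard OCI-style distribution layout; the host, paths, media type, and the `fetch_manifest` helper are illustrative assumptions, not ollama-dl's actual code.

```python
# Minimal sketch: fetch a model manifest from the Ollama registry without
# the Ollama client. Host, paths, and media type below are assumptions
# about an OCI-style registry layout, not ollama-dl's actual code.
import json
import urllib.request

REGISTRY = "https://registry.ollama.ai"  # assumed public registry host


def fetch_manifest(name: str, tag: str = "latest") -> dict:
    """Fetch the manifest for a library model (assumed OCI-style endpoint)."""
    url = f"{REGISTRY}/v2/library/{name}/manifests/{tag}"
    req = urllib.request.Request(
        url,
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    manifest = fetch_manifest("llama3")
    for layer in manifest.get("layers", []):
        # Each layer is a blob (e.g. the GGUF weights) that could then be
        # fetched from {REGISTRY}/v2/library/<name>/blobs/<digest>
        print(layer.get("mediaType"), layer.get("digest"), layer.get("size"))
```

Each layer digest listed in the manifest would then be downloaded as a blob and written to disk, which is roughly the work a downloader like this automates.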
Alternatives and similar repositories for ollama-dl
Users interested in ollama-dl are comparing it to the libraries listed below.
- Tool to download models from Huggingface Hub and convert them to GGML/GGUF for llama.cpp · ☆166 · Updated 7 months ago
- Review/check GGUF files and estimate memory usage and maximum tokens per second · ☆221 · Updated 4 months ago
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint · ☆183 · Updated 10 months ago
- Automatically quantize GGUF models · ☆219 · Updated 2 months ago
- LLM benchmark for throughput via Ollama (local LLMs) · ☆319 · Updated 2 weeks ago
- A platform to self-host AI on easy mode · ☆181 · Updated 2 weeks ago
- Dagger functions to import Hugging Face GGUF models into a local Ollama instance and optionally push them to ollama.com · ☆118 · Updated last year
- LM inference server implementation based on *.cpp · ☆294 · Updated last month
- An OpenAI-API-compatible API for chat with image input and questions about the images (i.e. multimodal) · ☆267 · Updated 9 months ago
- Inference engine for Intel devices; serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding and rerank models over OpenAI endpoints · ☆266 · Updated this week
- Run multiple resource-heavy large models (LMs) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… · ☆85 · Updated last week
- ☆109 · Updated 4 months ago
- LLM inference in C/C++ · ☆104 · Updated last week
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI · ☆130 · Updated 2 years ago
- Distributed inference for MLX LLMs · ☆99 · Updated last year
- Easily view and modify JSON datasets for large language models · ☆85 · Updated 7 months ago
- Wraps any OpenAI API interface as Responses with MCP support so it supports Codex, adding any missing stateful features; Ollama and Vllm… · ☆139 · Updated last month
- ☆87 · Updated 2 weeks ago
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights · ☆64 · Updated 2 years ago
- ☆210 · Updated 3 months ago
- Something similar to Apple Intelligence? · ☆59 · Updated last year
- InferX: Inference as a Service platform · ☆143 · Updated this week
- Export and back up Ollama models into GGUF and Modelfile · ☆89 · Updated last year
- Prompt Jinja2 templates for LLMs · ☆35 · Updated 5 months ago
- Code execution utilities for Open WebUI & Ollama · ☆310 · Updated last year
- klmbr: a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs · ☆86 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs · ☆605 · Updated last week
- Smart proxy for LLM APIs that enables model-specific parameter control, automatic mode switching (like Qwen3's /think and /no_think), and… · ☆51 · Updated 7 months ago
- Docs for GGUF quantization (unofficial) · ☆340 · Updated 5 months ago
- ☆94 · Updated last year