mostlygeek / llama-swap
Reliable model swapping for any local OpenAI/Anthropic-compatible server (llama.cpp, vLLM, etc.)
☆2,374 · Updated this week
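Since llama-swap fronts its backends with an OpenAI-compatible API, the `model` field of a standard chat-completion request is what tells the proxy which backend to run. The sketch below only builds such a request body; the endpoint URL and model name are illustrative assumptions, not taken from the project's documentation.

```python
import json

# Assumed local proxy endpoint (OpenAI-compatible, per the project description).
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for an OpenAI-style chat completion call.

    The proxy maps the "model" name to one of its configured backend
    processes and swaps it in before forwarding the request.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload).encode("utf-8")

body = build_chat_request("qwen2.5-7b", "Hello!")
print(json.loads(body)["model"])  # the name the proxy would route on
```

Because the request shape is the stock OpenAI one, any OpenAI client library can be pointed at the proxy's base URL unchanged; only the model name differs per backend.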
Alternatives and similar repositories for llama-swap
Users interested in llama-swap are comparing it to the repositories listed below.
- llama.cpp fork with additional SOTA quants and improved performance ☆1,605 · Updated this week
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,129 · Updated this week
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆88 · Feb 7, 2026 · Updated last week
- One command brings a complete pre-wired LLM stack with hundreds of services to explore. ☆2,424 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,440 · Dec 9, 2025 · Updated 2 months ago
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. ☆9,462 · Updated this week
- Large-scale LLM inference engine ☆1,647 · Jan 21, 2026 · Updated 3 weeks ago
- Llama.cpp runner/swapper and proxy that emulates LMStudio / Ollama backends ☆51 · Aug 21, 2025 · Updated 5 months ago
- Manifold is an experimental platform for enabling long-horizon workflow automation using teams of AI assistants. ☆479 · Feb 6, 2026 · Updated last week
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 Alternative to projects like llm-d, Docker Model R… ☆1,447 · Feb 6, 2026 · Updated last week
- Fast, flexible LLM inference ☆6,508 · Updated this week
- Optimizing inference proxy for LLMs ☆3,324 · Jan 28, 2026 · Updated 2 weeks ago
- VS Code extension for LLM-assisted code/text completion ☆1,150 · Jan 18, 2026 · Updated 3 weeks ago
- Delivery infrastructure for agentic apps - Plano is an AI-native proxy and data plane that offloads plumbing work, so you stay focused on… ☆5,077 · Updated this week
- LLM inference in C/C++ ☆94,823 · Updated this week
- Docker/podman container for llama.cpp/vllm/exllamav{2,3} orchestrated using llama-swap ☆16 · Feb 6, 2026 · Updated last week
- ☆90 · Dec 9, 2025 · Updated 2 months ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆633 · Updated this week
- InferX: Inference as a Service Platform ☆156 · Feb 7, 2026 · Updated last week
- LLM frontend in a single HTML file ☆694 · Dec 27, 2025 · Updated last month
- WilmerAI is one of the oldest LLM semantic routers. It uses multi-layer prompt routing and complex workflows to allow you to not only cre… ☆802 · Jan 5, 2026 · Updated last month
- Speech-to-speech AI assistant with natural conversation flow, mid-speech interruption, vision capabilities and AI-initiated follow-ups. F… ☆285 · Apr 14, 2025 · Updated 10 months ago
- tl/dw (Too Long, Didn't Watch): Your Personal Research Multi-Tool - a naive attempt at 'A Young Lady's Illustrated Primer' (Open Source N… ☆1,250 · Updated this week
- ☆230 · May 7, 2025 · Updated 9 months ago
- WebAssembly binding for llama.cpp - enabling on-browser LLM inference ☆993 · Dec 17, 2025 · Updated last month
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · May 16, 2024 · Updated last year
- ☆51 · Oct 10, 2025 · Updated 4 months ago
- Web UI for ExLlamaV2 ☆513 · Feb 5, 2025 · Updated last year
- ☆2,935 · Updated this week
- RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for in… ☆2,577 · Updated this week
- ☆178 · Aug 10, 2025 · Updated 6 months ago
- Easy-to-use interface for the Whisper model, optimized for all GPUs! ☆463 · Jan 13, 2026 · Updated last month
- The Fastest Way to Fine-Tune LLMs Locally ☆333 · Dec 18, 2025 · Updated last month
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM. ☆51,922 · Updated this week
- Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our discord: https… ☆2,154 · Updated this week
- Claraverse is an open-source, privacy-focused ecosystem to replace ChatGPT, Claude, N8N, ImageGen with your own hosted LLM, keys, and compute.… ☆3,706 · Jan 27, 2026 · Updated 2 weeks ago
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆282 · Jan 6, 2026 · Updated last month
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference. ☆2,822 · Feb 4, 2026 · Updated last week
- Python bindings for llama.cpp ☆9,971 · Aug 15, 2025 · Updated 5 months ago