Produce your own Dynamic 3.0 quants with optimal accuracy and state-of-the-art quantization performance. Input a target size and the toolchain creates a GGUF recipe tuned to your hardware within seconds: flexible model sizing and the lowest achievable perplexity/KLD, for GGUF enthusiasts who want precise, automated dynamic-quant production.
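The blurb above scores quant recipes by perplexity and KL divergence (KLD) against the full-precision model. As background, here is a minimal, self-contained sketch of discrete KL divergence; the function name and the example distributions are illustrative and are not part of GGUF-Tool-Suite's API.

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for two discrete probability distributions.

    Intuitively: how much information is lost when q (e.g. a quantized
    model's next-token distribution) is used to approximate p (the
    full-precision model's distribution). Zero means a perfect match.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical distributions: divergence is exactly zero.
p = [0.7, 0.2, 0.1]
print(kl_divergence(p, p))  # 0.0

# A hypothetical quantized model that slightly shifts probability mass.
q = [0.65, 0.25, 0.1]
print(round(kl_divergence(p, q), 4))
```

Lower KLD across a test corpus indicates the quantized model behaves more like the original, which is the sense in which "lowest achievable perplexity/KLD" is a quality target.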
☆108 · Apr 15, 2026 · Updated this week
Alternatives and similar repositories for GGUF-Tool-Suite
Users interested in GGUF-Tool-Suite are comparing it to the libraries listed below.
- llama.cpp fork with additional SOTA quants and improved performance · ☆22 · Updated this week
- Inference Llama 2 in one file of pure Haskell (a port of llama2.c from Andrej Karpathy) · ☆14 · Oct 17, 2025 · Updated 6 months ago
- A minimal CLI tool for piping anything into an LLM. · ☆21 · Jan 1, 2026 · Updated 3 months ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs · ☆767 · Updated this week
- ☆55 · Oct 10, 2025 · Updated 6 months ago
- Esobold - A fork of KoboldCPP with agent shenanigans and server-side saving! · ☆25 · Updated this week
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… · ☆167 · Updated this week
- Lightweight C inference for Qwen3 GGUF. Multiturn prefix caching & batch processing. · ☆25 · Sep 1, 2025 · Updated 7 months ago
- A proxy that hosts multiple single-model runners such as llama.cpp and vLLM · ☆13 · May 30, 2025 · Updated 10 months ago
- llama.cpp fork with additional SOTA quants and improved performance · ☆2,095 · Updated this week
- A collection of Python tools used to create GGUF files and upload them to Hugging Face · ☆17 · Mar 28, 2026 · Updated 3 weeks ago
- A cross-platform app that gives you the best UX to run models locally or remotely on your own hardware · ☆78 · Mar 22, 2026 · Updated 3 weeks ago
- ☆32 · Jul 20, 2024 · Updated last year
- Kubernetes operator for local LLM inference with llama.cpp, vLLM, and TGI - multi-GPU, autoscaling, air-gapped, production-ready · ☆48 · Apr 11, 2026 · Updated last week
- Loader extension for tabbyAPI in SillyTavern · ☆26 · Jun 30, 2025 · Updated 9 months ago
- 33B Chinese LLM, DPO QLORA, 100K context, AirLLM 70B inference with single 4GB GPU · ☆13 · May 5, 2024 · Updated last year
- Official code for SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models (NeurIPS 2023) · ☆13 · Mar 4, 2024 · Updated 2 years ago
- ik_llama.cpp's Thireus fork with release builds for macOS/Windows/Ubuntu CPU, Vulkan and CUDA · ☆98 · Updated this week
- Extension for Forge-based UIs (Forge, reForge, etc) and ComfyUI to replace CFG with Negative Rejection Steering · ☆16 · Feb 14, 2026 · Updated 2 months ago
- ☆19 · Jul 4, 2025 · Updated 9 months ago
- world's stupidest moe llm in 103M parameters · ☆20 · Jul 18, 2025 · Updated 9 months ago
- CLI tool to quantize GGUF, GPTQ, AWQ, HQQ and EXL2 models · ☆79 · Dec 17, 2024 · Updated last year
- Port of Facebook's LLaMA model in C/C++ · ☆13 · Updated this week
- Make YouTube videos readable. Local-first Markdown summaries with Ollama, with cloud-provider support. · ☆63 · Dec 28, 2025 · Updated 3 months ago
- ☆30 · Nov 5, 2024 · Updated last year
- ☆74 · Jun 20, 2025 · Updated 9 months ago
- Load and run Llama from safetensors files in C · ☆15 · Oct 24, 2024 · Updated last year
- Thank you LenAnderson I am yoinking this! · ☆25 · Apr 11, 2026 · Updated last week
- ☆67 · Aug 13, 2025 · Updated 8 months ago
- The easiest & fastest way to run LLMs in your home lab · ☆88 · Feb 23, 2026 · Updated last month
- A general 2-8 bits quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and export to onnx/onnx-runtime easily. · ☆188 · Mar 23, 2026 · Updated 3 weeks ago
- automatically quant GGUF models · ☆223 · Dec 23, 2025 · Updated 3 months ago
- ☆22 · Oct 13, 2025 · Updated 6 months ago
- First-pass generation with an old SD model, LoRAs, embeddings, etc. · ☆32 · Jun 12, 2024 · Updated last year
- A web app to explore topics using an LLM (less typing and more clicks) · ☆68 · Mar 15, 2026 · Updated last month
- Desktop application for instant AI-powered text transformation. Translate, correct, summarize, and change the tone of any text, anywhere,… · ☆30 · Dec 29, 2025 · Updated 3 months ago
- ☆33 · Updated this week
- A dynamic multi-expert AI architecture running on a single consumer GPU (RTX 3060). · ☆36 · Dec 2, 2025 · Updated 4 months ago
- Sparse Inferencing for transformer based LLMs · ☆218 · Mar 25, 2026 · Updated 3 weeks ago