Thireus / GGUF-Tool-Suite
Input your VRAM and RAM, and the toolchain will produce a GGUF model tuned to your system within seconds — flexible model sizing and the lowest achievable perplexity for advanced users seeking precise, automated GGUF dynamic quant production.
☆51 · Updated this week
Alternatives and similar repositories for GGUF-Tool-Suite
Users interested in GGUF-Tool-Suite are comparing it to the repositories listed below.
- llama.cpp fork with additional SOTA quants and improved performance ☆31 · Updated this week
- automatically quant GGUF models ☆210 · Updated this week
- Run multiple resource-heavy Large Models (LM) on the same machine with limited amount of VRAM/other resources by exposing them on differe… ☆80 · Updated last week
- ☆102 · Updated last month
- ☆83 · Updated this week
- Lightweight C inference for Qwen3 GGUF. Multiturn prefix caching & batch processing. ☆18 · Updated last month
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆42 · Updated last month
- NVIDIA Linux open GPU with P2P support ☆59 · Updated 2 weeks ago
- ☆51 · Updated 7 months ago
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/Cuda with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆147 · Updated this week
- win32 native frontend for llama-cli ☆12 · Updated 11 months ago
- Generate a llama-quantize command to copy the quantization parameters of any GGUF ☆24 · Updated 2 months ago
- Easily view and modify JSON datasets for large language models ☆83 · Updated 4 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆21 · Updated last week
- SLOP Detector and analyzer based on dictionary for shareGPT JSON and text ☆76 · Updated 11 months ago
- Run Orpheus 3B Locally with Gradio UI, Standalone App ☆22 · Updated 6 months ago
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆130 · Updated 3 months ago
- InferX: Inference as a Service Platform ☆136 · Updated this week
- ☆122 · Updated 11 months ago
- Yet another frontend for LLM, written using .NET and WinUI 3 ☆10 · Updated 3 weeks ago
- Sparse Inferencing for transformer based LLMs ☆201 · Updated 2 months ago
- KoboldCpp Smart Launcher with GPU Layer and Tensor Override Tuning ☆28 · Updated 4 months ago
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU a… ☆42 · Updated last year
- Simple node proxy for llama-server that enables MCP use ☆13 · Updated 5 months ago
- A TTS model capable of generating ultra-realistic dialogue in one pass. ☆31 · Updated 5 months ago
- A local front-end for open-weight LLMs with memory, RAG, TTS/STT, Elo ratings, and dynamic research tools. Built with React and FastAPI. ☆38 · Updated 2 months ago
- Stable Diffusion and Flux in pure C/C++ ☆21 · Updated 3 weeks ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- Privacy-first agentic framework with powerful reasoning & task automation capabilities. Natively distributed and fully ISO 27XXX complian… ☆66 · Updated 6 months ago
- LLM Ripper is a framework for component extraction (embeddings, attention heads, FFNs), activation capture, functional analysis, and adap… ☆45 · Updated this week