Thireus / GGUF-Tool-Suite
Input your VRAM and RAM amounts and the toolchain produces a GGUF model tuned to your system within seconds: flexible model sizing and the lowest achievable perplexity, for advanced users seeking precise, automated GGUF dynamic-quant production.
☆62 · Updated last week
Alternatives and similar repositories for GGUF-Tool-Suite
Users interested in GGUF-Tool-Suite are comparing it to the repositories listed below.
- llama.cpp fork with additional SOTA quants and improved performance ☆34 · Updated this week
- Run multiple resource-heavy Large Models (LM) on the same machine with limited amount of VRAM/other resources by exposing them on differe… ☆81 · Updated last week
- ☆104 · Updated 2 months ago
- Croco.Cpp is a fork of KoboldCPP for inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆152 · Updated this week
- automatically quant GGUF models ☆214 · Updated last week
- ☆51 · Updated 8 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆21 · Updated 2 weeks ago
- ☆84 · Updated 3 weeks ago
- ☆124 · Updated 11 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆43 · Updated this week
- Easily view and modify JSON datasets for large language models ☆83 · Updated 5 months ago
- ☆48 · Updated 3 weeks ago
- Sparse Inferencing for transformer based LLMs ☆201 · Updated 2 months ago
- My personal fork of koboldcpp where I hack in experimental samplers. ☆46 · Updated last year
- Lightweight C inference for Qwen3 GGUF. Multiturn prefix caching & batch processing. ☆17 · Updated 2 months ago
- Privacy-first agentic framework with powerful reasoning & task automation capabilities. Natively distributed and fully ISO 27XXX complian… ☆66 · Updated 7 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- KoboldCpp Smart Launcher with GPU Layer and Tensor Override Tuning ☆29 · Updated 5 months ago
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU a… ☆42 · Updated last year
- Llama.cpp runner/swapper and proxy that emulates LMStudio / Ollama backends ☆48 · Updated 2 months ago
- A local front-end for open-weight LLMs with memory, RAG, TTS/STT, Elo ratings, and dynamic research tools. Built with React and FastAPI. ☆37 · Updated 2 months ago
- Run Orpheus 3B Locally with Gradio UI, Standalone App ☆21 · Updated 7 months ago
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆140 · Updated 3 months ago
- Convert downloaded Ollama models back into their GGUF equivalent format ☆61 · Updated 10 months ago
- Generate a llama-quantize command to copy the quantization parameters of any GGUF ☆24 · Updated 2 months ago
- SLOP Detector and analyzer based on dictionary for shareGPT JSON and text ☆77 · Updated 11 months ago
- InferX: Inference as a Service Platform ☆137 · Updated this week
- ☆135 · Updated 5 months ago
- This extension enhances the capabilities of textgen-webui by integrating advanced vision models, allowing users to have contextualized co… ☆57 · Updated last year
- Make abliterated models with transformers, easy and fast ☆90 · Updated 6 months ago