antirez / gguf-tools
GGUF implementation in C, as a library and a CLI tool
☆297 · Updated 4 months ago
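To give a sense of what a GGUF reader like this deals with, here is a minimal C sketch that validates the GGUF magic number and reads the fixed header fields (magic, version, tensor count, metadata key/value count) per the public GGUF spec. This is an illustrative sketch only: `gguf_read_header` is a hypothetical helper name, not part of the gguf-tools API, and it assumes a little-endian host.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper (not gguf-tools API): read and print the fixed
 * GGUF header. Assumes a little-endian host; layout follows the GGUF
 * spec: u32 magic "GGUF", u32 version, u64 tensor count, u64 KV count. */
int gguf_read_header(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    uint32_t magic = 0, version = 0;
    uint64_t n_tensors = 0, n_kv = 0;

    /* "GGUF" in ASCII, stored little-endian, is 0x46554747. */
    if (fread(&magic, 4, 1, f) != 1 || magic != 0x46554747u ||
        fread(&version, 4, 1, f) != 1 ||
        fread(&n_tensors, 8, 1, f) != 1 ||
        fread(&n_kv, 8, 1, f) != 1) {
        fclose(f);
        return -1;
    }

    printf("GGUF v%u: %llu tensors, %llu metadata keys\n",
           version,
           (unsigned long long)n_tensors,
           (unsigned long long)n_kv);
    fclose(f);
    return 0;
}
```

After the header come the metadata key/value pairs and tensor descriptors, which is where the bulk of a real parser's work lies.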
Alternatives and similar repositories for gguf-tools
Users interested in gguf-tools are comparing it to the repositories listed below.
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆202 · Updated 3 months ago
- LLM-based code completion engine ☆190 · Updated 11 months ago
- Python bindings for ggml ☆146 · Updated last year
- throwaway GPT inference ☆141 · Updated last year
- Inference of Mamba models in pure C ☆196 · Updated last year
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- Port of MiniGPT4 in C++ (4-bit, 5-bit, 6-bit, 8-bit, 16-bit CPU inference with GGML) ☆568 · Updated 2 years ago
- LLaVA server (llama.cpp) ☆183 · Updated 2 years ago
- ggml implementation of BERT ☆499 · Updated last year
- CLIP inference in plain C/C++ with no extra dependencies ☆547 · Updated 6 months ago
- WebGPU LLM inference tuned by hand ☆151 · Updated 2 years ago
- GGML implementation of the BERT model with Python bindings and quantization ☆58 · Updated last year
- 1.58-bit LLM on Apple Silicon using MLX ☆237 · Updated last year
- C API for MLX ☆159 · Updated last week
- Mistral7B playing DOOM ☆138 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆249 · Updated last year
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies ☆154 · Updated 6 months ago
- A super simple web interface to perform blind tests on LLM outputs ☆29 · Updated last year
- A small code base for training large models ☆318 · Updated 8 months ago
- Extend the original llama.cpp repo to support the RedPajama model ☆118 · Updated last year
- SoTA Transformers with a C backend for fast inference on your CPU ☆311 · Updated 2 years ago
- ☆62 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year
- Run GGML models with Kubernetes ☆175 · Updated 2 years ago
- C++ implementation for 💫StarCoder ☆459 · Updated 2 years ago
- ggml implementation of embedding models, including SentenceTransformer and BGE ☆63 · Updated 2 years ago
- LLM-powered lossless compression tool ☆298 · Updated 2 weeks ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆73 · Updated 11 months ago
- ☆190 · Updated last year
- TypeScript generator for llama.cpp grammars, directly from TypeScript interfaces ☆141 · Updated last year