antirez / gguf-tools
GGUF implementation in C as a library and a tools CLI program
☆273 · Updated 5 months ago
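
For reference, the GGUF container format that gguf-tools works with begins with a small fixed header. The sketch below is a minimal illustration in plain C, independent of gguf-tools' actual API, assuming the GGUF v3 layout (4-byte magic `GGUF`, a little-endian `uint32` version, then `uint64` tensor and metadata key-value counts) and a little-endian host:

```c
/*
 * Minimal sketch: parse the fixed GGUF header per the GGUF v3 spec.
 * This is NOT gguf-tools' own API; it only reads the leading fields.
 * Assumes a little-endian host (GGUF fields are little-endian).
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.gguf\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    char magic[4];
    uint32_t version;
    uint64_t n_tensors, n_kv;

    /* 4-byte magic: the ASCII string "GGUF". */
    if (fread(magic, 1, 4, f) != 4 || memcmp(magic, "GGUF", 4) != 0) {
        fprintf(stderr, "not a GGUF file\n");
        fclose(f);
        return 1;
    }
    /* uint32 version, uint64 tensor count, uint64 metadata KV count
     * (the counts are uint64 from GGUF v2 onward). */
    if (fread(&version,   sizeof version,   1, f) != 1 ||
        fread(&n_tensors, sizeof n_tensors, 1, f) != 1 ||
        fread(&n_kv,      sizeof n_kv,      1, f) != 1) {
        fprintf(stderr, "truncated header\n");
        fclose(f);
        return 1;
    }
    printf("GGUF v%u: %llu tensors, %llu metadata key-value pairs\n",
           version,
           (unsigned long long)n_tensors,
           (unsigned long long)n_kv);
    fclose(f);
    return 0;
}
```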
Alternatives and similar repositories for gguf-tools
Users interested in gguf-tools are comparing it to the libraries listed below.
- An implementation of bucketMul LLM inference ☆217 · Updated 11 months ago
- LLM-based code completion engine ☆194 · Updated 5 months ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆156 · Updated last month
- Inference of Mamba models in pure C ☆187 · Updated last year
- LLaVA server (llama.cpp) ☆180 · Updated last year
- Stateful load balancer custom-tailored for llama.cpp 🏓🦙 ☆782 · Updated this week
- Python bindings for ggml ☆141 · Updated 9 months ago
- A faithful clone of Karpathy's llama2.c (one-file inference, zero dependencies) but fully functional with LLaMA 3 8B base and instruct mode… ☆128 · Updated 11 months ago
- ggml implementation of BERT ☆493 · Updated last year
- Port of MiniGPT4 in C++ (4-bit, 5-bit, 6-bit, 8-bit, 16-bit CPU inference with GGML) ☆567 · Updated last year
- Extends the original llama.cpp repo to support the RedPajama model ☆118 · Updated 9 months ago
- throwaway GPT inference ☆140 · Updated last year
- Inference of Vision Transformer (ViT) in plain C/C++ with ggml ☆30 · Updated last year
- ☆187 · Updated 9 months ago
- WebGPU LLM inference tuned by hand ☆151 · Updated 2 years ago
- C++ implementation for 💫StarCoder ☆453 · Updated last year
- 1.58-bit LLM on Apple Silicon using MLX ☆214 · Updated last year
- A small code base for training large models ☆301 · Updated last month
- ☆57 · Updated 10 months ago
- TypeScript generator for llama.cpp grammars directly from TypeScript interfaces ☆137 · Updated 11 months ago
- GGML implementation of the BERT model with Python bindings and quantization ☆55 · Updated last year
- ☆163 · Updated last year
- A super simple web interface for performing blind tests on LLM outputs ☆28 · Updated last year
- PyTorch script hot-swap: change code without unloading your LLM from VRAM ☆126 · Updated 2 months ago
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆377 · Updated 2 weeks ago
- Fast parallel LLM inference for MLX ☆193 · Updated 11 months ago
- ☆248 · Updated last year
- A fork of llama3.c used for some R&D on inference ☆22 · Updated 6 months ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated last year
- SoTA Transformers with a C backend for fast inference on your CPU ☆309 · Updated last year