antirez / gguf-tools
GGUF implementation in C as a library and a tools CLI program
☆294 · Updated 2 months ago
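For context on what a GGUF reader like this has to handle: every GGUF file opens with a small fixed header (magic, format version, tensor count, metadata key/value count) as documented in the GGUF spec, followed by the metadata and tensor-info sections. The sketch below reads just that header with plain stdio. It is an illustrative stand-in, not gguf-tools' actual API, and it assumes a little-endian host since GGUF stores its fields little-endian.

```c
/* Minimal sketch: read the fixed GGUF header fields per the GGUF spec.
 * Illustrative only; not the gguf-tools library API.
 * Assumes a little-endian host (GGUF fields are stored little-endian). */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "Usage: %s <file.gguf>\n", argv[0]);
        return 1;
    }
    FILE *fp = fopen(argv[1], "rb");
    if (!fp) { perror("fopen"); return 1; }

    uint32_t magic, version;
    uint64_t tensor_count, kv_count;

    /* Magic is the bytes "GGUF", i.e. 0x46554747 when read as a
     * little-endian uint32. */
    if (fread(&magic, 4, 1, fp) != 1 || magic != 0x46554747u) {
        fprintf(stderr, "not a GGUF file\n");
        fclose(fp);
        return 1;
    }
    if (fread(&version, 4, 1, fp) != 1 ||       /* format version */
        fread(&tensor_count, 8, 1, fp) != 1 ||  /* number of tensors */
        fread(&kv_count, 8, 1, fp) != 1) {      /* metadata key/value pairs */
        fprintf(stderr, "truncated header\n");
        fclose(fp);
        return 1;
    }
    printf("GGUF v%u: %llu tensors, %llu metadata keys\n",
           (unsigned)version,
           (unsigned long long)tensor_count,
           (unsigned long long)kv_count);
    fclose(fp);
    return 0;
}
```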
Alternatives and similar repositories for gguf-tools
Users interested in gguf-tools are comparing it to the libraries listed below.
- LLM-based code completion engine ☆190 · Updated 9 months ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆195 · Updated last month
- Inference of Mamba models in pure C ☆192 · Updated last year
- throwaway GPT inference ☆140 · Updated last year
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML) ☆568 · Updated 2 years ago
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆140 · Updated 3 weeks ago
- CLIP inference in plain C/C++ with no extra dependencies ☆531 · Updated 4 months ago
- ggml implementation of BERT ☆496 · Updated last year
- LLaVA server (llama.cpp). ☆183 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆247 · Updated last year
- WebGPU LLM inference tuned by hand ☆150 · Updated 2 years ago
- GGML implementation of BERT model with Python bindings and quantization. ☆56 · Updated last year
- ☆453 · Updated 3 weeks ago
- C API for MLX ☆150 · Updated last month
- a small code base for training large models ☆314 · Updated 6 months ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆31 · Updated last year
- Transformer GPU VRAM estimator ☆66 · Updated last year
- Run GGML models with Kubernetes. ☆174 · Updated last year
- TypeScript generator for llama.cpp Grammar directly from TypeScript interfaces ☆140 · Updated last year
- asynchronous/distributed speculative evaluation for llama3 ☆38 · Updated last year
- 1.58 Bit LLM on Apple Silicon using MLX ☆225 · Updated last year
- Extend the original llama.cpp repo to support redpajama model. ☆118 · Updated last year
- llama.cpp to PyTorch Converter ☆34 · Updated last year
- llama3.cuda is a pure C/CUDA implementation for Llama 3 model. ☆346 · Updated 6 months ago
- Hierarchical Navigable Small Worlds ☆101 · Updated 3 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆145 · Updated 4 months ago
- A super simple web interface to perform blind tests on LLM outputs. ☆29 · Updated last year