antirez / gguf-tools
GGUF implementation in C as a library and a tools CLI program
☆295 · Updated 3 months ago
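gguf-tools exposes GGUF reading/writing as a C library plus a CLI. For orientation only, here is a minimal, self-contained sketch that does not use the repository's API; it reads the header fields defined by the public GGUF spec (4-byte magic "GGUF", uint32 version, uint64 tensor count, uint64 metadata KV count, little-endian) and assumes a little-endian host:

```c
/* Minimal sketch (not the gguf-tools API): print the GGUF header fields
 * as laid out in the public GGUF spec. Assumes a little-endian host. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s model.gguf\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    char magic[4];
    uint32_t version;
    uint64_t tensor_count, kv_count;

    /* Magic: the four bytes 'G','G','U','F'. */
    if (fread(magic, 1, 4, f) != 4 || memcmp(magic, "GGUF", 4) != 0) {
        fprintf(stderr, "not a GGUF file\n"); fclose(f); return 1;
    }
    /* Version, tensor count, metadata key-value count (little-endian). */
    if (fread(&version, sizeof version, 1, f) != 1 ||
        fread(&tensor_count, sizeof tensor_count, 1, f) != 1 ||
        fread(&kv_count, sizeof kv_count, 1, f) != 1) {
        fprintf(stderr, "truncated header\n"); fclose(f); return 1;
    }

    printf("GGUF v%u: %llu tensors, %llu metadata keys\n",
           (unsigned)version,
           (unsigned long long)tensor_count,
           (unsigned long long)kv_count);
    fclose(f);
    return 0;
}
```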
Alternatives and similar repositories for gguf-tools
Users interested in gguf-tools are comparing it to the libraries listed below.
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆198 · Updated 2 months ago
- Inference of Mamba models in pure C ☆194 · Updated last year
- LLM-based code completion engine ☆190 · Updated 10 months ago
- throwaway GPT inference ☆141 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- LLaVA server (llama.cpp). ☆183 · Updated 2 years ago
- CLIP inference in plain C/C++ with no extra dependencies ☆540 · Updated 5 months ago
- ggml implementation of BERT ☆500 · Updated last year
- WebGPU LLM inference tuned by hand ☆151 · Updated 2 years ago
- Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML) ☆568 · Updated 2 years ago
- C API for MLX ☆154 · Updated this week
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆141 · Updated last month
- Falcon LLM ggml framework with CPU and GPU support ☆248 · Updated last year
- 1.58 Bit LLM on Apple Silicon using MLX ☆226 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization. ☆58 · Updated last year
- port of Andrej Karpathy's llm.c to Mojo ☆360 · Updated 4 months ago
- Run GGML models with Kubernetes. ☆175 · Updated last year
- a small code base for training large models ☆315 · Updated 7 months ago
- Mistral7B playing DOOM ☆138 · Updated last year
- ☆62 · Updated last year
- Fast parallel LLM inference for MLX ☆234 · Updated last year
- run embeddings in MLX ☆96 · Updated last year
- Extend the original llama.cpp repo to support redpajama model. ☆118 · Updated last year
- FlashAttention (Metal Port) ☆560 · Updated last year
- Inference Llama 2 in one file of pure Python ☆423 · Updated 2 weeks ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆300 · Updated last year
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆148 · Updated 5 months ago
- ☆164 · Updated 4 months ago
- C++ implementation for 💫StarCoder ☆457 · Updated 2 years ago