antirez / gguf-tools
GGUF implementation in C, as a library and a CLI tool
☆296 · Updated 3 months ago
Alternatives and similar repositories for gguf-tools
Users interested in gguf-tools are comparing it to the libraries listed below.
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆202 · Updated 3 months ago
- Inference of Mamba models in pure C ☆195 · Updated last year
- LLM-based code completion engine ☆190 · Updated 11 months ago
- Python bindings for ggml ☆146 · Updated last year
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- throwaway GPT inference ☆141 · Updated last year
- LLaVA server (llama.cpp) ☆183 · Updated 2 years ago
- GGML implementation of BERT model with Python bindings and quantization ☆58 · Updated last year
- ggml implementation of BERT ☆499 · Updated last year
- WebGPU LLM inference tuned by hand ☆151 · Updated 2 years ago
- C API for MLX ☆157 · Updated last week
- Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML) ☆569 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆248 · Updated last year
- 1.58 Bit LLM on Apple Silicon using MLX ☆230 · Updated last year
- ☆62 · Updated last year
- CLIP inference in plain C/C++ with no extra dependencies ☆544 · Updated 6 months ago
- llama3.cuda, a pure C/CUDA implementation of the Llama 3 model ☆348 · Updated 8 months ago
- FlashAttention (Metal Port) ☆569 · Updated last year
- Port of Andrej Karpathy's llm.c to Mojo ☆361 · Updated 4 months ago
- Local Qwen3 LLM inference in one easy-to-understand file of C source with no dependencies ☆150 · Updated 5 months ago
- Transformer GPU VRAM estimator ☆67 · Updated last year
- First token cutoff sampling inference example ☆31 · Updated last year
- Run GGML models with Kubernetes ☆175 · Updated 2 years ago
- Inference of Llama/Llama2/Llama3 models in NumPy ☆21 · Updated 2 years ago
- C++ implementation for 💫StarCoder ☆457 · Updated 2 years ago
- LLM-powered lossless compression tool ☆295 · Updated last year
- Port of Microsoft's BioGPT in C/C++ using ggml ☆85 · Updated last year
- ☆249 · Updated last year
- SoTA Transformers with C-backend for fast inference on your CPU ☆308 · Updated 2 years ago
- ☆461 · Updated last month