antirez / gguf-tools
GGUF implementation in C, as a library and a CLI tools program
☆268 · Updated 3 months ago
Alternatives and similar repositories for gguf-tools:
Users interested in gguf-tools are comparing it to the repositories listed below.
- Inference of Mamba models in pure C ☆187 · Updated last year
- Throwaway GPT inference ☆138 · Updated 10 months ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆132 · Updated last week
- An implementation of bucketMul LLM inference ☆216 · Updated 9 months ago
- LLM-based code completion engine ☆184 · Updated 3 months ago
- ggml implementation of BERT ☆488 · Updated last year
- Python bindings for ggml ☆140 · Updated 7 months ago
- Port of MiniGPT4 in C++ (4-bit, 5-bit, 6-bit, 8-bit, and 16-bit CPU inference with GGML) ☆567 · Updated last year
- FlashAttention (Metal port) ☆482 · Updated 7 months ago
- Extends the original llama.cpp repo to support the RedPajama model ☆117 · Updated 7 months ago
- A faithful clone of Karpathy's llama2.c (one-file inference, zero dependencies) but fully functional with LLaMA 3 8B base and instruct mode… ☆125 · Updated 9 months ago
- LLaVA server (llama.cpp) ☆180 · Updated last year
- FastMLX is a high-performance, production-ready API for hosting MLX models ☆294 · Updated last month
- Tiny inference-only implementation of LLaMA ☆93 · Updated last year
- Stateful load balancer custom-tailored for llama.cpp 🏓🦙 ☆744 · Updated this week
- 1.58-bit LLM on Apple Silicon using MLX ☆200 · Updated 11 months ago
- Fast parallel LLM inference for MLX ☆184 · Updated 9 months ago
- WebGPU LLM inference tuned by hand ☆149 · Updated last year
- GGML implementation of the BERT model with Python bindings and quantization ☆56 · Updated last year
- Port of Suno AI's Bark in C/C++ for fast inference ☆53 · Updated last year
- ☆208 · Updated 3 months ago
- SoTA transformers with a C backend for fast inference on your CPU ☆310 · Updated last year
- Asynchronous/distributed speculative evaluation for Llama 3 ☆39 · Updated 8 months ago
- Falcon LLM ggml framework with CPU and GPU support ☆246 · Updated last year
- C API for MLX ☆107 · Updated this week
- TypeScript generator for llama.cpp grammars, directly from TypeScript interfaces ☆135 · Updated 9 months ago
- Run GGML models with Kubernetes ☆173 · Updated last year
- CLIP inference in plain C/C++ with no extra dependencies ☆494 · Updated 8 months ago
- ☆252 · Updated last year
- ☆56 · Updated last week