antirez / gguf-tools
GGUF implementation in C as a library and a tools CLI program
☆291 · Updated last month
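For context on the file format the tool targets, here is a minimal sketch, assuming only the publicly documented GGUF header layout (uint32 magic "GGUF", uint32 version, uint64 tensor count, uint64 metadata key/value count, little-endian); it does not use the gguf-tools library API, and the structure is purely illustrative.

```c
/*
 * Minimal sketch (not the gguf-tools API): dump the GGUF file header
 * fields straight from disk, following the public GGUF spec.
 */
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.gguf\n", argv[0]);
        return 1;
    }
    FILE *fp = fopen(argv[1], "rb");
    if (!fp) {
        perror("fopen");
        return 1;
    }
    uint32_t magic = 0, version = 0;
    uint64_t n_tensors = 0, n_kv = 0;
    /* Magic is the bytes "GGUF", i.e. 0x46554747 when read little-endian. */
    if (fread(&magic, sizeof(magic), 1, fp) != 1 || magic != 0x46554747u) {
        fprintf(stderr, "not a GGUF file\n");
        fclose(fp);
        return 1;
    }
    if (fread(&version, sizeof(version), 1, fp) != 1 ||
        fread(&n_tensors, sizeof(n_tensors), 1, fp) != 1 ||
        fread(&n_kv, sizeof(n_kv), 1, fp) != 1) {
        fprintf(stderr, "truncated header\n");
        fclose(fp);
        return 1;
    }
    printf("GGUF v%u: %llu tensors, %llu metadata key/value pairs\n",
           version,
           (unsigned long long)n_tensors,
           (unsigned long long)n_kv);
    fclose(fp);
    return 0;
}
```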
Alternatives and similar repositories for gguf-tools
Users who are interested in gguf-tools are comparing it to the libraries listed below.
- LLM-based code completion engine · ☆190 · Updated 9 months ago
- Inference of Mamba models in pure C · ☆192 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates · ☆190 · Updated last month
- Python bindings for ggml · ☆146 · Updated last year
- An implementation of bucketMul LLM inference · ☆223 · Updated last year
- throwaway GPT inference · ☆140 · Updated last year
- LLaVA server (llama.cpp) · ☆183 · Updated 2 years ago
- ggml implementation of BERT · ☆494 · Updated last year
- Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML) · ☆568 · Updated 2 years ago
- C API for MLX · ☆144 · Updated last month
- CLIP inference in plain C/C++ with no extra dependencies · ☆523 · Updated 4 months ago
- Falcon LLM ggml framework with CPU and GPU support · ☆247 · Updated last year
- ☆62 · Updated last year
- Run GGML models with Kubernetes · ☆173 · Updated last year
- Extend the original llama.cpp repo to support the RedPajama model · ☆118 · Updated last year
- ☆443 · Updated 2 months ago
- WebGPU LLM inference tuned by hand · ☆150 · Updated 2 years ago
- FlashAttention (Metal port) · ☆545 · Updated last year
- TypeScript generator for llama.cpp grammars directly from TypeScript interfaces · ☆140 · Updated last year
- Port of Microsoft's BioGPT in C/C++ using ggml · ☆85 · Updated last year
- GGML implementation of the BERT model with Python bindings and quantization · ☆55 · Updated last year
- A super simple web interface to perform blind tests on LLM outputs · ☆28 · Updated last year
- 1.58-bit LLM on Apple Silicon using MLX · ☆224 · Updated last year
- Mistral7B playing DOOM · ☆138 · Updated last year
- ggml implementation of embedding models including SentenceTransformer and BGE · ☆59 · Updated last year
- Inference of Llama/Llama2/Llama3 models in NumPy · ☆21 · Updated last year
- a small code base for training large models · ☆310 · Updated 5 months ago
- Inference of Vision Transformer (ViT) in plain C/C++ with ggml · ☆30 · Updated last year
- Fast parallel LLM inference for MLX · ☆223 · Updated last year
- Transformer GPU VRAM estimator · ☆67 · Updated last year