google / minja
A minimalistic C++ Jinja templating engine for LLM chat templates
☆131 · Updated this week
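LLM chat templates, which minja is built to evaluate, are Jinja snippets along these lines (a minimal sketch; the `messages`, `role`, and `content` variable names follow the common Hugging Face chat-template convention and are assumed here for illustration):

```jinja
{%- for message in messages %}
<|{{ message.role }}|>
{{ message.content }}
{%- endfor %}
```

An engine like minja parses such a template once and renders it against the conversation history to produce the prompt string fed to the model.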
Alternatives and similar repositories for minja:
Users interested in minja are comparing it to the libraries listed below:
- GGML implementation of BERT model with Python bindings and quantization. ☆56 · Updated last year
- GGUF implementation in C as a library and a tools CLI program ☆265 · Updated 3 months ago
- ☆205 · Updated 2 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆257 · Updated this week
- Inference of Mamba models in pure C ☆187 · Updated last year
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆47 · Updated 3 weeks ago
- Inference Llama/Llama2/Llama3 Models in NumPy ☆20 · Updated last year
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated 11 months ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆30 · Updated last year
- C API for MLX ☆105 · Updated last week
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆125 · Updated 8 months ago
- Python bindings for ggml ☆140 · Updated 7 months ago
- LLM training in simple, raw C/CUDA ☆92 · Updated 11 months ago
- 1.58 Bit LLM on Apple Silicon using MLX ☆195 · Updated 11 months ago
- GGML implementation of BERT model with Python bindings and quantization. ☆25 · Updated last year
- High-Performance SGEMM on CUDA devices ☆89 · Updated 2 months ago
- Hierarchical Navigable Small Worlds ☆71 · Updated this week
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆93 · Updated last month
- General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends)… ☆46 · Updated last month
- Experiments with BitNet inference on CPU ☆53 · Updated last year
- First token cutoff sampling inference example ☆29 · Updated last year
- Testing LLM reasoning abilities with family relationship quizzes. ☆62 · Updated 2 months ago
- Asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated 8 months ago
- LLaVA server (llama.cpp). ☆179 · Updated last year
- An implementation of bucketMul LLM inference ☆216 · Updated 9 months ago
- GRDN.AI app for garden optimization ☆70 · Updated last year
- Fast parallel LLM inference for MLX ☆178 · Updated 9 months ago
- Inference Llama 2 in C++ ☆44 · Updated 11 months ago
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆282 · Updated 2 months ago
- Learning about CUDA by writing PTX code. ☆125 · Updated last year