google / minja
A minimalistic C++ Jinja templating engine for LLM chat templates
☆156 · Updated last month
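To give a sense of what minja does, here is a minimal sketch of rendering a Jinja-style template with it, modeled on the usage pattern shown in the project's README. Treat the exact headers and signatures (`minja::Parser::parse`, `minja::Context::make`, `render`) as version-dependent assumptions, with `nlohmann::ordered_json` assumed as the JSON type.

```cpp
// Minimal sketch of rendering a Jinja-style template with minja.
// Assumes the Parser::parse / Context::make / render API from the
// project's README; exact signatures may differ across versions.
#include <iostream>
#include <minja/minja.hpp>
#include <nlohmann/json.hpp>

using json = nlohmann::ordered_json;

int main() {
    // Parse the template once; the parsed template can be rendered many times.
    auto tmpl = minja::Parser::parse("Hello, {{ location }}!", /* options= */ {});

    // Bind template variables from a JSON object.
    auto context = minja::Context::make(minja::Value(json{
        {"location", "World"},
    }));

    // Prints: Hello, World!
    std::cout << tmpl->render(context) << std::endl;
    return 0;
}
```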
Alternatives and similar repositories for minja
Users interested in minja are comparing it to the libraries listed below.
- GGML implementation of the BERT model with Python bindings and quantization. ☆55 · Updated last year
- GGUF implementation in C as a library and a CLI tool ☆273 · Updated 5 months ago
- Inference of Mamba models in pure C ☆187 · Updated last year
- Simple high-throughput inference library ☆119 · Updated last month
- ☆213 · Updated 5 months ago
- Python bindings for ggml ☆141 · Updated 9 months ago
- C API for MLX ☆115 · Updated 2 months ago
- 1.58 Bit LLM on Apple Silicon using MLX ☆214 · Updated last year
- LLM training in simple, raw C/CUDA ☆99 · Updated last year
- Thin wrapper around GGML to make life easier ☆35 · Updated 3 weeks ago
- MLX support for the Open Neural Network Exchange (ONNX) ☆52 · Updated last year
- Asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated 10 months ago
- First token cutoff sampling inference example ☆30 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 8 months ago
- llama.cpp to PyTorch Converter ☆33 · Updated last year
- A faithful clone of Karpathy's llama2.c (one-file inference, zero dependencies) but fully functional with LLaMA 3 8B base and instruct mode… ☆128 · Updated 11 months ago
- SIMD quantization kernels ☆71 · Updated last week
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- High-Performance SGEMM on CUDA devices ☆95 · Updated 5 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆129 · Updated this week
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends)… ☆49 · Updated 4 months ago
- LLM-based code completion engine ☆194 · Updated 5 months ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated last year
- Inference of RWKV v7 in pure C. ☆33 · Updated 2 months ago
- Samples of good AI-generated CUDA kernels ☆83 · Updated 3 weeks ago
- Inference of Llama/Llama2/Llama3 models in NumPy ☆21 · Updated last year
- Inference server benchmarking tool ☆74 · Updated 2 months ago
- Load compute kernels from the Hub ☆191 · Updated this week
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆80 · Updated last month
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆86 · Updated 11 months ago