google / minja
A minimalistic C++ Jinja templating engine for LLM chat templates
☆164 · Updated this week
Alternatives and similar repositories for minja
Users interested in minja are comparing it to the libraries listed below:
- Inference of Mamba models in pure C ☆190 · Updated last year
- GGUF implementation in C as a library and a CLI tool ☆280 · Updated 7 months ago
- GGML implementation of the BERT model with Python bindings and quantization ☆57 · Updated last year
- Simple high-throughput inference library ☆125 · Updated 2 months ago
- Python bindings for ggml ☆143 · Updated 11 months ago
- Asynchronous/distributed speculative evaluation for Llama 3 ☆39 · Updated last year
- General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 5 months ago
- Thin wrapper around GGML to make life easier ☆40 · Updated last month
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated last year
- LLM training in simple, raw C/CUDA ☆103 · Updated last year
- C API for MLX ☆121 · Updated 3 weeks ago
- 1.58-bit LLM on Apple Silicon using MLX ☆217 · Updated last year
- High-performance SGEMM on CUDA devices ☆98 · Updated 6 months ago
- Inference of Llama/Llama2/Llama3 models in NumPy ☆21 · Updated last year
- Fast and vectorizable algorithms for searching in a vector of sorted floating-point numbers ☆145 · Updated 7 months ago
- ☆216 · Updated 6 months ago
- xet client tech, used in huggingface_hub ☆157 · Updated this week
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆73 · Updated 6 months ago
- Inference of Vision Transformer (ViT) in plain C/C++ with ggml ☆30 · Updated last year
- LLM-based code completion engine ☆194 · Updated 6 months ago
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- SIMD quantization kernels ☆78 · Updated this week
- First-token cutoff sampling inference example ☆30 · Updated last year
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆88 · Updated last year
- Lightweight Llama 3 8B inference engine in CUDA C ☆47 · Updated 4 months ago
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quantization, Unsloth ☆142 · Updated last week
- llama.cpp to PyTorch converter ☆34 · Updated last year
- TTS support with GGML ☆143 · Updated 2 weeks ago
- Inference of Vision Transformer (ViT) in plain C/C++ with ggml ☆291 · Updated last year
- ☆392 · Updated this week