google / minja
A minimalistic C++ Jinja templating engine for LLM chat templates
☆203 · Updated 4 months ago
Alternatives and similar repositories for minja
Users interested in minja are comparing it to the libraries listed below.
- Inference of Mamba and Mamba2 models in pure C ☆196 · Updated 2 weeks ago
- GGUF implementation in C as a library and a tools CLI program ☆301 · Updated 5 months ago
- GGML implementation of the BERT model with Python bindings and quantization ☆58 · Updated last year
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆51 · Updated 11 months ago
- C API for MLX ☆172 · Updated this week
- Python bindings for ggml ☆147 · Updated last year
- LLM training in simple, raw C/CUDA ☆112 · Updated last year
- LLM-based code completion engine ☆190 · Updated last year
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆306 · Updated last year
- Simple high-throughput inference library ☆155 · Updated 8 months ago
- Thin wrapper around GGML to make life easier ☆42 · Updated 3 months ago
- ☆219 · Updated last year
- Asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆74 · Updated last year
- 1.58 Bit LLM on Apple Silicon using MLX ☆242 · Updated last year
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆143 · Updated 3 months ago
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆53 · Updated 10 months ago
- xet client tech, used in huggingface_hub ☆398 · Updated last week
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆417 · Updated last month
- MLX support for the Open Neural Network Exchange (ONNX) ☆63 · Updated last year
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆157 · Updated 7 months ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆54 · Updated last year
- Fast and vectorizable algorithms for searching in a vector of sorted floating point numbers ☆153 · Updated last year
- vLLM adapter for a TGIS-compatible gRPC server ☆50 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆238 · Updated this week
- Inference of Llama/Llama2/Llama3 models in NumPy ☆21 · Updated 2 years ago
- Inference RWKV v7 in pure C ☆44 · Updated 3 months ago
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP ☆141 · Updated 4 months ago
- Experiments with BitNet inference on CPU ☆55 · Updated last year