google / minja
A minimalistic C++ Jinja templating engine for LLM chat templates
☆197 · Updated last month
Alternatives and similar repositories for minja
Users interested in minja are comparing it to the libraries listed below.
- Inference of Mamba models in pure C ☆192 · Updated last year
- GGUF implementation in C as a library and a tools CLI program ☆295 · Updated 2 months ago
- GGML implementation of BERT model with Python bindings and quantization. ☆56 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- C API for MLX ☆150 · Updated last month
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 9 months ago
- Thin wrapper around GGML to make life easier ☆40 · Updated 2 weeks ago
- Simple high-throughput inference library ☆149 · Updated 6 months ago
- 1.58 Bit LLM on Apple Silicon using MLX ☆225 · Updated last year
- asynchronous/distributed speculative evaluation for llama3 ☆38 · Updated last year
- LLM training in simple, raw C/CUDA ☆108 · Updated last year
- ☆218 · Updated 9 months ago
- xet client tech, used in huggingface_hub ☆322 · Updated this week
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆140 · Updated last month
- Fast and vectorizable algorithms for searching in a vector of sorted floating point numbers ☆152 · Updated 11 months ago
- High-Performance SGEMM on CUDA devices ☆110 · Updated 10 months ago
- Inference of Llama/Llama2/Llama3 models in NumPy ☆21 · Updated last year
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆52 · Updated 7 months ago
- Fast and Furious AMD Kernels ☆278 · Updated this week
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆298 · Updated last year
- LLM-based code completion engine ☆190 · Updated 9 months ago
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 9 months ago
- CPU inference for the DeepSeek family of large language models in C++ ☆313 · Updated last month
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆89 · Updated last year
- tiny code to access tenstorrent blackhole ☆61 · Updated 5 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆210 · Updated this week
- This is the documentation repository for SGLang. It is auto-generated from https://github.com/sgl-project/sglang/tree/main/docs. ☆89 · Updated this week
- MLX support for the Open Neural Network Exchange (ONNX) ☆62 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- Learning about CUDA by writing PTX code. ☆147 · Updated last year