google / minja
A minimalistic C++ Jinja templating engine for LLM chat templates
☆153 · Updated 3 weeks ago
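For readers unfamiliar with the term: a chat template is a Jinja snippet, usually shipped in a model's tokenizer config, that turns a list of role/content messages into the exact prompt string the model was trained on; an engine like minja evaluates such templates from C++. The sketch below shows the widely used ChatML-style template as a raw string and a hand-rolled loop that produces the same output. It is a minimal, self-contained illustration and deliberately does not call minja's own API, whose class and function names are not shown in this listing.

```cpp
// Sketch: what an LLM chat template is and what rendering one produces.
// The Jinja string is the common ChatML layout; render_chatml() hand-renders
// the same layout so the example compiles without any external library.
#include <iostream>
#include <string>
#include <vector>

// A ChatML-style chat template, as typically found in tokenizer_config.json.
static const std::string kChatMLTemplate =
    R"({% for message in messages %}{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %})";

struct Message { std::string role, content; };

// Hand-rolled equivalent of evaluating the template above on a message list;
// a Jinja engine would derive this behavior from the template string itself.
std::string render_chatml(const std::vector<Message>& messages, bool add_generation_prompt) {
    std::string out;
    for (const auto& m : messages)
        out += "<|im_start|>" + m.role + "\n" + m.content + "<|im_end|>\n";
    if (add_generation_prompt)
        out += "<|im_start|>assistant\n";
    return out;
}

int main() {
    std::cout << "Template:\n" << kChatMLTemplate << "\n\n";
    std::vector<Message> messages = {
        {"system", "You are a helpful assistant."},
        {"user", "Hello!"},
    };
    std::cout << "Rendered prompt:\n"
              << render_chatml(messages, /*add_generation_prompt=*/true);
}
```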
Alternatives and similar repositories for minja
Users interested in minja are comparing it to the libraries listed below.
- Simple high-throughput inference library ☆115 · Updated 3 weeks ago
- GGUF implementation in C as a library and a tools CLI program ☆270 · Updated 4 months ago
- Port of Suno AI's Bark in C/C++ for fast inference ☆52 · Updated last year
- Inference of Mamba models in pure C ☆186 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization ☆55 · Updated last year
- Python bindings for ggml ☆141 · Updated 9 months ago
- ☆210 · Updated 4 months ago
- Inference Llama/Llama2/Llama3 Models in NumPy ☆21 · Updated last year
- C API for MLX ☆109 · Updated last month
- High-Performance SGEMM on CUDA devices ☆94 · Updated 4 months ago
- 1.58 Bit LLM on Apple Silicon using MLX ☆212 · Updated last year
- LLM training in simple, raw C/CUDA ☆99 · Updated last year
- Thin wrapper around GGML to make life easier ☆34 · Updated this week
- vLLM adapter for a TGIS-compatible gRPC server ☆30 · Updated this week
- SIMD quantization kernels ☆70 · Updated this week
- Learning about CUDA by writing PTX code ☆131 · Updated last year
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆46 · Updated 2 months ago
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆364 · Updated 4 months ago
- Asynchronous/distributed speculative evaluation for llama3 ☆38 · Updated 9 months ago
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆82 · Updated 10 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆44 · Updated this week
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆30 · Updated last year
- MLX support for the Open Neural Network Exchange (ONNX) ☆51 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆117 · Updated this week
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆127 · Updated 10 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆88 · Updated 2 weeks ago
- ☆24 · Updated 8 months ago
- PTX-Tutorial written purely by AIs (Deep Research from OpenAI and Claude 3.7) ☆67 · Updated 2 months ago
- xet client tech, used in huggingface_hub ☆107 · Updated this week
- Samples of good AI generated CUDA kernels ☆65 · Updated this week