okuvshynov / llama_duo
asynchronous/distributed speculative evaluation for llama3
☆39 · Updated last year
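llama_duo's "asynchronous speculative evaluation" builds on speculative decoding: a cheap draft model proposes several tokens ahead, and the large target model verifies them in one batched pass, keeping the longest agreeing prefix. The sketch below is illustrative only, with toy next-token functions standing in for real models; none of the names here are llama_duo's actual API:

```python
# Minimal sketch of the draft-and-verify loop behind speculative decoding.
# draft_next/target_next are stand-ins for real models, not llama_duo's API.

def speculative_step(draft_next, target_next, prefix, k=4):
    """One round: draft proposes k tokens, target verifies them."""
    # 1. Draft model proposes k tokens autoregressively (cheap calls).
    proposed = []
    ctx = list(prefix)
    for _ in range(k):
        tok = draft_next(ctx)
        proposed.append(tok)
        ctx.append(tok)

    # 2. Target model checks each proposed position (in a real system this
    #    is a single batched forward pass, which is where the speedup comes
    #    from). The first mismatch is replaced by the target's own token.
    accepted = []
    ctx = list(prefix)
    for tok in proposed:
        expect = target_next(ctx)
        if expect != tok:
            accepted.append(expect)  # keep target's token, stop accepting
            break
        accepted.append(tok)
        ctx.append(tok)
    return accepted

# Toy "models" over integer tokens: target doubles the last token,
# draft agrees until the token reaches 8.
target = lambda ctx: ctx[-1] * 2
draft = lambda ctx: ctx[-1] * 2 if ctx[-1] < 8 else 0

print(speculative_step(draft, target, [1]))  # [2, 4, 8, 16]
```

Because every accepted token is verified against the target model, the output is identical to what the target alone would generate; the draft model only changes how many target calls are needed. The "asynchronous/distributed" part of llama_duo refers to running the draft and target stages concurrently on separate hardware.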
Alternatives and similar repositories for llama_duo
Users interested in llama_duo are comparing it to the repositories listed below.
- Inference of Mamba models in pure C ☆196 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆202 · Updated 3 months ago
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆270 · Updated 3 weeks ago
- The Quasi Quantum Assembly Programming Language ☆36 · Updated last month
- High-Performance SGEMM on CUDA devices ☆114 · Updated 11 months ago
- General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 10 months ago
- Custom PTX Instruction Benchmark ☆137 · Updated 10 months ago
- GGUF implementation in C as a library and a tools CLI program ☆297 · Updated 4 months ago
- LLM training in simple, raw C/CUDA ☆108 · Updated last year
- Iterate quickly with llama.cpp hot reloading. Use the llama.cpp bindings with bun.sh ☆50 · Updated 2 years ago
- Thin wrapper around GGML to make life easier ☆41 · Updated 2 months ago
- A faithful clone of Karpathy's llama2.c (one-file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆141 · Updated 2 months ago
- Python bindings for ggml ☆146 · Updated last year
- Lightweight Llama 3 8B inference engine in CUDA C ☆53 · Updated 9 months ago
- Samples of good AI-generated CUDA kernels ☆99 · Updated 7 months ago
- C API for MLX ☆158 · Updated 3 weeks ago
- Inference of RWKV v7 in pure C ☆43 · Updated 2 months ago
- GGML implementation of the BERT model with Python bindings and quantization ☆58 · Updated last year
- Simple high-throughput inference library ☆155 · Updated 7 months ago
- Tiny code to access Tenstorrent Blackhole ☆61 · Updated 7 months ago
- Fast and vectorizable algorithms for searching in a vector of sorted floating-point numbers ☆153 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆73 · Updated 11 months ago
- Experiments with BitNet inference on CPU ☆55 · Updated last year
- ☆219 · Updated 11 months ago
- tinygrad port of the RWKV large language model ☆45 · Updated 10 months ago
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) ☆183 · Updated last year
- GGUF parser in Python ☆28 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆48 · Updated 4 months ago
- Port of Microsoft's BioGPT in C/C++ using ggml ☆85 · Updated last year
- RDNA3 emulator ☆55 · Updated 8 months ago