okuvshynov / llama_duo
asynchronous/distributed speculative evaluation for llama3
☆39 · Updated last year
Alternatives and similar repositories for llama_duo
Users interested in llama_duo are comparing it to the repositories listed below.
- Inference of Mamba models in pure C ☆189 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆164 · Updated this week
- High-Performance SGEMM on CUDA devices ☆98 · Updated 6 months ago
- The Finite Field Assembly Programming Language ☆36 · Updated 2 months ago
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… ☆129 · Updated last year
- LLM training in simple, raw C/CUDA ☆103 · Updated last year
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆51 · Updated 5 months ago
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆47 · Updated 4 months ago
- Custom PTX Instruction Benchmark ☆126 · Updated 5 months ago
- Samples of good AI generated CUDA kernels ☆86 · Updated 2 months ago
- Minimal C implementation of speculative decoding based on llama2.c ☆24 · Updated last year
- GGUF implementation in C as a library and a tools CLI program ☆277 · Updated 7 months ago
- Inference RWKV v7 in pure C ☆37 · Updated 2 weeks ago
- Thin wrapper around GGML to make life easier ☆40 · Updated last month
- Fast and vectorizable algorithms for searching in a vector of sorted floating point numbers ☆144 · Updated 7 months ago
- Iterate quickly with llama.cpp hot reloading; use the llama.cpp bindings with bun.sh ☆50 · Updated last year
- Python bindings for ggml ☆143 · Updated 11 months ago
- Learning about CUDA by writing PTX code ☆133 · Updated last year
- GPT2 implementation in C++ using Ort ☆26 · Updated 4 years ago
- Estimating hardware and cloud costs of LLMs and transformer projects ☆18 · Updated last month
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆289 · Updated last year
- Nvidia Instruction Set Specification Generator ☆286 · Updated last year
- Tiny Dream - An embedded, header-only, Stable Diffusion C++ implementation ☆264 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆72 · Updated 6 months ago
- ctypes wrappers for HIP, CUDA, and OpenCL ☆130 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization ☆56 · Updated last year
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) ☆181 · Updated last year
- llama.cpp to PyTorch Converter ☆34 · Updated last year
- Tiny code to access the Tenstorrent Blackhole ☆57 · Updated 2 months ago