okuvshynov / llama_duo
Asynchronous/distributed speculative evaluation for llama3
☆39 · Updated 10 months ago
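The core idea behind speculative evaluation, a cheap draft model proposes several tokens that a larger target model then verifies in one pass, can be sketched in a few lines. This is a toy, model-free sketch using deterministic arithmetic stand-ins for the two models; none of the names below come from llama_duo's actual API:

```python
def speculative_decode(target, draft, prompt, k=4, max_new=16):
    """Toy speculative decoding: the draft model proposes k tokens,
    the target model verifies them in order; the agreeing prefix is
    accepted, then one corrected token from the target is appended."""
    tokens = list(prompt)
    while len(tokens) < len(prompt) + max_new:
        # Draft proposes k tokens greedily.
        proposal, ctx = [], tokens[:]
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # Target checks each proposed token against its own greedy choice.
        accepted = 0
        for i, t in enumerate(proposal):
            if target(tokens + proposal[:i]) == t:
                accepted += 1
            else:
                break
        tokens.extend(proposal[:accepted])
        if accepted < k:
            # First mismatch: take the target's token instead.
            tokens.append(target(tokens))
        tokens = tokens[:len(prompt) + max_new]
    return tokens

# Stand-in "models": next token = (sum of context) mod 7; the draft
# disagrees with the target whenever the context sum is even.
def target(ctx): return sum(ctx) % 7
def draft(ctx):  return sum(ctx) % 7 if sum(ctx) % 2 else (sum(ctx) + 1) % 7

out = speculative_decode(target, draft, [1, 2, 3], k=4, max_new=8)
print(out)
```

The key property, which the toy preserves, is that the output is identical to greedy decoding with the target model alone; the draft only changes how many target calls are needed, not what gets generated.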
Alternatives and similar repositories for llama_duo
Users interested in llama_duo are comparing it to the libraries listed below.
- Inference of Mamba models in pure C · ☆187 · Updated last year
- High-performance SGEMM on CUDA devices · ☆95 · Updated 5 months ago
- Experiments with BitNet inference on CPU · ☆54 · Updated last year
- Iterate quickly with llama.cpp hot reloading; use the llama.cpp bindings with bun.sh · ☆49 · Updated last year
- Explore training for quantized models · ☆18 · Updated last week
- The Finite Field Assembly Programming Language · ☆37 · Updated last month
- Samples of good AI-generated CUDA kernels · ☆83 · Updated 3 weeks ago
- GGML implementation of the BERT model with Python bindings and quantization · ☆55 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates · ☆156 · Updated last month
- Lightweight Llama 3 8B inference engine in CUDA C · ☆47 · Updated 3 months ago
- Thin wrapper around GGML to make life easier · ☆35 · Updated 3 weeks ago
- LLM training in simple, raw C/CUDA · ☆99 · Updated last year
- Simple high-throughput inference library · ☆119 · Updated last month
- tinygrad port of the RWKV large language model · ☆46 · Updated 3 months ago
- A fork of llama3.c used to do some R&D on inferencing · ☆22 · Updated 6 months ago
- General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … · ☆49 · Updated 4 months ago
- Minimal C implementation of speculative decoding based on llama2.c · ☆23 · Updated 11 months ago
- ☆59 · Updated this week
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) · ☆180 · Updated last year
- Course project for COMP4471 on RWKV · ☆17 · Updated last year
- Python bindings for ggml · ☆141 · Updated 9 months ago
- My CUDA solution to the 1BRC · ☆10 · Updated last year
- First-token-cutoff sampling inference example · ☆30 · Updated last year
- Inference of RWKV v7 in pure C · ☆34 · Updated 2 months ago
- SIMD quantization kernels · ☆71 · Updated last week
- ☆68 · Updated this week
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code · ☆71 · Updated 4 months ago
- Enable MoE for nanoGPT · ☆30 · Updated last year
- ☆81 · Updated 7 months ago
- Performs real-time inference on audio from an ALSA capture device · ☆27 · Updated last week