okuvshynov / llama_duo
asynchronous/distributed speculative evaluation for llama3
☆39 · Updated 11 months ago
Alternatives and similar repositories for llama_duo
Users interested in llama_duo are comparing it to the libraries listed below.
- Inference of Mamba models in pure C · ☆188 · Updated last year
- iterate quickly with llama.cpp hot reloading. use the llama.cpp bindings with bun.sh · ☆50 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates · ☆160 · Updated this week
- GGML implementation of BERT model with Python bindings and quantization. · ☆56 · Updated last year
- A faithful clone of Karpathy's llama2.c (one file inference, zero dependency) but fully functional with LLaMA 3 8B base and instruct mode… · ☆128 · Updated 11 months ago
- Thin wrapper around GGML to make life easier · ☆36 · Updated 3 weeks ago
- A fork of llama3.c used to do some R&D on inferencing · ☆22 · Updated 6 months ago
- Samples of good AI generated CUDA kernels · ☆84 · Updated last month
- Lightweight Llama 3 8B Inference Engine in CUDA C · ☆47 · Updated 3 months ago
- The Finite Field Assembly Programming Language · ☆36 · Updated last month
- High-Performance SGEMM on CUDA devices · ☆97 · Updated 5 months ago
- Simple high-throughput inference library · ☆120 · Updated 2 months ago
- Course Project for COMP4471 on RWKV · ☆17 · Updated last year
- LLM training in simple, raw C/CUDA · ☆99 · Updated last year
- Experiments with BitNet inference on CPU · ☆54 · Updated last year
- tiny code to access tenstorrent blackhole · ☆55 · Updated last month
- GGUF implementation in C as a library and a tools CLI program · ☆274 · Updated 6 months ago
- C API for MLX · ☆117 · Updated this week
- General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … · ☆50 · Updated 4 months ago
- Python bindings for ggml · ☆142 · Updated 10 months ago
- First token cutoff sampling inference example · ☆30 · Updated last year
- Port of Microsoft's BioGPT in C/C++ using ggml · ☆87 · Updated last year
- GPT-2 inference engine written in Zig · ☆39 · Updated last year
- Editor with LLM generation tree exploration · ☆71 · Updated 5 months ago
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. · ☆72 · Updated 5 months ago
- Web browser version of StarCoder.cpp · ☆45 · Updated last year
- Testing LLM reasoning abilities with family relationship quizzes. · ☆62 · Updated 5 months ago
- ☆13 · Updated last year
- tinygrad port of the RWKV large language model. · ☆45 · Updated 4 months ago
- Port of Suno AI's Bark in C/C++ for fast inference · ☆52 · Updated last year