KerfuffleV2 / smolrsrwkv
A relatively basic implementation of RWKV in Rust, written by someone with very little math and ML knowledge. Supports 32-, 8-, and 4-bit evaluation, and can directly load PyTorch RWKV models.
☆93 · Updated last year
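To illustrate what low-bit evaluation involves (this is a generic sketch, not smolrsrwkv's actual API or storage format): 4-bit schemes typically quantize weights in fixed-size blocks, storing one f32 scale per block plus two 4-bit values packed per byte. A minimal Rust version of that idea:

```rust
// Hypothetical per-block 4-bit quantization sketch; names and layout are
// illustrative, not taken from smolrsrwkv.
const BLOCK: usize = 32;

struct QBlock {
    scale: f32,             // one scale per block of 32 weights
    packed: [u8; BLOCK / 2], // two signed 4-bit values per byte
}

fn quantize_block(weights: &[f32; BLOCK]) -> QBlock {
    // Map the largest magnitude in the block onto the signed 4-bit range [-8, 7].
    let max = weights.iter().fold(0f32, |m, w| m.max(w.abs()));
    let scale = if max > 0.0 { max / 7.0 } else { 1.0 };
    let q = |w: f32| ((w / scale).round().clamp(-8.0, 7.0) as i8 + 8) as u8;
    let mut packed = [0u8; BLOCK / 2];
    for i in 0..BLOCK / 2 {
        // Low nibble holds the even-index weight, high nibble the odd one.
        packed[i] = q(weights[2 * i]) | (q(weights[2 * i + 1]) << 4);
    }
    QBlock { scale, packed }
}

fn dequantize_block(block: &QBlock) -> [f32; BLOCK] {
    let mut out = [0f32; BLOCK];
    for i in 0..BLOCK / 2 {
        let lo = (block.packed[i] & 0x0F) as i32 - 8;
        let hi = (block.packed[i] >> 4) as i32 - 8;
        out[2 * i] = lo as f32 * block.scale;
        out[2 * i + 1] = hi as f32 * block.scale;
    }
    out
}

fn main() {
    let mut w = [0f32; BLOCK];
    for (i, v) in w.iter_mut().enumerate() {
        *v = (i as f32 - 16.0) / 10.0; // synthetic weights in roughly [-1.6, 1.5]
    }
    let q = quantize_block(&w);
    let d = dequantize_block(&q);
    // Round-to-nearest bounds the reconstruction error by half a quantization step.
    let max_err = w
        .iter()
        .zip(d.iter())
        .map(|(a, b)| (a - b).abs())
        .fold(0f32, f32::max);
    assert!(max_err <= q.scale * 0.5 + 1e-6);
    println!("scale = {:.4}, max reconstruction error = {:.4}", q.scale, max_err);
}
```

The per-block scale is what keeps 4-bit storage usable: error is proportional to the largest weight in each small block rather than in the whole tensor.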
Alternatives and similar repositories for smolrsrwkv:
Users interested in smolrsrwkv are comparing it to the libraries listed below.
- ☆32 · Updated last year
- GGML bindings that aim to be idiomatic Rust rather than directly corresponding to the C/C++ interface ☆19 · Updated last year
- Bleeding-edge low-level Rust bindings for GGML ☆16 · Updated 7 months ago
- LLaMa 7b with CUDA acceleration implemented in Rust. Minimal GPU memory needed! ☆102 · Updated last year
- A highly customizable, full-scale web backend for web-rwkv, built on axum with the WebSocket protocol ☆26 · Updated 10 months ago
- Implementation of the RWKV language model in pure WebGPU/Rust ☆279 · Updated this week
- ☆57 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆78 · Updated last year
- ☆18 · Updated 4 months ago
- Inference of Mamba models in pure C ☆183 · Updated 11 months ago
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆37 · Updated last year
- An unofficial implementation of BitNet ☆11 · Updated 10 months ago
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆203 · Updated 8 months ago
- ☆25 · Updated last year
- A collection of LLM token samplers in Rust ☆17 · Updated last year
- A Fish Speech implementation in Rust, with Candle.rs ☆68 · Updated 3 weeks ago
- Rust implementation of Huggingface transformers pipelines using the onnxruntime backend, with bindings to C# and C ☆36 · Updated last year
- RWKV models and examples powered by candle ☆18 · Updated 6 months ago
- tinygrad port of the RWKV large language model ☆44 · Updated 8 months ago
- Inference Llama 2 in one file of pure Rust 🦀 ☆232 · Updated last year
- Low-rank adaptation (LoRA) for Candle ☆141 · Updated 5 months ago
- ☆125 · Updated 9 months ago
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing… ☆35 · Updated 3 months ago
- Port of Microsoft's BioGPT in C/C++ using ggml ☆87 · Updated 11 months ago
- Rust+OpenCL+AVX2 implementation of LLaMA inference code ☆544 · Updated last year
- 8-bit floating-point types for Rust ☆44 · Updated 2 weeks ago
- Experimental compiler for deep learning models ☆26 · Updated last month
- LLaMA from First Principles ☆51 · Updated last year
- ☆40 · Updated last year
- Work-in-progress Rust bindings to ggml ☆12 · Updated last year