KerfuffleV2 / smolrsrwkv
A relatively basic implementation of RWKV in Rust, written by someone with very little math and ML knowledge. It supports 32-, 8-, and 4-bit evaluation and can directly load PyTorch RWKV models.
☆94 · Updated 2 years ago
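The "8 and 4 bit evaluation" mentioned above refers to quantized inference, where weights are stored in fewer bits and rescaled at evaluation time. The sketch below illustrates one common approach, per-row absmax quantization to 8 bits; it is a generic illustration under that assumption, not smolrsrwkv's actual code or API.

```rust
/// Quantize a row of f32 weights to i8 using absmax scaling:
/// the largest-magnitude weight maps to ±127, and a single
/// f32 scale per row recovers approximate values on the fly.
fn quantize_row(row: &[f32]) -> (f32, Vec<i8>) {
    let absmax = row.iter().fold(0.0f32, |m, &x| m.max(x.abs()));
    // Avoid dividing by zero for an all-zero row.
    let scale = if absmax == 0.0 { 1.0 } else { absmax / 127.0 };
    let q = row.iter().map(|&x| (x / scale).round() as i8).collect();
    (scale, q)
}

/// Recover approximate f32 weights from the i8 values and the scale.
fn dequantize_row(scale: f32, q: &[i8]) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

fn main() {
    let row = [0.5f32, -1.0, 0.25, 0.0];
    let (scale, q) = quantize_row(&row);
    let back = dequantize_row(scale, &q);
    // Round-trip error is bounded by half a quantization step.
    for (a, b) in row.iter().zip(back.iter()) {
        assert!((a - b).abs() <= scale / 2.0 + 1e-6);
    }
    println!("scale = {scale}, q = {:?}", q);
}
```

4-bit schemes work the same way but pack two values per byte and quantize small blocks of weights, trading a little accuracy for roughly half the memory of 8-bit storage.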
Alternatives and similar repositories for smolrsrwkv
Users interested in smolrsrwkv are comparing it to the libraries listed below.
- ☆32 · Updated 2 years ago
- Rust+OpenCL+AVX2 implementation of LLaMA inference code · ☆551 · Updated last year
- Bleeding-edge low-level Rust bindings for GGML · ☆16 · Updated last year
- ☆58 · Updated 2 years ago
- High-level, optionally asynchronous Rust bindings to llama.cpp · ☆240 · Updated last year
- Inference Llama 2 in one file of pure Rust 🦀 · ☆235 · Updated 2 years ago
- GGML bindings that aim to be idiomatic Rust rather than directly mirroring the C/C++ interface · ☆19 · Updated 2 years ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust · ☆79 · Updated 2 years ago
- Implementation of the RWKV language model in pure WebGPU/Rust · ☆335 · Updated last week
- ☆19 · Updated 2 weeks ago
- Inference of Mamba models in pure C · ☆196 · Updated last year
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust · ☆39 · Updated 2 years ago
- Implementing the BitNet model in Rust · ☆44 · Updated last year
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies · ☆313 · Updated last year
- LLaMA 7B with CUDA acceleration implemented in Rust. Minimal GPU memory needed!