KerfuffleV2 / smolrsrwkv
A relatively basic implementation of RWKV in Rust, written by someone with very little math and ML knowledge. It supports 32-, 8-, and 4-bit evaluation and can directly load PyTorch RWKV models.
☆93 · Updated last year
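As a rough illustration of the kind of arithmetic behind the 8-bit evaluation mode mentioned above, the sketch below quantizes a weight row to u8 with a per-row scale and offset, then dequantizes on the fly inside a dot product. This is only a conceptual sketch under those assumptions; the type and function names are invented for illustration and are not smolrsrwkv's actual code or API.

```rust
// Conceptual sketch of per-row 8-bit quantization (NOT smolrsrwkv's real API).
// Weights are stored as u8 plus a per-row scale/offset and are mapped back to
// approximate f32 values as they are used.

/// One quantized weight row: 8-bit values plus the parameters needed to
/// reconstruct approximate f32 weights.
struct QuantizedRow {
    values: Vec<u8>,
    min: f32,
    scale: f32, // (max - min) / 255.0
}

fn quantize_row(row: &[f32]) -> QuantizedRow {
    let min = row.iter().copied().fold(f32::INFINITY, f32::min);
    let max = row.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let scale = (max - min) / 255.0;
    let values = row
        .iter()
        .map(|&x| {
            if scale == 0.0 { 0 } else { ((x - min) / scale).round() as u8 }
        })
        .collect();
    QuantizedRow { values, min, scale }
}

/// Dot product of a quantized row with an f32 activation vector,
/// dequantizing each weight as it is used.
fn dot(row: &QuantizedRow, x: &[f32]) -> f32 {
    row.values
        .iter()
        .zip(x)
        .map(|(&q, &xi)| (q as f32 * row.scale + row.min) * xi)
        .sum()
}

fn main() {
    let weights = [0.25f32, -1.5, 3.0, 0.0];
    let activations = [1.0f32, 2.0, 0.5, -1.0];
    let q = quantize_row(&weights);
    // The quantized result should land close to the exact f32 dot product.
    println!("quantized ~ {:.3}", dot(&q, &activations));
    let exact: f32 = weights.iter().zip(&activations).map(|(w, x)| w * x).sum();
    println!("exact     = {:.3}", exact);
}
```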
Alternatives and similar repositories for smolrsrwkv:
Users interested in smolrsrwkv are comparing it to the libraries listed below.
- ☆32 · Updated last year
- GGML bindings that aim to be idiomatic Rust rather than directly corresponding to the C/C++ interface ☆19 · Updated last year
- ☆57 · Updated last year
- ☆25 · Updated last year
- Bleeding-edge, low-level Rust bindings for GGML ☆16 · Updated 8 months ago
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆37 · Updated last year
- GGML implementation of the BERT model with Python bindings and quantization ☆56 · Updated last year
- Rust implementation of Huggingface transformers pipelines using the onnxruntime backend, with bindings to C# and C ☆37 · Updated last year
- A collection of LLM token samplers in Rust ☆17 · Updated last year
- RWKV models and examples powered by candle ☆18 · Updated 3 weeks ago
- tinygrad port of the RWKV large language model ☆44 · Updated 2 weeks ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- A highly customizable, full-scale web backend for web-rwkv, built on axum with the WebSocket protocol ☆26 · Updated 11 months ago
- Implementation of the RWKV language model in pure WebGPU/Rust ☆295 · Updated this week
- Rust+OpenCL+AVX2 implementation of LLaMA inference code ☆544 · Updated last year
- Port of Andrej Karpathy's minbpe to Rust ☆20 · Updated 10 months ago
- Implementing the BitNet model in Rust ☆31 · Updated 11 months ago
- LLaMA from First Principles ☆51 · Updated last year
- 8-bit floating-point types for Rust ☆46 · Updated last week
- LLaMA 7B with CUDA acceleration implemented in Rust. Minimal GPU memory needed! ☆104 · Updated last year
- Inference of Mamba models in pure C ☆186 · Updated last year
- A Fish Speech implementation in Rust, with Candle.rs ☆75 · Updated last month
- Rust library for whisper.cpp-compatible Mel spectrograms ☆65 · Updated 3 weeks ago
- Low-rank adaptation (LoRA) for Candle ☆144 · Updated 7 months ago
- ☆21 · Updated 8 months ago
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆214 · Updated 9 months ago
- ☆40 · Updated 2 years ago
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing… ☆37 · Updated 4 months ago
- An extension library for Candle that provides PyTorch functions not currently available in Candle ☆38 · Updated last year
- WebGPU LLM inference tuned by hand ☆149 · Updated last year