KerfuffleV2 / rusty-ggml
GGML bindings that aim to be idiomatic Rust rather than directly corresponding to the C/C++ interface
☆19, updated last year
Alternatives and similar repositories for rusty-ggml
Users interested in rusty-ggml are comparing it to the libraries listed below:
- Bleeding edge low level Rust binding for GGML (☆16, updated last year)
- A relatively basic implementation of RWKV in Rust written by someone with very little math and ML knowledge. Supports 32, 8 and 4 bit evaluation (☆93, updated last year)
- 8-bit floating point types for Rust (☆48, updated 2 weeks ago)
- An extension library to Candle that provides PyTorch functions not currently available in Candle (☆40, updated last year)
- ☆58, updated 2 years ago
- LLaMa 7b with CUDA acceleration implemented in Rust. Minimal GPU memory needed! (☆108, updated 2 years ago)
- A neural network inference library, written in Rust (☆63, updated last year)
- Automatic differentiation in Rust with WGPU support (☆23, updated 3 years ago)
- Rust library for scheduling, managing resources, and running DAGs 🌙 (☆33, updated 6 months ago)
- ☆20, updated 10 months ago
- Exploration of GPU computing using WebGPU (☆27, updated 4 years ago)
- ☆90, updated 6 months ago
- LLaMA from First Principles (☆51, updated 2 years ago)
- ☆32, updated 2 years ago
- ☆27, updated last year
- Tiny Autograd engine written in Rust (☆58, updated 11 months ago)
- ☆23, updated 3 months ago
- 🧮 alphatensor matrix breakthrough algorithms + simd + rust (☆61, updated 2 years ago)
- Andrej Karpathy's "Let's build GPT: from scratch" video & notebook implemented in Rust + candle (☆73, updated last year)
- A minimal OpenCL, CUDA, Vulkan and host CPU array manipulation engine / framework (☆74, updated 3 weeks ago)
- Tensor library for machine learning (☆27, updated this week)
- Low rank adaptation (LoRA) for Candle (☆152, updated 3 months ago)
- A diffusers API in Burn (Rust) (☆21, updated last year)
- A fun, hackable, GPU-accelerated, neural network library in Rust, written by an idiot (☆133, updated last year)
- Half-precision floating point types f16 and bf16 for Rust (☆258, updated 2 months ago)
- High-level, optionally asynchronous Rust bindings to llama.cpp (☆226, updated last year)
- Matrix implementation using custos (☆12, updated last year)
- Inference Llama 2 in one file of pure Rust 🦀 (☆233, updated last year)
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing… (☆40, updated 8 months ago)
- A collection of optimisers for use with candle (☆37, updated last week)