Mathemmagician / rustygrad
Tiny autograd engine written in Rust
☆54 · Updated 5 months ago
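For context on what a "tiny autograd engine" involves, here is a minimal micrograd-style sketch of scalar reverse-mode automatic differentiation in Rust. This is illustrative only and is not rustygrad's actual API: the names (`Value`, `accumulate`, etc.) are invented for this example, and a real engine would backpropagate once per node in topological order rather than summing over every path as done here.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A scalar value in the computation graph, with its gradient and the
// local derivatives w.r.t. each parent (for the chain rule).
#[derive(Clone)]
struct Value(Rc<RefCell<Node>>);

struct Node {
    data: f64,
    grad: f64,
    parents: Vec<(Value, f64)>, // (parent, d(self)/d(parent))
}

impl Value {
    fn new(data: f64) -> Self {
        Value(Rc::new(RefCell::new(Node { data, grad: 0.0, parents: vec![] })))
    }
    fn data(&self) -> f64 { self.0.borrow().data }
    fn grad(&self) -> f64 { self.0.borrow().grad }

    fn add(&self, other: &Value) -> Value {
        let out = Value::new(self.data() + other.data());
        out.0.borrow_mut().parents = vec![(self.clone(), 1.0), (other.clone(), 1.0)];
        out
    }
    fn mul(&self, other: &Value) -> Value {
        let out = Value::new(self.data() * other.data());
        out.0.borrow_mut().parents =
            vec![(self.clone(), other.data()), (other.clone(), self.data())];
        out
    }

    // Seed d(out)/d(out) = 1 and push gradients toward the leaves.
    fn backward(&self) {
        self.accumulate(1.0);
    }
    // Chain rule as a sum over paths: correct for DAGs, but exponential
    // on deep shared subgraphs (hence real engines topo-sort instead).
    fn accumulate(&self, grad: f64) {
        self.0.borrow_mut().grad += grad;
        let parents = self.0.borrow().parents.clone();
        for (p, local) in parents {
            p.accumulate(grad * local);
        }
    }
}

fn main() {
    // f(a, b) = a * b + a  =>  df/da = b + 1,  df/db = a
    let a = Value::new(2.0);
    let b = Value::new(3.0);
    let f = a.mul(&b).add(&a);
    f.backward();
    assert_eq!(f.data(), 8.0);
    assert_eq!(a.grad(), 4.0); // b + 1
    assert_eq!(b.grad(), 2.0); // a
}
```

Note how `a` receives gradient contributions along two paths (through the multiply and directly through the add), which the accumulation sums to `b + 1 = 4`.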
Alternatives and similar repositories for rustygrad:
Users interested in rustygrad are comparing it to the libraries listed below.
- A deep learning and preprocessing framework in Rust with support for CPU and GPU. ☆128 · Updated last year
- A neural network inference library, written in Rust. ☆59 · Updated 7 months ago
- Rustic bindings for IREE ☆18 · Updated last year
- 8-bit floating point types for Rust ☆44 · Updated 2 weeks ago
- Structured outputs for LLMs ☆36 · Updated 7 months ago
- Unofficial Rust bindings to Apple's mlx framework ☆117 · Updated this week
- LLaMA 7B with CUDA acceleration implemented in Rust. Minimal GPU memory needed! ☆102 · Updated last year
- RAI: Rust ML framework with composable transformations like JAX. ☆84 · Updated 6 months ago
- Experimental compiler for deep learning models ☆26 · Updated last month
- Inference Llama 2 in one file of pure Rust 🦀 ☆232 · Updated last year
- Andrej Karpathy's "Let's build GPT: from scratch" video & notebook implemented in Rust + candle ☆68 · Updated 10 months ago
- Tensor library with autograd using only Rust's standard library ☆65 · Updated 7 months ago
- ☆64 · Updated 11 months ago
- A curated collection of Rust projects related to neural networks, designed to complement "Are We Learning Yet." ☆47 · Updated last week
- An extension library to Candle that provides PyTorch functions not currently available in Candle ☆38 · Updated 11 months ago
- LLaMA from First Principles ☆51 · Updated last year
- ☆81 · Updated last month
- GGML bindings that aim to be idiomatic Rust rather than directly corresponding to the C/C++ interface ☆19 · Updated last year
- ☆57 · Updated last year
- Scientific computing for Rhai. ☆16 · Updated 2 weeks ago
- ☆18 · Updated 4 months ago
- Experimentation using the XLA compiler from Rust ☆91 · Updated 5 months ago
- A fast convolutions library implemented entirely in Rust, with minimal dependencies and no external C libraries. ☆25 · Updated 2 years ago
- allms: One Rust Library to rule them aLLMs ☆59 · Updated this week
- Run LLaMA inference on CPU, with Rust 🦀🚀🦙 ☆20 · Updated last month
- ☆21 · Updated 7 months ago
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆31 · Updated 9 months ago
- Low-rank adaptation (LoRA) for Candle. ☆141 · Updated 5 months ago
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆37 · Updated last year
- A collection of boosting algorithms written in Rust 🦀 ☆51 · Updated last week