nreHieW / r-nn
Tensor library with autograd using only Rust's standard library
☆70 · Updated last year
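r-nn's own code isn't reproduced on this page, but to make the comparisons below concrete: a std-only autograd library of this kind boils down to a graph of values that record how they were computed, plus a reverse sweep that applies the chain rule. The following is a minimal, hypothetical scalar sketch in safe Rust with no external crates; none of the names reflect r-nn's actual API.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// One scalar node in the computation graph: its value, its accumulated
// gradient, and (parent, local derivative) edges recorded at construction.
struct Inner {
    data: f64,
    grad: f64,
    parents: Vec<(Value, f64)>,
}

#[derive(Clone)]
struct Value(Rc<RefCell<Inner>>);

impl Value {
    fn new(data: f64) -> Self {
        Value(Rc::new(RefCell::new(Inner { data, grad: 0.0, parents: Vec::new() })))
    }

    fn data(&self) -> f64 { self.0.borrow().data }
    fn grad(&self) -> f64 { self.0.borrow().grad }

    fn add(&self, other: &Value) -> Value {
        let out = Value::new(self.data() + other.data());
        // d(a+b)/da = 1, d(a+b)/db = 1
        out.0.borrow_mut().parents = vec![(self.clone(), 1.0), (other.clone(), 1.0)];
        out
    }

    fn mul(&self, other: &Value) -> Value {
        let out = Value::new(self.data() * other.data());
        // d(a*b)/da = b, d(a*b)/db = a
        out.0.borrow_mut().parents =
            vec![(self.clone(), other.data()), (other.clone(), self.data())];
        out
    }

    // Reverse-mode sweep: push each path's contribution down the graph and
    // accumulate it at every node. (A real engine would topologically sort
    // the graph and visit each node once instead of re-walking shared paths.)
    fn backward(&self) {
        fn propagate(v: &Value, upstream: f64) {
            v.0.borrow_mut().grad += upstream;
            let parents = v.0.borrow().parents.clone();
            for (parent, local) in parents {
                propagate(&parent, upstream * local);
            }
        }
        propagate(self, 1.0); // seed: d(out)/d(out) = 1
    }
}

fn main() {
    // f(x, y) = x * y + x  =>  df/dx = y + 1 = 5, df/dy = x = 3
    let x = Value::new(3.0);
    let y = Value::new(4.0);
    let f = x.mul(&y).add(&x);
    f.backward();
    println!("f = {}, df/dx = {}, df/dy = {}", f.data(), x.grad(), y.grad());
}
```

`Rc` plus `RefCell` is the usual std-only way to share graph nodes mutably; a tensor library generalizes `data` and `grad` from `f64` to n-dimensional arrays while keeping the same reverse-mode structure.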
Alternatives and similar repositories for r-nn
Users interested in r-nn are comparing it to the libraries listed below.
- SIMD quantization kernels☆93 · Updated 3 months ago
- Simple Transformer in Jax☆141 · Updated last year
- Experimental compiler for deep learning models☆72 · Updated 3 months ago
- A really tiny autograd engine☆96 · Updated 7 months ago
- Learning about CUDA by writing PTX code.☆150 · Updated last year
- Rust implementation of micrograd☆53 · Updated last year
- Learn GPU Programming in Mojo🔥 by Solving Puzzles☆266 · Updated last week
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7)☆66 · Updated 9 months ago
- Alex Krizhevsky's original code from Google Code☆197 · Updated 9 years ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP☆141 · Updated 3 months ago
- Peer-to-peer compute and intelligence network that enables decentralized AI development at scale☆135 · Updated last month
- parallelized hyperdimensional tictactoe☆126 · Updated last year
- An implementation of delta-iris in tinygrad☆72 · Updated last year
- An implementation of the transformer architecture as an Nvidia CUDA kernel☆201 · Updated 2 years ago
- Doing Advent of Code with CUDA and Rust☆205 · Updated last year
- small autograd engine inspired by Karpathy's micrograd and PyTorch☆277 · Updated last year
- could we make an ML stack in 100,000 lines of code?☆46 · Updated last year
- Gradient descent is cool and all, but what if we could delete it?☆104 · Updated 4 months ago
- Following Karpathy's GPT-2 implementation and training, writing lots of comments because I have the memory of a goldfish☆172 · Updated last year
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust☆39 · Updated 2 years ago
- port of Andrej Karpathy's llm.c to Mojo☆361 · Updated 4 months ago
- ctypes wrappers for HIP, CUDA, and OpenCL☆130 · Updated last year
- noise_step: Training in 1.58b With No Gradient Memory☆220 · Updated last year
- MoE training for Me and You and maybe other people☆298 · Updated last week
- Ultra-low-overhead NVIDIA GPU telemetry plugin for Telegraf with memory temperature readings☆63 · Updated last year
- In this repository, I'm going to implement increasingly complex LLM inference optimizations☆75 · Updated 7 months ago
- Because it's there.☆16 · Updated last year