geohot / dumbrl
Can RL solve simple problems?
☆54 · Updated last year
Alternatives and similar repositories for dumbrl:
Users interested in dumbrl are comparing it to the libraries listed below.
- An implementation of delta-iris in tinygrad (☆72, updated 8 months ago)
- Noob Lessons from Stream about how GPUs work (☆111, updated this week)
- parallelized hyperdimensional tictactoe (☆117, updated 8 months ago)
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes, feel free to rip. (☆44, updated last year)
- ctypes wrappers for HIP, CUDA, and OpenCL (☆129, updated 9 months ago)
- Tensor library with autograd using only Rust's standard library (☆67, updated 9 months ago)
- Scripts and environment for the tinybox (☆93, updated last year)
- Generative cellular automaton-like learning environments for RL. (☆19, updated 2 months ago)
- A really tiny autograd engine (☆92, updated last year)
- Exploration into the Firefly algorithm in PyTorch (☆38, updated 2 months ago)
- Because it's there. (☆16, updated 7 months ago)
- tiny corporation website (☆7, updated this week)
- comma body does a loop around the office (☆26, updated last year)
- Simple Transformer in Jax (☆136, updated 10 months ago)
- Extensive introductory writeup on Zig language functionalities (☆10, updated 9 months ago)
- Solve puzzles. Learn CUDA. (☆63, updated last year)
- Minimal implementation of VCRec (2024) for collapse prevention. (☆16, updated 2 months ago)
- A synthetic story narration dataset to study small audio LMs. (☆32, updated last year)
- LLM training in simple, raw C/CUDA (☆18, updated 11 months ago)
- nice and effective super simple calorie counter web app (☆95, updated 10 months ago)
- seqax = sequence modeling + JAX (☆154, updated 2 weeks ago)
- JAX-like function transformation engine, but micro: microjax (☆30, updated 6 months ago)
- Write a fast kernel and run it on Discord. See how you compare against the best! (☆41, updated this week)
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* (☆82, updated last year)