mlecauchois / micrograd-cuda
☆250 · Updated last year
Alternatives and similar repositories for micrograd-cuda
Users interested in micrograd-cuda are comparing it to the libraries listed below.
- throwaway GPT inference ☆141 · Updated last year
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines). ☆254 · Updated 2 years ago
- DiscoGrad - automatically differentiate across conditional branches in C++ programs ☆209 · Updated last year
- Pytorch script hot swap: Change code without unloading your LLM from VRAM ☆125 · Updated 8 months ago
- Algebraic enhancements for GEMM & AI accelerators ☆286 · Updated 10 months ago
- Reinforcement learning methods and techniques. ☆199 · Updated this week
- A BERT that you can train on a (gaming) laptop. ☆210 · Updated 2 years ago
- Tensor library & inference framework for machine learning ☆118 · Updated 3 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆627 · Updated 9 months ago
- A reimplementation of Stable Diffusion 3.5 in pure PyTorch ☆690 · Updated 7 months ago
- ☆199 · Updated 8 months ago
- Richard is gaining power ☆199 · Updated 6 months ago
- Autograd to GPT-2 completely from scratch ☆125 · Updated 5 months ago
- A pure NumPy implementation of Mamba. ☆222 · Updated last year
- Implement Llama 3 inference step by step: grasp the core concepts, follow the derivations, and write the code. ☆617 · Updated 10 months ago
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping". ☆374 · Updated last year
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆218 · Updated last year
- ☆47 · Updated 9 months ago
- A tiny autograd engine with a Jax-like API ☆74 · Updated 6 months ago
- ☆459 · Updated last month
- Docker-based inference engine for AMD GPUs ☆231 · Updated last year
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- Hierarchical Navigable Small Worlds ☆101 · Updated 5 months ago
- Run and explore Llama models locally with minimal dependencies on CPU ☆190 · Updated last year
- Multi-Threaded FP32 Matrix Multiplication on x86 CPUs ☆374 · Updated 8 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆287 · Updated 4 months ago
- Grandmaster-Level Chess Without Search ☆602 · Updated last year
- Proof of thought: LLM-based reasoning using Z3 theorem proving with multiple backend support (SMT2 and JSON DSL) ☆364 · Updated 2 months ago
- Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator ☆215 · Updated 2 years ago
- ☆255 · Updated 2 years ago
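For context on the page's subject: micrograd-cuda ports the scalar autograd pattern of Karpathy's micrograd to CUDA. A minimal sketch of that pattern (class and method names here are illustrative, not taken from either repository) looks like this: each operation records its inputs and a local backward rule, and `backward()` runs the chain rule over a topological ordering of the graph.

```python
class Value:
    """A scalar that tracks its gradient through a dynamically built graph."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._children = _children
        self._backward = lambda: None  # local chain-rule step, set by each op

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then sweep gradients from the output.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    visit(c)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

a, b = Value(2.0), Value(3.0)
c = a * b + a          # c = 8; dc/da = b + 1 = 4, dc/db = a = 2
c.backward()
print(a.grad, b.grad)  # 4.0 2.0
```

A CUDA port such as micrograd-cuda replaces the scalar `data`/`grad` fields with device tensors and runs each op's forward and backward rules as GPU kernels, but the graph-and-chain-rule structure is the same.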