mlecauchois / micrograd-cuda
☆248 · Updated last year
Alternatives and similar repositories for micrograd-cuda
Users interested in micrograd-cuda are comparing it to the libraries listed below.
- throwaway GPT inference ☆140 · Updated last year
- DiscoGrad - automatically differentiate across conditional branches in C++ programs ☆208 · Updated last year
- PyTorch script hot swap: change code without unloading your LLM from VRAM ☆124 · Updated 6 months ago
- A minimalistic implementation of a GPT-like transformer using only NumPy (<650 lines). ☆254 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆624 · Updated 7 months ago
- Reinforcement learning (RL) methods and techniques. ☆199 · Updated last year
- ☆47 · Updated 7 months ago
- A BERT that you can train on a (gaming) laptop. ☆207 · Updated 2 years ago
- Algebraic enhancements for GEMM & AI accelerators ☆281 · Updated 8 months ago
- A pure NumPy implementation of Mamba. ☆223 · Updated last year
- Richard is gaining power ☆198 · Updated 4 months ago
- Felafax is building AI infra for non-NVIDIA GPUs ☆568 · Updated 9 months ago
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping". ☆375 · Updated last year
- ☆198 · Updated 6 months ago
- A character-level language diffusion model trained on Tiny Shakespeare ☆330 · Updated this week
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- Tensor library & inference framework for machine learning ☆113 · Updated last month
- ☆453 · Updated 3 weeks ago
- ☆172 · Updated 4 months ago
- Multi-Threaded FP32 Matrix Multiplication on x86 CPUs ☆367 · Updated 6 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆217 · Updated 11 months ago
- A reimplementation of Stable Diffusion 3.5 in pure PyTorch ☆684 · Updated 5 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆286 · Updated 2 months ago
- Revealing example of self-attention, the building block of transformer AI models ☆130 · Updated 2 years ago
- Open weights language model from Google DeepMind, based on Griffin. ☆653 · Updated 5 months ago
- Docker-based inference engine for AMD GPUs ☆230 · Updated last year
- Autograd to GPT-2 completely from scratch ☆125 · Updated 3 months ago
- Build Llama 3 inference step by step: grasp the core concepts, follow the process derivation, and implement the code. ☆610 · Updated 8 months ago
- Grandmaster-Level Chess Without Search ☆593 · Updated 10 months ago
- Hierarchical Navigable Small Worlds ☆101 · Updated 3 months ago