mlecauchois / micrograd-cuda
☆248 · Updated last year
Alternatives and similar repositories for micrograd-cuda
Users interested in micrograd-cuda are comparing it to the libraries listed below.
- DiscoGrad - automatically differentiate across conditional branches in C++ programs ☆203 · Updated 10 months ago
- PyTorch script hot swap: change code without unloading your LLM from VRAM ☆126 · Updated 2 months ago
- Absolutely minimalistic implementation of a GPT-like transformer using only NumPy (<650 lines). ☆252 · Updated last year
- Reinforcement learning (RL) methods and techniques. ☆196 · Updated 7 months ago
- Algebraic enhancements for GEMM & AI accelerators ☆277 · Updated 4 months ago
- Throwaway GPT inference ☆140 · Updated last year
- A reimplementation of Stable Diffusion 3.5 in pure PyTorch ☆637 · Updated last month
- A pure NumPy implementation of Mamba. ☆224 · Updated last year
- A BERT that you can train on a (gaming) laptop. ☆209 · Updated last year
- Richard is gaining power ☆192 · Updated 3 weeks ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆618 · Updated 3 months ago
- Official codebase for the paper "Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping". ☆369 · Updated last year
- An implementation of bucketMul LLM inference ☆220 · Updated last year
- ☆363 · Updated this week
- Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator ☆211 · Updated last year
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆285 · Updated 2 weeks ago
- Felafax is building AI infra for non-NVIDIA GPUs ☆566 · Updated 5 months ago
- Tensor library & inference framework for machine learning ☆101 · Updated last week
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆206 · Updated 7 months ago
- ☆196 · Updated 2 months ago
- ☆47 · Updated 3 months ago
- Implement Llama 3 inference step by step: grasp the core concepts, follow the derivation, and write the code. ☆594 · Updated 4 months ago
- Hierarchical Navigable Small Worlds ☆97 · Updated 3 months ago
- Docker-based inference engine for AMD GPUs ☆231 · Updated 9 months ago
- Grandmaster-Level Chess Without Search ☆582 · Updated 6 months ago
- Visualize the intermediate output of Mistral 7B ☆366 · Updated 5 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆644 · Updated last month
- Run and explore Llama models locally with minimal dependencies on CPU ☆191 · Updated 9 months ago
- Autograd to GPT-2 completely from scratch ☆114 · Updated 2 months ago
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆100 · Updated 9 months ago