EurekaLabsAI / micrograd
The Autograd Engine
☆583 · Updated 6 months ago
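micrograd implements reverse-mode automatic differentiation over a scalar-valued computation graph. The sketch below illustrates the core idea only; it is not the repository's actual code (the `Value` class name follows Karpathy's original micrograd, and everything else here is an illustrative assumption):

```python
class Value:
    """A scalar that records the ops applied to it so gradients
    can be back-propagated with reverse-mode autodiff."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # how to push grad to children
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topological order over the computation graph,
        # then apply the chain rule node by node in reverse
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()
```

For example, with `a = Value(2.0)`, `b = Value(-3.0)` and `c = a * b + a`, calling `c.backward()` gives `a.grad == -2.0` (i.e. `b + 1`) and `b.grad == 2.0` (i.e. `a`).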
Alternatives and similar repositories for micrograd:
Users interested in micrograd are comparing it to the repositories listed below.
- The Multilayer Perceptron Language Model ☆543 · Updated 7 months ago
- The Tensor (or Array) ☆427 · Updated 7 months ago
- The n-gram Language Model ☆1,402 · Updated 7 months ago
- UNet diffusion model in pure CUDA ☆600 · Updated 8 months ago
- nanoGPT-style version of Llama 3.1 ☆1,341 · Updated 7 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆948 · Updated 2 weeks ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆783 · Updated 2 weeks ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆306 · Updated last week
- Simple byte pair encoding (BPE) tokenizer, written purely in C ☆129 · Updated 4 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆269 · Updated 9 months ago
- Alex Krizhevsky's original code from Google Code ☆190 · Updated 9 years ago
- An ML Systems Onboarding list ☆734 · Updated last month
- A nice, effective, super-simple calorie counter web app ☆94 · Updated 9 months ago
- This repository is a curated collection of resources, tutorials, and practical examples designed to guide you through the journey of mast… ☆302 · Updated last month
- (WIP) A small but powerful, homemade PyTorch from scratch ☆537 · Updated this week
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆145 · Updated 9 months ago
- Tutorials on tinygrad ☆355 · Updated 3 weeks ago
- ☆136 · Updated 2 months ago
- NanoGPT (124M) in 3 minutes ☆2,403 · Updated this week
- Learnings and programs related to CUDA ☆328 · Updated last month
- Distributed Training Over-The-Internet ☆888 · Updated 3 months ago
- Small autograd engine inspired by Karpathy's micrograd and PyTorch ☆250 · Updated 4 months ago
- Following master Karpathy with a GPT-2 implementation and training run, writing lots of comments because I have the memory of a goldfish ☆170 · Updated 7 months ago
- prime is a framework for efficient, globally distributed training of AI models over the internet ☆678 · Updated this week
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆855 · Updated last month
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆307 · Updated 3 months ago
- Puzzles for learning Triton ☆1,508 · Updated 4 months ago