kimbochen / md-blogs
A blog where I write about research papers and blog posts I read.
☆12 · Updated 5 months ago
Alternatives and similar repositories for md-blogs
Users interested in md-blogs are comparing it to the repositories listed below.
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆132 · Updated last year
- Large-scale 4D-parallelism pre-training for 🤗 transformers with Mixture of Experts *(still a work in progress)* ☆82 · Updated last year
- ☆43 · Updated last year
- An experiment in using Tangent to autodiff Triton ☆78 · Updated last year
- Learn CUDA with PyTorch ☆20 · Updated 3 months ago
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated last month
- ML/DL Math and Method notes ☆60 · Updated last year
- ☆79 · Updated 10 months ago
- train with kittens! ☆57 · Updated 6 months ago
- Proof-of-concept of global switching between numpy/jax/pytorch in a library. ☆18 · Updated 10 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆44 · Updated this week
- A place to store reusable transformer components of my own creation or found on the interwebs ☆55 · Updated this week
- ☆17 · Updated last year
- Simple Transformer in JAX ☆136 · Updated 10 months ago
- seqax = sequence modeling + JAX ☆155 · Updated last month
- ☆27 · Updated 10 months ago
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- Making the official Triton tutorials actually comprehensible ☆30 · Updated last month
- A really tiny autograd engine ☆92 · Updated last year
- gzip Predicts Data-dependent Scaling Laws ☆35 · Updated 11 months ago
- Supporting PyTorch FSDP for optimizers ☆80 · Updated 5 months ago
- An introduction to LLM Sampling ☆78 · Updated 5 months ago
- Collection of autoregressive model implementations ☆85 · Updated 3 weeks ago
- ☆61 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆65 · Updated 3 weeks ago
- ☆38 · Updated 9 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆180 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆120 · Updated last week
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes; feel free to rip. ☆44 · Updated last year
- ☆21 · Updated 2 months ago