kimbochen / md-blogs
A blog where I write about research papers and blog posts I read.
☆12 · Updated last year
Alternatives and similar repositories for md-blogs
Users interested in md-blogs are comparing it to the repositories listed below.
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆155 · Updated 2 years ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆71 · Updated this week
- Solve puzzles. Learn CUDA. ☆63 · Updated 2 years ago
- seqax = sequence modeling + JAX ☆170 · Updated 6 months ago
- train with kittens! ☆63 · Updated last year
- ☆47 · Updated 2 years ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆115 · Updated last month
- ☆92 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand. ☆198 · Updated 8 months ago
- Large-scale 4D parallelism pre-training for 🤗 transformers with Mixture of Experts *(still a work in progress)*. ☆86 · Updated 2 years ago
- Experiment using Tangent to autodiff Triton. ☆82 · Updated 2 years ago
- ☆178 · Updated 2 years ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with JAX and Equinox. ☆24 · Updated last year
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7). ☆66 · Updated 10 months ago
- Custom Triton kernels for training Karpathy's nanoGPT. ☆19 · Updated last year
- Custom kernels in the Triton language for accelerating LLMs. ☆27 · Updated last year
- ☆91 · Updated last year
- MoE training for Me and You and maybe other people. ☆335 · Updated last month
- Simple Transformer in JAX. ☆142 · Updated last year
- ☆27 · Updated last year
- Proof of concept of global switching between NumPy/JAX/PyTorch in a library. ☆18 · Updated last year
- A really tiny autograd engine. ☆99 · Updated 8 months ago
- Personal solutions to the Triton Puzzles. ☆20 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆137 · Updated last year
- ☆291 · Updated last year
- A puzzle to learn about prompting. ☆135 · Updated 2 years ago
- nanoGPT-like codebase for LLM training. ☆113 · Updated 3 months ago
- An implementation of the Llama architecture, to instruct and delight. ☆21 · Updated 8 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP. ☆141 · Updated 4 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs. ☆72 · Updated this week