kimbochen / md-blogs
A blog where I write about research papers and blog posts I read.
☆12 · Updated last year
Alternatives and similar repositories for md-blogs
Users interested in md-blogs are comparing it to the repositories listed below.
- Small scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆154 · Updated 2 years ago
- Solve puzzles. Learn CUDA. ☆63 · Updated 2 years ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆86 · Updated 2 years ago
- Experiment of using Tangent to autodiff Triton ☆81 · Updated last year
- ☆92 · Updated last year
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 9 months ago
- Train with kittens! ☆63 · Updated last year
- seqax = sequence modeling + JAX ☆169 · Updated 5 months ago
- MoE training for Me and You and maybe other people ☆315 · Updated last week
- ☆47 · Updated last year
- ☆91 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 7 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆66 · Updated 3 weeks ago
- Simple Transformer in JAX ☆140 · Updated last year
- ☆27 · Updated last year
- Proof-of-concept of global switching between NumPy/JAX/PyTorch in a library. ☆18 · Updated last year
- Compiling useful links, papers, benchmarks, ideas, etc. ☆46 · Updated 9 months ago
- ☆178 · Updated last year
- ☆13 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 7 months ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆112 · Updated last week
- Ship correct and fast LLM kernels to PyTorch ☆130 · Updated this week
- A really tiny autograd engine ☆98 · Updated 7 months ago
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated last year
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with JAX and Equinox. ☆24 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆141 · Updated 3 months ago
- A set of Python scripts that make your experience on TPU better ☆54 · Updated 3 months ago
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/…) ☆33 · Updated 10 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆181 · Updated 6 months ago