kimbochen / md-blogs
A blog where I write about research papers and blog posts I read.
☆12 · Updated 11 months ago
Alternatives and similar repositories for md-blogs
Users interested in md-blogs are comparing it to the repositories listed below.
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆145 · Updated 2 years ago
- PTX-Tutorial, written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 6 months ago
- ☆89 · Updated last year
- seqax = sequence modeling + JAX ☆167 · Updated 2 months ago
- Large-scale 4D-parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆87 · Updated last year
- ☆91 · Updated last year
- ☆46 · Updated last year
- Proof-of-concept of global switching between numpy/jax/pytorch in a library. ☆18 · Updated last year
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆194 · Updated 4 months ago
- An experiment in using Tangent to autodiff Triton ☆80 · Updated last year
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 7 months ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with JAX and Equinox. ☆24 · Updated last year
- ☆174 · Updated last year
- train with kittens! ☆63 · Updated 11 months ago
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated this week
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆131 · Updated last month
- Custom Triton kernels for training Karpathy's nanoGPT. ☆19 · Updated 11 months ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆103 · Updated 3 weeks ago
- An implementation of the transformer architecture as an Nvidia CUDA kernel ☆190 · Updated 2 years ago
- Learn CUDA with PyTorch ☆87 · Updated 3 weeks ago
- ☆28 · Updated last year
- ML/DL math and method notes ☆64 · Updated last year
- A set of Python scripts that make your experience on TPU better ☆54 · Updated last month
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 ☆131 · Updated 10 months ago
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated last year
- ☆21 · Updated 9 months ago
- ☆77 · Updated this week