kimbochen / md-blogs
A blog where I write about research papers and blog posts I read.
☆11 · Updated 3 months ago
Alternatives and similar repositories for md-blogs:
Users interested in md-blogs are comparing it to the repositories listed below.
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆123 · Updated last year
- seqax = sequence modeling + JAX ☆145 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆30 · Updated this week
- Minimal but scalable implementation of large language models in JAX ☆32 · Updated 4 months ago
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆101 · Updated last year
- ☆27 · Updated 7 months ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with JAX and Equinox. ☆24 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆99 · Updated 3 months ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆81 · Updated last year
- ☆86 · Updated last year
- ☆58 · Updated 3 years ago
- ☆42 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆76 · Updated last year
- ☆75 · Updated 7 months ago
- Proof of concept of global switching between NumPy/JAX/PyTorch in a library. ☆18 · Updated 8 months ago
- ML/DL math and method notes ☆58 · Updated last year
- Solve puzzles. Learn CUDA. ☆62 · Updated last year
- An implementation of the transformer architecture as an Nvidia CUDA kernel ☆171 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated last month
- PyTorch-centric eager-mode debugger ☆46 · Updated 2 months ago
- Cold Compress is a hackable, lightweight, open-source toolkit for creating and benchmarking cache compression methods, built on top of… ☆119 · Updated 6 months ago
- Learn CUDA with PyTorch ☆18 · Updated last month
- KernelBench: Can LLMs Write GPU Kernels? A benchmark of Torch → CUDA problems ☆214 · Updated this week
- Official repository of Sparse ISO-FLOP Transformations for Maximizing Training Efficiency ☆25 · Updated 7 months ago
- ☆145 · Updated last year
- Experiments toward training a new and improved T5 ☆77 · Updated 10 months ago
- train with kittens! ☆53 · Updated 4 months ago
- Custom Triton kernels for training Karpathy's nanoGPT. ☆17 · Updated 4 months ago
- ☆26 · Updated last month
- A toolkit for scaling-law research ⚖ ☆47 · Updated last month