kimbochen / md-blogs
A blog where I write about research papers and blog posts I read.
☆12 · Updated 8 months ago
Alternatives and similar repositories for md-blogs
Users interested in md-blogs are comparing it to the repositories listed below.
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆137 · Updated last year
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- seqax = sequence modeling + JAX ☆165 · Updated 3 weeks ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆50 · Updated this week
- ☆83 · Updated last year
- Custom Triton kernels for training Karpathy's nanoGPT. ☆19 · Updated 9 months ago
- Simple Transformer in JAX ☆138 · Updated last year
- A puzzle to learn about prompting ☆132 · Updated 2 years ago
- Proof-of-concept of global switching between NumPy/JAX/PyTorch in a library. ☆18 · Updated last year
- Learn CUDA with PyTorch ☆35 · Updated 3 weeks ago
- Train with kittens! ☆62 · Updated 9 months ago
- ☆21 · Updated last year
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with JAX and Equinox. ☆24 · Updated 10 months ago
- PyTorch Single Controller ☆348 · Updated this week
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 4 months ago
- ☆88 · Updated last year
- ☆27 · Updated last year
- ☆45 · Updated last year
- ☆162 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 2 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆189 · Updated 2 months ago
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 8 months ago
- Functional local implementations of main model parallelism approaches ☆96 · Updated 2 years ago
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 3 weeks ago
- Mixed-precision training from scratch with Tensors and CUDA ☆24 · Updated last year
- ☆27 · Updated last year
- The simplest implementation of recent sparse attention patterns for efficient LLM inference. ☆83 · Updated 3 weeks ago