kimbochen / md-blogs
A blog where I write about research papers and blog posts I read.
☆12 · Updated 7 months ago
Alternatives and similar repositories for md-blogs
Users interested in md-blogs are comparing it to the repositories listed below.
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI ☆134 · Updated last year
- ML/DL math and method notes ☆61 · Updated last year
- ☆44 · Updated last year
- ☆27 · Updated 11 months ago
- PTX-Tutorial, written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 3 months ago
- A really tiny autograd engine ☆94 · Updated last month
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆46 · Updated this week
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆84 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆79 · Updated last year
- Learn CUDA with PyTorch ☆27 · Updated this week
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- train with kittens! ☆60 · Updated 8 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆56 · Updated last week
- ☆78 · Updated 11 months ago
- The simplest implementation of recent sparse attention patterns for efficient LLM inference ☆77 · Updated 2 weeks ago
- seqax = sequence modeling + JAX ☆162 · Updated 2 weeks ago
- ☆159 · Updated last year
- Experiments in an effort to train a new and improved T5 ☆77 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆139 · Updated this week
- PyTorch-centric eager-mode debugger ☆47 · Updated 6 months ago
- ☆37 · Updated last year
- Proof of concept of global switching between NumPy/JAX/PyTorch in a library ☆18 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 7 months ago
- A reading list of relevant papers and projects on foundation-model annotation ☆27 · Updated 4 months ago
- In this repository, I implement increasingly complex LLM inference optimizations ☆61 · Updated last month
- ☆88 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 3 weeks ago
- ☆22 · Updated last year
- ☆20 · Updated last year
- Mixed-precision training from scratch with Tensors and CUDA ☆24 · Updated last year