What would you do with 1000 H100s...
☆1,166 · Jan 10, 2024 · Updated 2 years ago
Alternatives and similar repositories for LLM-Training-Puzzles
Users that are interested in LLM-Training-Puzzles are comparing it to the libraries listed below.
- Puzzles for exploring transformers · ☆387 · May 4, 2023 · Updated 2 years ago
- ☆499 · Oct 18, 2024 · Updated last year
- A puzzle to learn about prompting · ☆136 · May 12, 2023 · Updated 2 years ago
- Solve puzzles. Improve your pytorch. · ☆3,993 · Jul 15, 2024 · Updated last year
- Puzzles for learning Triton · ☆2,348 · Mar 18, 2026 · Updated last week
- Solve puzzles. Learn CUDA. · ☆12,007 · Sep 1, 2024 · Updated last year
- Minimalistic 4D-parallelism distributed training framework for education purposes · ☆2,119 · Aug 26, 2025 · Updated 7 months ago
- Annotated version of the Mamba paper · ☆499 · Feb 27, 2024 · Updated 2 years ago
- GPU programming related news and material links · ☆2,060 · Mar 8, 2026 · Updated 2 weeks ago
- A PyTorch native platform for training generative AI models · ☆5,162 · Mar 20, 2026 · Updated last week
- Minimalistic large language model 3D-parallelism training · ☆2,617 · Feb 19, 2026 · Updated last month
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. · ☆6,187 · Aug 22, 2025 · Updated 7 months ago
- Machine Learning Engineering Open Book · ☆17,528 · Mar 16, 2026 · Updated last week
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. · ☆836 · Mar 15, 2026 · Updated last week
- Tile primitives for speedy kernels · ☆3,244 · Mar 17, 2026 · Updated last week
- 🚀 Efficient implementations of state-of-the-art linear attention models · ☆4,692 · Updated this week
- A bibliography and survey of the papers surrounding o1 · ☆1,213 · Nov 16, 2024 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ☆699 · Jan 26, 2026 · Updated 2 months ago
- Implementation of https://srush.github.io/annotated-s4 · ☆514 · Jun 20, 2025 · Updated 9 months ago
- Efficient Triton Kernels for LLM Training · ☆6,242 · Updated this week
- Experiment of using Tangent to autodiff triton · ☆82 · Jan 22, 2024 · Updated 2 years ago
- A simple, performant and scalable Jax LLM! · ☆2,182 · Updated this week
- ☆22 · Apr 22, 2024 · Updated last year
- Development repository for the Triton language and compiler · ☆18,708 · Updated this week
- ☆92 · Jul 5, 2024 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch. · ☆8,078 · Updated this week
- Ring attention implementation with flash attention · ☆998 · Sep 10, 2025 · Updated 6 months ago
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs. · ☆4,752 · Jul 18, 2025 · Updated 8 months ago
- ☆4,110 · Jun 4, 2024 · Updated last year
- Fast and memory-efficient exact attention · ☆22,938 · Updated this week
- Material for gpu-mode lectures · ☆5,865 · Feb 1, 2026 · Updated last month
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. · ☆1,585 · Jan 28, 2026 · Updated last month
- A tiny library for coding with large language models. · ☆1,233 · Jul 10, 2024 · Updated last year
- ☆571 · Jul 11, 2024 · Updated last year
- FlashInfer: Kernel Library for LLM Serving · ☆5,194 · Updated this week
- ☆310 · Updated this week
- Robust recipes to align language models with human and AI preferences · ☆5,535 · Sep 8, 2025 · Updated 6 months ago
- Helpful tools and examples for working with flex-attention · ☆1,161 · Feb 8, 2026 · Updated last month
- Implementation of a Transformer, but completely in Triton · ☆279 · Apr 5, 2022 · Updated 3 years ago