HazyResearch / aisys-building-blocks
Building blocks for foundation models.
☆435 · Updated last year
Alternatives and similar repositories for aisys-building-blocks:
Users interested in aisys-building-blocks are comparing it to the libraries listed below.
- Puzzles for learning Triton ☆1,300 · Updated last month
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆505 · Updated 2 months ago
- What would you do with 1000 H100s... ☆948 · Updated last year
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆644 · Updated this week
- A bibliography and survey of the papers surrounding o1 ☆1,042 · Updated 2 months ago
- Annotated version of the Mamba paper ☆469 · Updated 10 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆492 · Updated 2 months ago
- Helpful tools and examples for working with flex-attention ☆583 · Updated this week
- Pipeline Parallelism for PyTorch ☆736 · Updated 4 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆534 · Updated this week
- GPU programming related news and material links ☆1,312 · Updated last week
- Puzzles for exploring transformers ☆331 · Updated last year
- Cataloging released Triton kernels. ☆155 · Updated last week
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆253 · Updated last year
- Large Context Attention ☆670 · Updated 5 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,669 · Updated this week
- An ML Systems Onboarding list ☆647 · Updated 2 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆752 · Updated this week
- LLM KV cache compression made easy ☆303 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆1,386 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆215 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆219 · Updated 5 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆114 · Updated last year
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆681 · Updated 2 weeks ago
- Applied AI experiments and examples for PyTorch ☆211 · Updated this week