rasbt / pytorch-memory-optim
This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post.
☆92 · Updated 2 years ago
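The post surveys memory-saving techniques such as mixed-precision training, gradient accumulation, and gradient checkpointing. As one illustration, here is a minimal sketch of mixed-precision training with `torch.cuda.amp`; this is an illustrative example rather than code taken from the repository, the toy model and hyperparameters are invented, and a CUDA device is assumed.

```python
# Illustrative sketch only, not code from this repository. Mixed-precision
# training keeps activations in float16, roughly halving their memory
# footprint; the GradScaler guards against float16 gradient underflow.
# Assumes a CUDA device; the toy model and hyperparameters are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 10)).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()

inputs = torch.randn(64, 512, device="cuda")
targets = torch.randint(0, 10, (64,), device="cuda")

for step in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():      # forward pass runs in mixed precision
        loss = F.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()        # backward on the scaled loss
    scaler.step(optimizer)               # unscales gradients, then steps
    scaler.update()                      # adjusts the scale factor
```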
Alternatives and similar repositories for pytorch-memory-optim
Users interested in pytorch-memory-optim are comparing it to the repositories listed below.
- Collection of autoregressive model implementations ☆85 · Updated 3 weeks ago
- ML/DL Math and Method notes ☆66 · Updated 2 years ago
- ☆92 · Updated last year
- Various transformers for FSDP research ☆38 · Updated 3 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆72 · Updated 3 weeks ago
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ☆126 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆170 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆73 · Updated last year
- Experiment using Tangent to autodiff Triton ☆82 · Updated 2 years ago
- Code for NeurIPS LLM Efficiency Challenge ☆60 · Updated last year
- A collection of lightweight interpretability scripts to understand how LLMs think ☆89 · Updated 2 weeks ago
- LoRA and DoRA from Scratch Implementations ☆215 · Updated last year
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets ☆160 · Updated last year
- ☆82 · Updated last year
- ☆178 · Updated 2 years ago
- ☆48 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆196 · Updated last year
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆86 · Updated 2 years ago
- A really tiny autograd engine ☆99 · Updated 8 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆198 · Updated 8 months ago
- Experiments with inference on Llama ☆103 · Updated last year
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆86 · Updated 2 years ago
- Utilities for Training Very Large Models ☆58 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆46 · Updated last year
- ☆53 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- Context manager to profile the forward and backward times of PyTorch's nn.Module (see the sketch after this list) ☆83 · Updated 2 years ago
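The idea behind that last entry is simple enough to sketch. Below is a hypothetical context manager, not the linked repository's actual API, that attaches forward and backward hooks to an nn.Module and accumulates wall-clock times; the name `profile_module` is invented for illustration.

```python
# Hypothetical sketch, not the linked repository's API: time an nn.Module's
# forward and backward passes with PyTorch hooks (requires PyTorch >= 2.0
# for register_full_backward_pre_hook). Wall-clock times only; accurate GPU
# timing would also need torch.cuda.synchronize() around each measurement.
import time
from contextlib import contextmanager

import torch
import torch.nn as nn


@contextmanager
def profile_module(module: nn.Module):
    times = {"forward": 0.0, "backward": 0.0}
    start = {}

    def fwd_pre(mod, inputs):
        start["fwd"] = time.perf_counter()

    def fwd_post(mod, inputs, output):
        times["forward"] += time.perf_counter() - start.pop("fwd")

    def bwd_pre(mod, grad_output):
        start["bwd"] = time.perf_counter()

    def bwd_post(mod, grad_input, grad_output):
        times["backward"] += time.perf_counter() - start.pop("bwd")

    handles = [
        module.register_forward_pre_hook(fwd_pre),
        module.register_forward_hook(fwd_post),
        module.register_full_backward_pre_hook(bwd_pre),
        module.register_full_backward_hook(bwd_post),
    ]
    try:
        yield times
    finally:
        for h in handles:  # always detach the hooks, even on error
            h.remove()


# Usage: times accumulate across however many passes run inside the block.
model = nn.Linear(128, 64)
x = torch.randn(32, 128, requires_grad=True)
with profile_module(model) as t:
    model(x).sum().backward()
print(t)  # {'forward': ..., 'backward': ...}
```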