rasbt / pytorch-memory-optim
This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post.
☆ 91 · Updated last year
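The blog post covers PyTorch techniques for cutting training memory, such as mixed-precision training and gradient checkpointing. Below is a minimal illustrative sketch of those two techniques applied to a toy model; it is not the repository's actual code, and the model, batch size, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

device = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16

# Toy stand-in for the LLM / vision-transformer models discussed in the post.
model = nn.Sequential(
    *[nn.Sequential(nn.Linear(1024, 1024), nn.GELU()) for _ in range(8)]
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(32, 1024, device=device)
target = torch.randn(32, 1024, device=device)

optimizer.zero_grad(set_to_none=True)  # release gradient buffers instead of zeroing them in place
with torch.autocast(device_type=device, dtype=amp_dtype):
    # Gradient checkpointing: recompute activations during the backward pass
    # instead of keeping all of them in memory.
    out = checkpoint_sequential(model, 4, x, use_reentrant=False)
    loss = nn.functional.mse_loss(out, target)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```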
Alternatives and similar repositories for pytorch-memory-optim:
Users interested in pytorch-memory-optim are comparing it to the libraries listed below.
- ☆ 76 · Updated 9 months ago
- Collection of autoregressive model implementations · ☆ 85 · Updated 2 months ago
- ☆ 87 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. · ☆ 123 · Updated last month
- CUDA and Triton implementations of Flash Attention with SoftmaxN. · ☆ 68 · Updated 10 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. · ☆ 59 · Updated 2 months ago
- ML/DL math and method notes · ☆ 60 · Updated last year
- ☆ 153 · Updated last year
- ☆ 46 · Updated 5 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs · ☆ 49 · Updated this week
- Triton implementation of the HyperAttention algorithm · ☆ 47 · Updated last year
- ☆ 47 · Updated 7 months ago
- Train, tune, and run inference with the Bamba model · ☆ 88 · Updated 3 months ago
- Fast, modern, memory-efficient, and low-precision PyTorch optimizers · ☆ 90 · Updated 9 months ago
- Implementation of Infini-Transformer in PyTorch · ☆ 110 · Updated 3 months ago
- A set of scripts and notebooks on LLM fine-tuning and dataset creation · ☆ 106 · Updated 6 months ago
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… · ☆ 120 · Updated 8 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. · ☆ 104 · Updated 4 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. · ☆ 45 · Updated 9 months ago
- Experiments with inference on Llama · ☆ 104 · Updated 10 months ago
- ☆ 79 · Updated last year
- Prune transformer layers · ☆ 68 · Updated 10 months ago
- Various transformers for FSDP research · ☆ 37 · Updated 2 years ago
- Experiment using Tangent to autodiff Triton · ☆ 78 · Updated last year
- ☆ 133 · Updated last year
- ☆ 28 · Updated 5 months ago
- Load compute kernels from the Hub · ☆ 115 · Updated last week
- ☆ 92 · Updated last year
- Context manager to profile the forward and backward times of PyTorch's nn.Module · ☆ 84 · Updated last year
- Simple implementation of Speculative Sampling in NumPy for GPT-2. · ☆ 93 · Updated last year