rasbt / pytorch-memory-optim
This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post.
☆92 · Updated last year
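To give a flavor of what the blog post covers, below is a minimal sketch (not code from this repository) combining two standard PyTorch memory-saving techniques for training: automatic mixed precision and gradient checkpointing. The toy MLP, sizes, and hyperparameters are illustrative placeholders, and a CUDA device is assumed.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Toy 8-layer MLP standing in for a transformer block stack (placeholder model).
model = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(8)]).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # rescales the loss so fp16 gradients don't underflow

x = torch.randn(32, 1024, device="cuda")
target = torch.randn(32, 1024, device="cuda")

optimizer.zero_grad()
with torch.autocast(device_type="cuda", dtype=torch.float16):  # mixed-precision forward pass
    h = x
    for layer in model:
        # Gradient checkpointing: discard this layer's activations and
        # recompute them during backward, trading compute for memory.
        h = checkpoint(layer, h, use_reentrant=False)
    loss = nn.functional.mse_loss(h, target)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```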
Alternatives and similar repositories for pytorch-memory-optim
Users interested in pytorch-memory-optim are comparing it to the libraries listed below.
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆56 · Updated last week
- Fast, Modern, and Low Precision PyTorch Optimizers ☆94 · Updated this week
- ☆78 · Updated 11 months ago
- Experiments with inference on Llama ☆104 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆122 · Updated 10 months ago
- LoRA and DoRA from Scratch Implementations (see the LoRA sketch after this list) ☆204 · Updated last year
- Various transformers for FSDP research ☆37 · Updated 2 years ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated last year
- ☆47 · Updated 9 months ago
- Load compute kernels from the Hub ☆191 · Updated this week
- ☆159 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆165 · Updated 4 months ago
- ☆40 · Updated last year
- ML/DL Math and Method notes ☆61 · Updated last year
- ☆88 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆70 · Updated last week
- ☆68 · Updated 11 months ago
- ☆47 · Updated 7 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆70 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 6 months ago
- DPO, but faster 🚀 ☆43 · Updated 6 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆152 · Updated 3 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆111 · Updated 8 months ago
- ☆49 · Updated last year
- ☆193 · Updated 4 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆119 · Updated 8 months ago
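As referenced at the "LoRA and DoRA from Scratch Implementations" entry above, here is a minimal LoRA layer sketch in PyTorch, assuming the standard formulation h = Wx + (alpha/r)·BAx with a frozen base weight. The class name, rank, and alpha values are illustrative placeholders, not code from that repository.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical example: a frozen linear layer with a trainable low-rank update."""

    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        self.base.bias.requires_grad_(False)
        # A is small-random, B is zero-initialized, so the update starts at zero
        # and the layer initially behaves exactly like the frozen base layer.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(512, 512)
out = layer(torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 512]); only lora_a and lora_b receive gradients
```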