thu-ml / low-bit-optimizers
Low-bit optimizers for PyTorch
☆130 · Updated last year
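A minimal sketch of what "low-bit optimizer" means in practice: the optimizer keeps its state (e.g. Adam/AdamW moments) in quantized form while exposing the standard PyTorch training-loop API. The `lowbit_optim.AdamW4bit` name below is a hypothetical placeholder, not this repository's actual import path.

```python
# Minimal sketch, assuming a drop-in low-bit AdamW.
# `lowbit_optim.AdamW4bit` is a hypothetical placeholder, not the repo's real API.
import torch
import torch.nn as nn

model = nn.Linear(4096, 4096)

# Baseline: torch.optim.AdamW keeps two fp32 state tensors per parameter
# (~8 bytes of optimizer state per parameter, on top of the weights).
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Low-bit variant (hypothetical import): optimizer states are quantized to
# 4 bits, cutting state memory roughly 8x while keeping the same loop.
# from lowbit_optim import AdamW4bit
# opt = AdamW4bit(model.parameters(), lr=1e-4)

x = torch.randn(8, 4096)
loss = model(x).pow(2).mean()
loss.backward()
opt.step()
opt.zero_grad()
```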
Alternatives and similar repositories for low-bit-optimizers
Users interested in low-bit-optimizers are comparing it to the libraries listed below.
- ☆123 · Updated 2 months ago
- ☆153 · Updated 2 years ago
- ☆223 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆102 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆323 · Updated 5 months ago
- 🔥 A minimal training framework for scaling FLA models ☆220 · Updated last month
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- ☆137 · Updated 5 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆221 · Updated last month
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆165 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆149 · Updated last month
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆211 · Updated last year
- Efficient triton implementation of Native Sparse Attention. ☆186 · Updated 2 months ago
- ☆199 · Updated 8 months ago
- ☆106 · Updated last year
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug and play, no complex CUDA kernels ☆106 · Updated 2 years ago
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆128 · Updated 11 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆122 · Updated 6 months ago
- QuIP quantization ☆54 · Updated last year
- ☆147 · Updated 2 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆72 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆119 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆167 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. ☆230 · Updated 8 months ago
- Linear Attention Sequence Parallelism (LASP) ☆85 · Updated last year
- ☆83 · Updated 6 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆91 · Updated 8 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆109 · Updated 4 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆212 · Updated 11 months ago