thu-ml / low-bit-optimizers
Low-bit optimizers for PyTorch
☆138 Updated 2 years ago
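The idea behind the repository is to store optimizer state (e.g. Adam's moment estimates) in a handful of bits with per-block scales, dequantizing on the fly at each step. The sketch below illustrates that idea with 8-bit blockwise quantization in plain PyTorch; it is a simplified stand-in, not this repository's actual API (the project itself targets 4-bit states).

```python
# Illustrative sketch only: blockwise 8-bit quantization of optimizer state,
# the core mechanism behind low-bit optimizers. This is NOT the repository's
# API; the real project stores 4-bit states with more careful quantization maps.
import torch

def quantize_blockwise(x: torch.Tensor, block_size: int = 256):
    """Symmetric int8 quantization with one absmax scale per block."""
    flat = x.detach().float().flatten()
    pad = (-flat.numel()) % block_size
    flat = torch.cat([flat, flat.new_zeros(pad)])      # pad to whole blocks
    blocks = flat.view(-1, block_size)
    scale = blocks.abs().amax(dim=1, keepdim=True).clamp_min(1e-12)
    codes = torch.round(blocks / scale * 127).to(torch.int8)
    return codes, scale, x.shape, pad

def dequantize_blockwise(codes, scale, shape, pad):
    flat = (codes.float() / 127 * scale).flatten()
    if pad:
        flat = flat[:-pad]
    return flat.view(shape)

# One SGD-with-momentum step whose momentum buffer lives in int8 between steps:
param = torch.randn(1000)
grad = torch.randn_like(param)
state = quantize_blockwise(torch.zeros_like(param))    # compressed momentum

m = dequantize_blockwise(*state)                       # decompress state
m = 0.9 * m + grad                                     # momentum update
param -= 1e-2 * m                                      # parameter update
state = quantize_blockwise(m)                          # re-compress state
```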
Alternatives and similar repositories for low-bit-optimizers
Users interested in low-bit-optimizers are comparing it to the libraries listed below
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention… ☆104 Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆258 Updated 6 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆233 Updated 7 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆176 Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆340 Updated 11 months ago
- PB-LLM: Partially Binarized Large Language Models ☆157 Updated 2 years ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 Updated last year
- Get down and dirty with FlashAttention 2.0 in PyTorch: plug and play, no complex CUDA kernels ☆112 Updated 2 years ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆220 Updated 2 years ago
- Triton implementation of FlashAttention2 that adds custom masks; the reference semantics are sketched after this list ☆167 Updated last year
- QuIP quantization ☆61 Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆123 Updated last year
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind; see the toy sketch after this list ☆106 Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆177 Updated last year
- [ICLR2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆105 Updated last year
- Triton-based implementation of Sparse Mixture of Experts ☆263 Updated 4 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge ☆85 Updated 2 years ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 Updated 9 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆126 Updated last year
- Reorder-based post-training quantization for large language models ☆199 Updated 2 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆79 Updated last year
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆117 Updated 2 weeks ago
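For the custom-mask FlashAttention2 entry above, the semantics that a fused Triton kernel has to reproduce can be stated in a few lines with PyTorch's built-in scaled_dot_product_attention. The tensor shapes and the causal-plus-local-window mask below are illustrative only, not tied to that repository:

```python
# Reference semantics that a custom-mask FlashAttention kernel must reproduce:
# PyTorch's built-in SDPA with an explicit boolean mask (True = attend).
import torch
import torch.nn.functional as F

B, H, S, D = 2, 4, 128, 64
q = torch.randn(B, H, S, D)
k = torch.randn(B, H, S, D)
v = torch.randn(B, H, S, D)

# Example custom mask: causal attention restricted to a 32-token local window.
i = torch.arange(S).unsqueeze(1)
j = torch.arange(S).unsqueeze(0)
mask = (j <= i) & (i - j < 32)          # (S, S), broadcasts over batch/heads

out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
print(out.shape)                        # torch.Size([2, 4, 128, 64])
```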
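And for the Speculative Sampling entry, the accept/reject rule from the DeepMind paper fits in a short toy sketch. The two "models" here are stand-in categorical distributions over a tiny vocabulary, not real language models:

```python
# Toy sketch of the accept/reject rule from "Accelerating Large Language Model
# Decoding with Speculative Sampling" (DeepMind). Stand-in distributions only.
import torch

torch.manual_seed(0)
VOCAB = 6

def target_probs(ctx):   # stand-in for the large target model p(. | ctx)
    return torch.softmax(torch.randn(VOCAB), dim=0)

def draft_probs(ctx):    # stand-in for the small draft model q(. | ctx)
    return torch.softmax(torch.randn(VOCAB), dim=0)

def speculative_step(ctx, k=4):
    """Draft k tokens cheaply, then verify them against the target model."""
    out = list(ctx)
    drafts, qs = [], []
    for _ in range(k):                       # autoregressive drafting
        q = draft_probs(out)
        t = int(torch.multinomial(q, 1))
        drafts.append(t)
        qs.append(q)
        out.append(t)
    out = list(ctx)
    for t, q in zip(drafts, qs):             # verification (done per position
        p = target_probs(out)                # here; batched in practice)
        if torch.rand(1).item() < min(1.0, float(p[t] / q[t])):
            out.append(t)                    # accept the draft token
        else:                                # reject: resample from max(0, p-q)
            residual = (p - q).clamp(min=0)
            out.append(int(torch.multinomial(residual / residual.sum(), 1)))
            return out                       # stop at the first rejection
    out.append(int(torch.multinomial(target_probs(out), 1)))  # free extra token
    return out

print(speculative_step([0], k=4))
```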