thu-ml / low-bit-optimizers
Low-bit optimizers for PyTorch
☆129 · Updated last year
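For context on what the repository's name refers to: low-bit optimizers reduce training memory by storing optimizer states (e.g. Adam's moment estimates) in a few bits instead of fp32. The sketch below is only a rough illustration of that idea, keeping a per-tensor 8-bit second moment; the class name and the simple linear quantization scheme are assumptions for illustration, not the actual API or algorithm of thu-ml/low-bit-optimizers.

```python
import torch

class ToyLowBitAdam(torch.optim.Optimizer):
    """Toy Adam variant that keeps the second-moment state in uint8 (illustrative only)."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
        super().__init__(params, dict(lr=lr, betas=betas, eps=eps))

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            lr, (b1, b2), eps = group["lr"], group["betas"], group["eps"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if not state:
                    state["step"] = 0
                    state["m"] = torch.zeros_like(p)                       # first moment, kept in fp32 here
                    state["v_q"] = torch.zeros_like(p, dtype=torch.uint8)  # 8-bit second moment
                    state["v_scale"] = 0.0                                 # per-tensor dequantization scale
                state["step"] += 1
                t = state["step"]
                # Dequantize the stored second moment, update it, then re-quantize.
                v = state["v_q"].float() * state["v_scale"]
                m = state["m"].mul_(b1).add_(p.grad, alpha=1 - b1)
                v = v.mul_(b2).addcmul_(p.grad, p.grad, value=1 - b2)
                state["v_scale"] = max((v.max() / 255).item(), 1e-12)
                state["v_q"] = (v / state["v_scale"]).round().clamp_(0, 255).to(torch.uint8)
                # Bias-corrected Adam update using the freshly dequantized state.
                m_hat = m / (1 - b1 ** t)
                v_hat = v / (1 - b2 ** t)
                p.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)
```

Practical low-bit optimizers typically use block-wise scales and non-uniform quantization maps rather than a single linear per-tensor scale, but the dequantize-update-requantize cycle above is the core pattern.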
Alternatives and similar repositories for low-bit-optimizers
Users interested in low-bit-optimizers are comparing it to the libraries listed below.
- 🔥 A minimal training framework for scaling FLA models ☆178 · Updated 2 weeks ago
- ☆130 · Updated 4 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated last year
- ☆151 · Updated 2 years ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆101 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆210 · Updated last week
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆121 · Updated 5 months ago
- ☆114 · Updated 3 weeks ago
- ☆223 · Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆163 · Updated last year
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆166 · Updated this week
- ☆45 · Updated last year
- PB-LLM: Partially Binarized Large Language Models ☆152 · Updated last year
- ☆147 · Updated 2 years ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆81 · Updated 6 months ago
- ☆126 · Updated last year
- ☆105 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆133 · Updated last year
- Reorder-based post-training quantization for large language models ☆191 · Updated 2 years ago
- Efficient Triton implementation of Native Sparse Attention ☆168 · Updated last month
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆210 · Updated 10 months ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆117 · Updated last year
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆207 · Updated last year
- Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆36 · Updated 3 weeks ago
- The official code for Dropping Backward Propagation (DropBP) ☆30 · Updated 7 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆137 · Updated last month
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆106 · Updated 3 months ago
- QuIP quantization ☆54 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆70 · Updated last year
- Work in progress. ☆69 · Updated 2 weeks ago