thu-ml / low-bit-optimizers
Low-bit optimizers for PyTorch
⭐128 · Updated last year
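For context, the core idea behind a low-bit optimizer is to keep optimizer state (momentum, Adam moments) in a quantized low-precision format and dequantize it only transiently inside `step()`. Below is a minimal sketch in plain PyTorch using int8 per-tensor quantization of an SGD momentum buffer; `Int8MomentumSGD` and everything in it are illustrative assumptions, not this repository's actual API (the repo itself targets 4-bit optimizer states with more careful block-wise quantization).

```python
# Illustrative sketch only, NOT this repository's API: stores the momentum
# buffer quantized (int8 + one per-tensor scale) and dequantizes it only
# transiently inside step().
import torch


class Int8MomentumSGD(torch.optim.Optimizer):
    """SGD with momentum whose momentum buffer lives in int8 (hypothetical)."""

    def __init__(self, params, lr=1e-2, momentum=0.9):
        super().__init__(params, dict(lr=lr, momentum=momentum))

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            lr, mu = group["lr"], group["momentum"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if not state:
                    # 1 byte per element instead of 4 (fp32), plus one scale.
                    state["q_buf"] = torch.zeros_like(p, dtype=torch.int8)
                    state["scale"] = torch.zeros((), device=p.device)
                # dequantize -> momentum update -> requantize
                buf = state["q_buf"].float() * state["scale"]
                buf.mul_(mu).add_(p.grad)
                scale = buf.abs().max().clamp_(min=1e-12) / 127.0
                state["q_buf"] = torch.round(buf / scale).to(torch.int8)
                state["scale"] = scale
                p.add_(buf, alpha=-lr)
        return loss
```

Compared with plain momentum SGD, which keeps one extra fp32 value per parameter, this sketch shrinks the buffer to roughly a quarter of its size at the cost of a quantization round-trip per step; 4-bit schemes like the ones in this repo push the saving further.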
Alternatives and similar repositories for low-bit-optimizers:
Users interested in low-bit-optimizers are comparing it to the libraries listed below
- ⭐147 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activations for Memory-Efficient FP8 Training · ⭐184 · Updated last week
- 🔥 A minimal training framework for scaling FLA models · ⭐107 · Updated last week
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" · ⭐121 · Updated 3 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" · ⭐123 · Updated 11 months ago
- ⭐122 · Updated 2 months ago
- ⭐219 · Updated 10 months ago
- Efficient Triton implementation of Native Sparse Attention · ⭐139 · Updated 2 weeks ago
- Odysseus: Playground of LLM Sequence Parallelism · ⭐68 · Updated 10 months ago
- XAttention: Block Sparse Attention with Antidiagonal Scoring · ⭐140 · Updated 3 weeks ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… · ⭐100 · Updated 10 months ago
- ⭐143 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts · ⭐210 · Updated 4 months ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 · ⭐197 · Updated last year
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) · ⭐102 · Updated last month
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training · ⭐209 · Updated 8 months ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" · ⭐116 · Updated last year
- Triton implementation of FlashAttention2 that adds Custom Masks · ⭐109 · Updated 8 months ago
- ⭐237 · Updated 11 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models · ⭐63 · Updated 6 months ago
- Simple implementation of Speculative Sampling in NumPy for GPT-2 · ⭐93 · Updated last year
- ⭐102 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM · ⭐159 · Updated 9 months ago
- PyTorch bindings for CUTLASS grouped GEMM · ⭐81 · Updated 5 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models · ⭐279 · Updated 2 months ago
- [ICLR 2025 Oral] Code for the paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" · ⭐94 · Updated last week
- ⭐43 · Updated last year
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM · ⭐67 · Updated 4 months ago
- PB-LLM: Partially Binarized Large Language Models · ⭐151 · Updated last year
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" · ⭐81 · Updated 10 months ago