thu-ml / low-bit-optimizers
Low-bit optimizers for PyTorch
☆128 · Updated last year
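As a rough illustration of what "low-bit optimizer" means in practice (the thu-ml project, per its paper "Memory Efficient Optimizers with 4-bit States", stores Adam moments in 4 bits with block-wise quantization), the sketch below keeps an SGD momentum buffer in int8 with a single per-tensor absmax scale. The class name, the int8 width, and the per-tensor scaling are illustrative assumptions, not this repository's API:

```python
# Illustrative sketch (not thu-ml/low-bit-optimizers' API): keep the momentum
# buffer in int8 with one per-tensor scale, dequantizing only for the update.
import torch

class Int8MomentumSGD(torch.optim.Optimizer):
    """SGD with momentum whose momentum buffer is stored in int8."""

    def __init__(self, params, lr=1e-2, momentum=0.9):
        super().__init__(params, dict(lr=lr, momentum=momentum))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            lr, mu = group["lr"], group["momentum"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "buf_q" not in state:
                    state["buf_q"] = torch.zeros_like(p, dtype=torch.int8)
                    state["scale"] = torch.tensor(1.0, device=p.device)
                # Dequantize the low-bit buffer and apply the usual momentum update...
                buf = state["buf_q"].float() * state["scale"]
                buf.mul_(mu).add_(p.grad)
                # ...then requantize with a fresh per-tensor absmax scale.
                scale = buf.abs().max().clamp(min=1e-12) / 127.0
                state["buf_q"] = torch.clamp(buf / scale, -127, 127).round().to(torch.int8)
                state["scale"] = scale
                p.add_(buf, alpha=-lr)
```

Used as a drop-in replacement, `opt = Int8MomentumSGD(model.parameters(), lr=0.01)` cuts the momentum state to a quarter of its fp32 size at the cost of some quantization error; finer-grained block-wise scales, as in the actual project, reduce that error.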
Alternatives and similar repositories for low-bit-optimizers
Users interested in low-bit-optimizers are comparing it to the libraries listed below.
- 🔥 A minimal training framework for scaling FLA models ☆128 · Updated last week
- ☆128 · Updated 3 months ago
- ☆147 · Updated last year
- Efficient Triton implementation of Native Sparse Attention. ☆144 · Updated last month
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆209 · Updated 8 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆101 · Updated 11 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆121 · Updated 4 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆190 · Updated 3 weeks ago
- ☆220 · Updated 11 months ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆198 · Updated last year
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆103 · Updated last month
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆146 · Updated last month
- QuIP quantization ☆52 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 10 months ago
- Reorder-based post-training quantization for large language models ☆190 · Updated last year
- PB-LLM: Partially Binarized Large Language Models ☆152 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆161 · Updated 10 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆89 · Updated 2 weeks ago
- ☆45 · Updated last year
- ☆103 · Updated last year
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- ☆146 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆116 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆133 · Updated 3 months ago
- Triton implementation of FlashAttention2 that adds custom masks. ☆111 · Updated 9 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆154 · Updated last month
- Triton-based implementation of Sparse Mixture of Experts. ☆214 · Updated 5 months ago
- ☆190 · Updated last week
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆39 · Updated last year