zhuhanqing / APOLLO
APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention
☆267 Updated 2 months ago
Alternatives and similar repositories for APOLLO
Users interested in APOLLO are comparing it to the libraries listed below.
- Low-bit optimizers for PyTorch ☆137 Updated 2 years ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆276 Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆229 Updated 7 months ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ☆275 Updated 5 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆261 Updated 8 months ago
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆284 Updated 10 months ago
- 🔥 A minimal training framework for scaling FLA models ☆341 Updated 2 months ago
- ☆158 Updated 11 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆251 Updated last year
- ☆133 Updated 8 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆258 Updated 5 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆203 Updated last month
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆104 Updated last year
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆119 Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆110 Updated 3 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆280 Updated 8 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆451 Updated 8 months ago
- ☆234 Updated last year
- ☆163 Updated 7 months ago
- ☆85 Updated 2 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆161 Updated 3 months ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆153 Updated 11 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆428 Updated 4 months ago
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ☆240 Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 Updated last year
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆187 Updated 4 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆339 Updated 11 months ago
- ☆269 Updated 7 months ago
- Triton implementation of FlashAttention2 that adds custom masks. ☆163 Updated last year