zhuhanqing / APOLLO
APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention
☆265 · Updated 2 weeks ago
Alternatives and similar repositories for APOLLO
Users interested in APOLLO are comparing it to the libraries listed below.
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ☆269 · Updated 3 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆273 · Updated last year
- 🔥 A minimal training framework for scaling FLA models ☆319 · Updated last month
- ☆155 · Updated 10 months ago
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆278 · Updated 9 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆101 · Updated 11 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆251 · Updated 4 months ago
- Low-bit optimizers for PyTorch ☆134 · Updated 2 years ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆276 · Updated 7 months ago
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆154 · Updated 3 weeks ago
- ☆132 · Updated 6 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆222 · Updated 6 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆256 · Updated 5 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆254 · Updated 6 months ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆148 · Updated 9 months ago
- Efficient LLM Inference over Long Sequences ☆393 · Updated 5 months ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆177 · Updated 2 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆82 · Updated last year
- ☆204 · Updated last year
- The official implementation of Self-Play Preference Optimization (SPPO) ☆583 · Updated 10 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆106 · Updated 2 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆397 · Updated 3 months ago
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆158 · Updated 2 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆198 · Updated 2 weeks ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 · Updated last year
- ☆85 · Updated last month
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆445 · Updated 7 months ago
- Triton implementation of FlashAttention2 that adds custom masks. ☆155 · Updated last year
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆119 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆418 · Updated 2 months ago