zhuhanqing / APOLLO
APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention
⭐258 · Updated 6 months ago
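For orientation, APOLLO's headline claim is that it approximates AdamW's channel-wise gradient scaling while keeping the optimizer moments in a small rank-r space obtained by random projection, which is how it gets to SGD-like memory. The sketch below is a minimal illustration of that idea, not the official implementation: the class name `ApolloLikeOptimizer`, the hyperparameters, and the per-channel scaling heuristic are all assumptions made for the example.

```python
import torch


class ApolloLikeOptimizer:
    """Minimal sketch (NOT the official APOLLO code): AdamW-style moments
    are kept in a rank-r projected space and used only to derive a
    per-channel scaling of the raw gradient, so moment memory is O(r * cols)
    per weight matrix instead of O(rows * cols)."""

    def __init__(self, params, lr=1e-3, rank=4, betas=(0.9, 0.999), eps=1e-8):
        self.params = list(params)
        self.lr, self.betas, self.eps = lr, betas, eps
        self.state = {}
        for p in self.params:
            if p.ndim != 2:
                continue  # the sketch handles matrices only; others fall back to SGD
            # Fixed random projection P of shape (rank, rows); illustrative choice.
            proj = torch.randn(rank, p.shape[0], device=p.device) / rank ** 0.5
            self.state[p] = {
                "proj": proj,
                "m": torch.zeros(rank, p.shape[1], device=p.device),  # 1st moment
                "v": torch.zeros(rank, p.shape[1], device=p.device),  # 2nd moment
                "t": 0,
            }

    @torch.no_grad()
    def step(self):
        b1, b2 = self.betas
        for p in self.params:
            if p.grad is None:
                continue
            if p not in self.state:  # non-matrix params: plain SGD step
                p.add_(p.grad, alpha=-self.lr)
                continue
            s = self.state[p]
            s["t"] += 1
            g_low = s["proj"] @ p.grad  # compress the gradient to rank r
            s["m"].mul_(b1).add_(g_low, alpha=1 - b1)
            s["v"].mul_(b2).addcmul_(g_low, g_low, value=1 - b2)
            m_hat = s["m"] / (1 - b1 ** s["t"])
            v_hat = s["v"] / (1 - b2 ** s["t"])
            adam_low = m_hat / (v_hat.sqrt() + self.eps)
            # Per-channel scale: how much an Adam-style update would stretch
            # each column, measured in the low-rank space, then applied to
            # the full-rank gradient (the memory-saving trick).
            scale = adam_low.norm(dim=0) / (g_low.norm(dim=0) + self.eps)
            p.add_(p.grad * scale, alpha=-self.lr)
```

On a toy problem this behaves like SGD whose per-channel step sizes track an Adam-style adaptation while storing only rank-r moments; the real APOLLO repository covers the details this sketch omits, such as how the projections are constructed and refreshed during training.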
Alternatives and similar repositories for APOLLO
Users interested in APOLLO are comparing it to the libraries listed below.
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ⭐261 · Updated 2 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ⭐270 · Updated last year
- 🔥 A minimal training framework for scaling FLA models ⭐308 · Updated last week
- ⭐148 · Updated 9 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ⭐206 · Updated 5 months ago
- ⭐130 · Updated 5 months ago
- Efficient Triton implementation of Native Sparse Attention. ⭐248 · Updated 5 months ago
- Low-bit optimizers for PyTorch ⭐132 · Updated 2 years ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ⭐254 · Updated 4 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ⭐272 · Updated 6 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ⭐249 · Updated 3 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ⭐98 · Updated 11 months ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ⭐146 · Updated 9 months ago
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ⭐154 · Updated last month
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ⭐150 · Updated 4 months ago
- The evaluation framework for training-free sparse attention in LLMs ⭐103 · Updated last month
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ⭐171 · Updated last month
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ⭐175 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ⭐80 · Updated last year
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ⭐275 · Updated 8 months ago
- ⭐254 · Updated 5 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ⭐194 · Updated last month
- Efficient LLM Inference over Long Sequences ⭐390 · Updated 4 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ⭐164 · Updated 3 weeks ago
- Fast and memory-efficient exact attention ⭐74 · Updated 8 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ⭐376 · Updated 2 months ago
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. The official implementation of https://ar… ⭐29 · Updated 9 months ago
- [ICML 2024] Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ⭐37 · Updated 9 months ago
- ⭐918 · Updated this week
- Normalized Transformer (nGPT) ⭐192 · Updated last year