zhuhanqing / APOLLO
APOLLO: SGD-like Memory, AdamW-level Performance
☆232 · Updated last month
Alternatives and similar repositories for APOLLO
Users interested in APOLLO are comparing it to the libraries listed below.
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 (☆208, updated 2 months ago)
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding (☆251, updated 9 months ago)
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations (☆223, updated 8 months ago)
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training (☆203, updated 2 weeks ago)
- The official implementation of Self-Play Preference Optimization (SPPO) (☆563, updated 4 months ago)
- 🔥 A minimal training framework for scaling FLA models (☆146, updated 3 weeks ago)
- XAttention: Block Sparse Attention with Antidiagonal Scoring (☆158, updated 3 weeks ago)
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models (☆258, updated 2 months ago)
- Efficient Triton implementation of Native Sparse Attention (☆155, updated last week)
- ☆195, updated 3 weeks ago
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction (☆89, updated 7 months ago)
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference (☆193, updated last month)
- A sparse attention kernel supporting mixed sparse patterns (☆219, updated 3 months ago)
- ☆129, updated 3 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models (☆1,098, updated 5 months ago)
- Supports mixed-precision inference with vLLM (☆83, updated 4 months ago)
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection (☆116, updated 3 months ago)
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM (☆74, updated 5 months ago)
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs (☆102, updated this week)
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs (☆85, updated this week)
- Low-bit optimizers for PyTorch (☆128, updated last year)
- [ICLR 2025 Oral] Code for the paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" (☆107, updated 2 weeks ago)
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization (☆69, updated 4 months ago)
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… (☆101, updated 11 months ago)
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? (☆107, updated 7 months ago)
- The training code of ParetoQ, introduced in the work "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization" (☆64, updated this week)
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization (☆136, updated last week)
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" (☆128, updated last week)
- Triton-based implementation of Sparse Mixture of Experts (☆216, updated 6 months ago)
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs (☆166, updated last week)