zhuhanqing / APOLLO
APOLLO: SGD-like Memory, AdamW-level Performance
⭐195 · Updated 3 weeks ago
Alternatives and similar repositories for APOLLO:
Users interested in APOLLO are comparing it to the libraries listed below; a minimal sketch of the low-rank-optimizer idea several of them share follows the list.
- [ICLR 2025 🔥] SVD-LLM & [NAACL 2025 🔥] SVD-LLM V2 ⭐189 · Updated last week
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ⭐242 · Updated 6 months ago
- The official implementation of Self-Play Preference Optimization (SPPO) ⭐508 · Updated 2 months ago
- ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ⭐153 · Updated 4 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models ⭐956 · Updated 2 months ago
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ⭐218 · Updated 5 months ago
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ⭐249 · Updated 2 weeks ago
- [ICLR 2025] BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments ⭐36 · Updated last month
- Recipes to train the self-rewarding reasoning LLMs. ⭐207 · Updated 3 weeks ago
- ⭐118 · Updated last month
- Fast and memory-efficient exact attention ⭐67 · Updated 3 weeks ago
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction ⭐83 · Updated 4 months ago
- Low-bit optimizers for PyTorch ⭐125 · Updated last year
- The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models ⭐579 · Updated last month
- Codebase for Iterative DPO Using Rule-based Rewards ⭐227 · Updated last month
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ⭐91 · Updated last month
- XAttention: Block Sparse Attention with Antidiagonal Scoring ⭐102 · Updated this week
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ⭐65 · Updated 3 months ago
- 🔥 A minimal training framework for scaling FLA models ⭐82 · Updated last week
- Efficient Triton implementation of Native Sparse Attention. ⭐116 · Updated this week
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ⭐168 · Updated last month
- adds Sequence Parallelism into LLaMA-Factory ⭐432 · Updated this week
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ⭐133 · Updated last week
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning… ⭐162 · Updated 3 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ⭐85 · Updated 10 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ⭐157 · Updated 8 months ago
- Mixed precision inference by TensorRT-LLM ⭐79 · Updated 5 months ago
- Support mixed-precision inference with vLLM ⭐80 · Updated 2 months ago
- The nanoGPT-style implementation of RWKV Language Model - an RNN with GPT-level LLM performance. ⭐185 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ⭐196 · Updated 8 months ago
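A recurring theme across APOLLO, GaLore/Q-GaLore, COAT, and the low-bit optimizers above is shrinking optimizer-state memory by keeping Adam-style moments in a compressed (here, low-rank) form. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the actual API of any repository listed; `project_grad` and `lowrank_adam_step` are names invented for this example, and bias correction plus periodic subspace refresh are omitted for brevity.

```python
# Minimal sketch (assumptions noted above): GaLore-style low-rank gradient
# projection so Adam moments are stored at rank r instead of full size.
import torch

def project_grad(grad: torch.Tensor, rank: int):
    """Return a rank-`rank` basis P and the gradient projected into it."""
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]                      # (m, r) orthonormal basis
    return P, P.T @ grad                 # projected gradient, shape (r, n)

@torch.no_grad()
def lowrank_adam_step(param, grad, state, rank=8, lr=1e-3,
                      betas=(0.9, 0.999), eps=1e-8):
    # Real methods refresh P only every T steps; recomputing the SVD each
    # step (as here) keeps the sketch short but wastes compute.
    P, g_lr = project_grad(grad, rank)
    if "m" not in state:                 # moments live in the (r, n) subspace
        state["m"] = torch.zeros_like(g_lr)
        state["v"] = torch.zeros_like(g_lr)
    m, v = state["m"], state["v"]
    m.mul_(betas[0]).add_(g_lr, alpha=1 - betas[0])
    v.mul_(betas[1]).addcmul_(g_lr, g_lr, value=1 - betas[1])
    update = P @ (m / (v.sqrt() + eps))  # lift the update back to full shape
    param.add_(update, alpha=-lr)

# Usage on a toy 2-D weight:
w = torch.randn(256, 128, requires_grad=True)
(w ** 2).sum().backward()
state = {}
lowrank_adam_step(w, w.grad, state, rank=8)
```

The memory saving comes from the moment tensors: at rank r they cost O(rn) per layer rather than O(mn), which is what lets such optimizers approach SGD-level state memory while keeping adaptive, AdamW-style updates.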