zhuhanqing / APOLLO
APOLLO: SGD-like Memory, AdamW-level Performance
☆81 · Updated 2 weeks ago
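APOLLO's tagline refers to approximating AdamW's per-channel learning-rate adaptation while keeping optimizer state in a low-rank space built from a random projection, so memory usage approaches plain SGD. Below is a minimal conceptual sketch of that idea, not the repo's actual API: the class name `LowRankScaledSGD`, its constructor arguments, and the norm-ratio scaling rule are illustrative assumptions for a single 2-D weight.

```python
# Conceptual sketch only (hypothetical names, not zhuhanqing/APOLLO's API):
# keep Adam moments in a low-rank space from a fixed random projection,
# derive a per-row scaling factor, and apply an SGD-like scaled update.
import torch

class LowRankScaledSGD:
    def __init__(self, param, rank=8, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
        m, n = param.shape
        self.param, self.lr, self.betas, self.eps = param, lr, betas, eps
        # Fixed random projection n -> rank. State is n*rank + 2*m*rank
        # floats instead of AdamW's 2*m*n moment buffers.
        self.proj = torch.randn(n, rank) / rank ** 0.5
        self.exp_avg = torch.zeros(m, rank)
        self.exp_avg_sq = torch.zeros(m, rank)
        self.step_count = 0

    @torch.no_grad()
    def step(self):
        g = self.param.grad                 # full-rank gradient, shape (m, n)
        r = g @ self.proj                   # projected gradient, shape (m, rank)
        b1, b2 = self.betas
        self.step_count += 1
        self.exp_avg.mul_(b1).add_(r, alpha=1 - b1)
        self.exp_avg_sq.mul_(b2).addcmul_(r, r, value=1 - b2)
        m_hat = self.exp_avg / (1 - b1 ** self.step_count)
        v_hat = self.exp_avg_sq / (1 - b2 ** self.step_count)
        adam_r = m_hat / (v_hat.sqrt() + self.eps)  # Adam step in low-rank space
        # Per-row (channel) scale: how strongly Adam would rescale this row,
        # estimated entirely from the low-rank states.
        scale = adam_r.norm(dim=1) / r.norm(dim=1).clamp_min(self.eps)
        self.param.add_(g * scale.unsqueeze(1), alpha=-self.lr)
```

As a usage sketch, `w = torch.nn.Parameter(torch.randn(256, 512))` followed by a backward pass and `LowRankScaledSGD(w).step()` applies one scaled update; the real repository handles multiple parameter groups, projection refresh, and tensor-wise variants.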
Alternatives and similar repositories for APOLLO:
Users interested in APOLLO are comparing it to the libraries listed below.
- When it comes to optimizers, it's always better to be safe than sorry · ☆157 · Updated this week
- Activation-aware Singular Value Decomposition for Compressing Large Language Models · ☆55 · Updated 2 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models · ☆188 · Updated 2 weeks ago
- ☆107 · Updated 3 months ago
- ☆74 · Updated last year
- Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule · ☆73 · Updated 2 weeks ago
- Here we will test various linear attention designs. · ☆58 · Updated 8 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" · ☆123 · Updated 8 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore · ☆23 · Updated 4 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models) · ☆92 · Updated 4 months ago
- Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. · ☆52 · Updated 3 weeks ago
- Normalized Transformer (nGPT) · ☆145 · Updated last month
- Implementation of Infini-Transformer in Pytorch · ☆107 · Updated 2 weeks ago
- Low-bit optimizers for PyTorch · ☆125 · Updated last year
- Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks · ☆33 · Updated 6 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention mechanism · ☆99 · Updated 7 months ago
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs · ☆76 · Updated last month
- Pytorch implementation of the PEER block from the paper Mixture of A Million Experts, by Xu Owen He at DeepMind · ☆115 · Updated 4 months ago
- Implementation of the proposed MaskBit from Bytedance AI · ☆71 · Updated 2 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) · ☆148 · Updated last month
- An algorithm for static activation quantization of LLMs · ☆107 · Updated this week
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆66 · Updated 2 months ago
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) · ☆108 · Updated this week
- Code for studying the super weight in LLM · ☆68 · Updated last month
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" · ☆110 · Updated last month
- Explorations into the recently proposed Taylor Series Linear Attention · ☆91 · Updated 4 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" · ☆27 · Updated 9 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models · ☆28 · Updated 7 months ago
- ☆69 · Updated 4 months ago
- ☆38 · Updated 11 months ago