TianjinYellow / SPAM-Optimizer
☆35 · Updated 6 months ago
Alternatives and similar repositories for SPAM-Optimizer
Users interested in SPAM-Optimizer are comparing it to the libraries listed below.
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆125 · Updated last month
- ☆13 · Updated 8 months ago
- ☆85 · Updated last year
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆53 · Updated 9 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆228 · Updated 5 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆185 · Updated 3 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆128 · Updated last year
- Official PyTorch Implementation of "The Curse of Depth in Large Language Models" by Wenfang Sun, Xinyuan Song, Pengxiang Li, Lu Yin, Yefen… ☆55 · Updated last month
- Triton implementation of bi-directional (non-causal) linear attention ☆54 · Updated 7 months ago
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD ☆31 · Updated 2 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆105 · Updated last week
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆91 · Updated 8 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated 11 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun ☆56 · Updated 6 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆93 · Updated 2 months ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆80 · Updated 2 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆41 · Updated last year
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆31 · Updated 5 months ago
- ☆84 · Updated 6 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆33 · Updated last year
- Unofficial Implementation of Selective Attention Transformer ☆17 · Updated 10 months ago
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆20 · Updated 2 months ago
- Work in progress. ☆72 · Updated 2 months ago
- M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models ☆39 · Updated last month
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 6 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆52 · Updated 6 months ago
- [ICLR 2025] Official PyTorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆26 · Updated last month
- Official PyTorch implementation and models for the paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆90 · Updated 2 weeks ago