qiuzh20 / gated_attention
The official implementation for [NeurIPS 2025 Oral] "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free"
☆842 · Updated Dec 20, 2025
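For orientation, the idea named in the title is to apply a learned, input-dependent sigmoid gate to the attention output, which the paper ties to added non-linearity, sparser activations, and the absence of attention sinks. Below is a minimal PyTorch sketch of that output-gating idea; it is an illustration under assumed module names, shapes, and gate placement, not the repository's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAttentionSketch(nn.Module):
    """Illustrative sketch only: element-wise sigmoid gating of the
    attention output before the final projection. Layer names, the gate
    projection, and its placement are assumptions, not the repo's code."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)
        self.gate = nn.Linear(d_model, d_model, bias=False)  # assumed gate projection
        self.out = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # split heads: (B, T, D) -> (B, n_heads, T, d_head)
        q, k, v = (t.reshape(B, T, self.n_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        attn = attn.transpose(1, 2).reshape(B, T, D)
        # input-dependent sigmoid gate applied element-wise to the attention output
        gated = attn * torch.sigmoid(self.gate(x))
        return self.out(gated)


# quick shape check
x = torch.randn(2, 16, 64)
y = GatedAttentionSketch(d_model=64, n_heads=4)(x)
assert y.shape == x.shape
```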
Alternatives and similar repositories for gated_attention
Users interested in gated_attention are comparing it to the repositories listed below.
- Physics of Language Models: Part 4.2, Canon Layers at Scale where Synthetic Pretraining Resonates in Reality · ☆317 · Updated Jan 5, 2026
- "Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiw… · ☆31 · Updated May 7, 2024
- 🔥 A minimal training framework for scaling FLA models · ☆344 · Updated Nov 15, 2025
- Triton implementation of bi-directional (non-causal) linear attention · ☆65 · Updated Feb 2, 2026
- ☆129 · Updated Jun 6, 2025
- Stick-breaking attention · ☆62 · Updated Jul 1, 2025
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring · ☆269 · Updated Jul 6, 2025
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" · ☆246 · Updated Sep 12, 2025
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ☆965 · Updated Feb 5, 2026
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning · ☆137 · Updated Dec 19, 2025
- ☆63 · Updated Jun 12, 2025
- FlexAttention w/ FlashAttention3 Support · ☆27 · Updated Oct 5, 2024
- 🚀 Efficient implementations of state-of-the-art linear attention models · ☆4,379 · Updated this week
- Code release for the paper "Test-Time Training Done Right" · ☆370 · Updated Jan 5, 2026
- The evaluation framework for training-free sparse attention in LLMs · ☆119 · Updated Jan 27, 2026
- Long Context Extension and Generalization in LLMs · ☆62 · Updated Sep 21, 2024
- Official PyTorch Implementation of "Latent Denoising Makes Good Visual Tokenizers" · ☆172 · Updated Dec 17, 2025
- ☆113 · Updated Sep 13, 2025
- Official repository for "Vid2World: Crafting Video Diffusion Models to Interactive World Models" (ICLR 2026), https://arxiv.org/abs/2505.… · ☆38 · Updated Jan 27, 2026
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling · ☆40 · Updated Dec 2, 2023
- ☆26 · Updated Feb 4, 2026
- Flash-Muon: An Efficient Implementation of Muon Optimizer · ☆235 · Updated Jun 15, 2025
- ☆131 · Updated May 29, 2025
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) · ☆155 · Updated Jul 8, 2025
- ☆22 · Updated Dec 15, 2023
- [ICCV 2025] Official implementation of the paper "REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers" · ☆451 · Updated Dec 6, 2025
- Towards Scalable Pre-training of Visual Tokenizers for Generation · ☆441 · Updated Dec 16, 2025
- Normalized Transformer (nGPT) · ☆198 · Updated Nov 19, 2024
- MoBA: Mixture of Block Attention for Long-Context LLMs · ☆2,051 · Updated Apr 3, 2025
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" · ☆109 · Updated Oct 11, 2025
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think · ☆1,544 · Updated Mar 16, 2025
- MotionCrafter: Dense Geometry and Motion Reconstruction with a 4D VAE · ☆48 · Updated Feb 10, 2026
- [ICCV 2025 Workshop Outstanding Paper Award] VChain: Chain-of-Visual-Thought for Reasoning in Video Generation · ☆116 · Updated Oct 7, 2025
- Block-Recurrent Dynamics in ViTs 🦖 · ☆30 · Updated Dec 24, 2025
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · Updated Jun 6, 2024
- Understand and test language model architectures on synthetic tasks. · ☆252 · Updated Jan 12, 2026
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamic sparse calculation of the attention… · ☆1,188 · Updated Sep 30, 2025
- This repository includes the official implementation of our paper "Beyond Next-Token: Next-X Prediction for Autoregressive Visual Generat… · ☆244 · Updated Oct 12, 2025
- Compression for Foundation Models · ☆35 · Updated Jul 21, 2025