qiuzh20 / gated_attention
The official implementation for [NeurIPS 2025 Oral] "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free"
☆783 · Updated 3 weeks ago
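The repo's headline mechanism, going by the paper's title, is a gate on the attention output. A minimal PyTorch sketch of one common output-gating variant follows; it assumes an elementwise sigmoid gate, computed from the same hidden states as the queries, applied to each head's SDPA output before the output projection. `GatedAttention` and `gate_proj` are hypothetical names, not the repo's actual API.

```python
# Sketch of output-gated attention (assumptions noted above, not the repo's API).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv_proj = nn.Linear(d_model, 3 * d_model, bias=False)
        self.gate_proj = nn.Linear(d_model, d_model, bias=False)
        self.out_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv_proj(x).chunk(3, dim=-1)
        # (b, t, d) -> (b, n_heads, t, d_head)
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                   for z in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        attn = attn.transpose(1, 2).reshape(b, t, d)
        # Query-conditioned sigmoid gate: a head can emit ~0 for a token
        # instead of parking attention mass on a sink position.
        gate = torch.sigmoid(self.gate_proj(x))
        return self.out_proj(gate * attn)
```

Usage: `GatedAttention(512, 8)(torch.randn(2, 16, 512))` returns a `(2, 16, 512)` tensor; the gate is the only change relative to standard multi-head attention.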
Alternatives and similar repositories for gated_attention
Users interested in gated_attention are comparing it to the repositories listed below.
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien… (see the top-1 routing sketch after this list) ☆135 · Updated last week
- [ICLR 2025 Spotlight] Official Implementation for ToST (Token Statistics Transformer) ☆129 · Updated 10 months ago
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆445 · Updated this week
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule (see the delta-rule sketch after this list) ☆421 · Updated 4 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆234 · Updated last year
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆791 · Updated 5 months ago
- PyTorch implementation of the Differential-Transformer architecture for sequence modeling, specifically tailored as a decoder-only model … ☆85 · Updated last year
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆468 · Updated last year
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆105 · Updated 7 months ago
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models" ☆476 · Updated 5 months ago
- Minimal Mamba-2 implementation in PyTorch ☆241 · Updated last year
- When it comes to optimizers, it's always better to be safe than sorry ☆399 · Updated 3 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆423 · Updated 3 months ago
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer" ☆433 · Updated 2 months ago
- Notes on Mamba and the S4 model (Mamba: Linear-Time Sequence Modeling with Selective State Spaces) ☆178 · Updated 2 years ago
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆446 · Updated 2 months ago
- ☆200 · Updated 2 years ago
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆234 · Updated 3 months ago
- ☆307 · Updated last month
- [NeurIPS'24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆134 · Updated last year
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ☆277 · Updated last year
- [CVPR'25] MergeVQ: A Unified Framework for Visual Generation and Representation with Token Merging and Quantization ☆47 · Updated 5 months ago
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆581 · Updated 11 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆231 · Updated 3 months ago
- ☆78 · Updated 11 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆153 · Updated 6 months ago
- ☆89 · Updated 8 months ago
- DeepSeek Native Sparse Attention PyTorch implementation ☆111 · Updated last month
- The most open diffusion language model for code generation — releasing pretraining, evaluation, inference, and checkpoints. ☆499 · Updated 2 months ago
- The official GitHub repo for the survey paper "A Survey on Diffusion Language Models" ☆639 · Updated 3 weeks ago
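As referenced in the Switch Transformers entry above, here is a hedged sketch of top-1 expert routing, the paper's core idea: a learned router sends each token to exactly one expert FFN and scales that expert's output by the router probability so the discrete choice stays differentiable. `SwitchMoE` is a hypothetical name; the paper's capacity factor and load-balancing auxiliary loss are omitted, and this is not the linked repo's actual code.

```python
# Sketch of Switch Transformer top-1 routing (capacity/aux-loss omitted).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)   # (n_tokens, n_experts)
        gate, idx = probs.max(dim=-1)               # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                # Scale by the router probability to keep the gate trainable.
                out[mask] = gate[mask, None] * expert(x[mask])
        return out
```

Usage: `SwitchMoE(d_model=512, d_ff=2048, n_experts=8)` applied to a `(n_tokens, 512)` batch returns the same shape, with each token's FLOPs independent of `n_experts`.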
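And for the Gated Delta Networks entry: the gated delta rule decays a matrix-valued fast-weight state by a gate α_t, then performs an error-correcting (delta-rule) write that replaces whatever value is currently stored under key k_t with v_t at strength β_t. The naive O(T)-sequential recurrence below is a toy illustration of that update, not the repo's chunked hardware-efficient implementation; it assumes unit-norm keys and α_t, β_t ∈ (0, 1).

```python
# Toy gated delta rule: decay the state, then an error-correcting write.
import torch

def gated_delta_rule(q, k, v, alpha, beta):
    """q, k: (T, d_k); v: (T, d_v); alpha, beta: (T,) in (0, 1)."""
    T, d_k = k.shape
    S = torch.zeros(d_k, v.shape[-1])  # fast-weight state mapping keys -> values
    outs = []
    for t in range(T):
        S = alpha[t] * S                                   # gated decay
        pred = k[t] @ S                                    # value stored under k_t
        S = S + beta[t] * torch.outer(k[t], v[t] - pred)   # delta-rule write
        outs.append(q[t] @ S)                              # read out with the query
    return torch.stack(outs)
```

With α_t ≡ 1 and the `pred` correction dropped, this reduces to vanilla linear attention, which is the recurrence the gated/delta variants refine.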