qiuzh20 / gated_attention
The official implementation for [NeurIPS 2025 Oral] "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free"
☆101 · Updated last month
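The repo accompanies the paper named above. As a rough orientation only (not the repo's actual code), the sketch below shows one plausible reading of the title: an elementwise sigmoid gate, computed from the layer input, applied to each head's scaled-dot-product-attention output before the output projection. The class name `GatedAttention`, the `gate_proj` parameterization, and all shapes are illustrative assumptions; consult the repository for the real implementation.

```python
# Minimal sketch of head-wise output gating for attention (assumed reading of
# the paper title, NOT the official implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.k_proj = nn.Linear(d_model, d_model, bias=False)
        self.v_proj = nn.Linear(d_model, d_model, bias=False)
        # Gate projection: one sigmoid gate value per attention-output channel
        # (assumption for illustration).
        self.gate_proj = nn.Linear(d_model, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        # Standard causal scaled dot-product attention.
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)

        # Elementwise sigmoid gate conditioned on the layer input, applied to
        # the per-head attention output before the output projection.
        gate = torch.sigmoid(self.gate_proj(x))
        gate = gate.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        attn = attn * gate

        attn = attn.transpose(1, 2).reshape(b, t, -1)
        return self.o_proj(attn)


if __name__ == "__main__":
    layer = GatedAttention(d_model=256, n_heads=8)
    out = layer(torch.randn(2, 16, 256))
    print(out.shape)  # torch.Size([2, 16, 256])
```

Gating the attention output this way inserts an extra non-linearity after attention and lets the model suppress individual heads per token, which is plausibly related to the sparsity and attention-sink claims in the title; how the official code actually conditions and applies the gate should be checked against the repository.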
Alternatives and similar repositories for gated_attention
Users interested in gated_attention are comparing it to the repositories listed below.
- ☆106 · Updated 2 months ago
- ☆61 · Updated 4 months ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆105 · Updated last month
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆67 · Updated 4 months ago
- ☆120 · Updated 5 months ago
- ☆45 · Updated last month
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆135 · Updated 4 months ago
- ☆26 · Updated 3 months ago
- "Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiw… ☆30 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆82 · Updated 4 months ago
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- The official GitHub repo for "Training Optimal Large Diffusion Language Models", the first-ever large-scale diffusion language models sca… ☆41 · Updated last week
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆69 · Updated 4 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆52 · Updated last year
- ☆101 · Updated 2 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆89 · Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆122 · Updated 7 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆40 · Updated last month
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated last year
- A Sober Look at Language Model Reasoning ☆87 · Updated last month
- [ICLR '24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆97 · Updated 4 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆56 · Updated 8 months ago
- ☆23 · Updated 11 months ago
- Stick-breaking attention ☆61 · Updated 4 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆98 · Updated 10 months ago
- ☆19 · Updated 10 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆87 · Updated 9 months ago
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆47 · Updated 4 months ago
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆88 · Updated 11 months ago