sustcsonglin / linear-attention-and-beyond-slides
☆76 · Updated 4 months ago
Alternatives and similar repositories for linear-attention-and-beyond-slides
Users interested in linear-attention-and-beyond-slides are comparing it to the libraries listed below.
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆89 · Updated 2 weeks ago
- ☆222 · Updated last month
- 🔥 A minimal training framework for scaling FLA models ☆188 · Updated last month
- Stick-breaking attention ☆58 · Updated last week
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆115 · Updated last week
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆138 · Updated 3 weeks ago
- Efficient Triton implementation of Native Sparse Attention ☆175 · Updated last month
- ☆116 · Updated last month
- ☆90 · Updated 2 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆185 · Updated 3 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆82 · Updated 3 weeks ago
- ☆51 · Updated this week
- ☆109 · Updated last month
- ☆71 · Updated this week
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆83 · Updated 6 months ago
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆57 · Updated 4 months ago
- Fast and memory-efficient exact attention ☆68 · Updated 4 months ago
- Some preliminary explorations of Mamba's context scaling ☆215 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆91 · Updated last month
- "Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiw… ☆29 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆42 · Updated last week
- The official implementation of Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆44 · Updated 2 months ago
- Repository for the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆118 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆101 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆96 · Updated this week
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆86 · Updated 3 weeks ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆86 · Updated 9 months ago
- The simplest implementation of recent sparse attention patterns for efficient LLM inference ☆78 · Updated last month
- ☆105 · Updated last year
- Code for the paper "Patch-Level Training for Large Language Models" ☆85 · Updated 7 months ago