sustcsonglin / linear-attention-and-beyond-slides
☆69 · Updated 2 months ago
Alternatives and similar repositories for linear-attention-and-beyond-slides:
Users interested in linear-attention-and-beyond-slides are comparing it to the libraries listed below.
- Stick-breaking attention ☆52 · Updated last month
- 🔥 A minimal training framework for scaling FLA models ☆117 · Updated this week
- ☆39 · Updated last month
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆97 · Updated 3 weeks ago
- ☆77 · Updated 2 weeks ago
- Efficient Triton implementation of Native Sparse Attention. ☆142 · Updated 3 weeks ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆84 · Updated 7 months ago
- ☆80 · Updated 3 weeks ago
- Fast and memory-efficient exact attention ☆68 · Updated 2 months ago
- ☆91 · Updated 7 months ago
- ☆20 · Updated last month
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆91 · Updated this week
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆40 · Updated this week
- ☆31 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆116 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆165 · Updated 4 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆46 · Updated 3 months ago
- ☆37 · Updated last year
- "Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiw… ☆29 · Updated 11 months ago
- Here we will test various linear attention designs. ☆60 · Updated last year
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆58 · Updated last month
- ☆103 · Updated last year
- ☆126 · Updated 2 months ago
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆142 · Updated last month
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆110 · Updated 8 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆45 · Updated 2 weeks ago
- Some preliminary explorations of Mamba's context scaling. ☆213 · Updated last year
- Code for Paper: Learning Adaptive Parallel Reasoning with Language Models ☆72 · Updated last week
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆60 · Updated 3 months ago
- ☆78 · Updated 8 months ago