Linear Attention Sequence Parallelism (LASP)
☆88 · Jun 4, 2024 · Updated last year
Alternatives and similar repositories for LASP
Users who are interested in LASP are comparing it to the repositories listed below.
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆341 · Feb 23, 2025 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Aug 20, 2024 · Updated last year
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆252 · Jan 23, 2024 · Updated 2 years ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Apr 17, 2024 · Updated last year
- [EMNLP 2023] Official implementation of the algorithm ETSC: Exact Toeplitz-to-SSM Conversion in our EMNLP 2023 paper - Accelerating Toeplitz… ☆14 · Oct 17, 2023 · Updated 2 years ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆64 · Jul 30, 2023 · Updated 2 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Oct 9, 2022 · Updated 3 years ago
- Official Code Repository for the paper "Key-value memory in the brain" ☆31 · Feb 25, 2025 · Updated last year
- Token Omission Via Attention ☆127 · Oct 13, 2024 · Updated last year
- Sequence-level 1F1B schedule for LLMs. ☆19 · Jun 4, 2024 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆67 · Apr 24, 2024 · Updated last year
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Aug 19, 2024 · Updated last year
- 🔥 A minimal training framework for scaling FLA models ☆357 · Nov 15, 2025 · Updated 4 months ago
- A repository for research on medium-sized language models. ☆78 · May 23, 2024 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆133 · Dec 3, 2024 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · May 25, 2024 · Updated last year
- Make any iterator or iterable abortable via an AbortSignal ☆16 · Dec 6, 2024 · Updated last year
- Triton implementation of bi-directional (non-causal) linear attention ☆71 · Mar 1, 2026 · Updated 2 weeks ago
- Ring attention implementation with flash attention ☆996 · Sep 10, 2025 · Updated 6 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆34 · Aug 14, 2024 · Updated last year
- Code for the paper "Function-Space Learning Rates" ☆25 · Jun 3, 2025 · Updated 9 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Mar 12, 2024 · Updated 2 years ago
- 16-fold memory access reduction with nearly no loss ☆108 · Mar 26, 2025 · Updated 11 months ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns"☆18Mar 15, 2024Updated 2 years ago
- Understand and test language model architectures on synthetic tasks.☆262Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models☆11Dec 13, 2023Updated 2 years ago
- Large Context Attention ☆769 · Oct 13, 2025 · Updated 5 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- [ACL 24 Findings] Implementation of Resonance RoPE and the PosGen synthetic dataset. ☆24 · Mar 5, 2024 · Updated 2 years ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Apr 7, 2025 · Updated 11 months ago