[ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning
☆150 · Updated Feb 25, 2026
Alternatives and similar repositories for forgetting-transformer
Users interested in forgetting-transformer are comparing it to the libraries listed below.
- ☆130 · Updated Feb 4, 2026
- Stick-breaking attention · ☆63 · Updated Jul 1, 2025
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" · ☆18 · Updated Mar 15, 2024
- HGRN2: Gated Linear RNNs with State Expansion · ☆57 · Updated Aug 20, 2024
- ☆48 · Updated Jun 16, 2025
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule · ☆558 · Updated Mar 13, 2026
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… · ☆57 · Updated Mar 31, 2026
- Combining SOAP and MUON · ☆20 · Updated Feb 11, 2025
- ☆63 · Updated Jun 12, 2025
- ☆136 · Updated Jun 6, 2025
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … · ☆11 · Updated Mar 18, 2023
- ☆33 · Updated Dec 31, 2025
- Flash-Linear-Attention models beyond language · ☆21 · Updated Aug 28, 2025
- ☆70 · Updated Jul 8, 2025
- Official PyTorch Implementation of the Longhorn Deep State Space Model · ☆57 · Updated Dec 4, 2024
- ☆45 · Updated Nov 1, 2025
- 🔥 A minimal training framework for scaling FLA models · ☆385 · Updated Apr 22, 2026
- Official Code Repository for the paper "Key-value memory in the brain" · ☆31 · Updated Feb 25, 2025
- Flash-Muon: An Efficient Implementation of Muon Optimizer · ☆248 · Updated Jun 15, 2025
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆133 · Updated Jun 24, 2025
- ☆114 · Updated Feb 25, 2025
- Official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" · ☆41 · Updated Oct 11, 2024
- 🚀 Efficient implementations for emerging model architectures · ☆5,032 · Updated May 1, 2026
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… · ☆376 · Updated Dec 12, 2024
- Official code for the paper "Attention as a Hypernetwork" · ☆56 · Updated Feb 24, 2026
- Triton implementation of bi-directional (non-causal) linear attention · ☆75 · Updated Mar 1, 2026
- ☆20 · Updated May 30, 2024
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… · ☆156 · Updated Apr 7, 2025
- ☆69 · Updated Mar 21, 2025
- Here we will test various linear attention designs. · ☆62 · Updated Apr 25, 2024
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" · ☆112 · Updated Oct 11, 2025
- ☆19 · Updated Dec 4, 2025
- Official repo for "Error-Free Linear Attention is a Free Lunch: Exact Solution from Continuous-Time Dynamics" · ☆73 · Updated Mar 26, 2026
- Official repo of the paper LM2 · ☆47 · Updated Feb 13, 2025
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" · ☆20 · Updated Nov 15, 2025
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization · ☆112 · Updated Jun 2, 2025
- ☆36 · Updated Mar 7, 2025
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models · ☆344 · Updated Feb 23, 2025
- Parallel Associative Scan for Language Models · ☆18 · Updated Jan 8, 2024