[ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning
☆146 · Feb 25, 2026 · Updated last month
Alternatives and similar repositories for forgetting-transformer
Users interested in forgetting-transformer are comparing it to the libraries listed below.
- ☆126 · Feb 4, 2026 · Updated last month
- Stick-breaking attention · ☆63 · Jul 1, 2025 · Updated 8 months ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" · ☆18 · Mar 15, 2024 · Updated 2 years ago
- HGRN2: Gated Linear RNNs with State Expansion · ☆56 · Aug 20, 2024 · Updated last year
- ☆48 · Jun 16, 2025 · Updated 9 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule · ☆524 · Mar 13, 2026 · Updated 2 weeks ago
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… · ☆56 · Updated this week
- Combining SOAP and MUON · ☆19 · Feb 11, 2025 · Updated last year
- ☆63 · Jun 12, 2025 · Updated 9 months ago
- ☆133 · Jun 6, 2025 · Updated 9 months ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … · ☆11 · Mar 18, 2023 · Updated 3 years ago
- ☆31 · Dec 31, 2025 · Updated 2 months ago
- Flash-Linear-Attention models beyond language · ☆21 · Aug 28, 2025 · Updated 7 months ago
- ☆68 · Jul 8, 2025 · Updated 8 months ago
- Official PyTorch Implementation of the Longhorn Deep State Space Model · ☆57 · Dec 4, 2024 · Updated last year
- 🔥 A minimal training framework for scaling FLA models · ☆358 · Nov 15, 2025 · Updated 4 months ago
- ☆45 · Nov 1, 2025 · Updated 4 months ago
- Official Code Repository for the paper "Key-value memory in the brain" · ☆31 · Feb 25, 2025 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer · ☆247 · Jun 15, 2025 · Updated 9 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆132 · Jun 24, 2025 · Updated 9 months ago
- ☆110 · Feb 25, 2025 · Updated last year
- 🚀 Efficient implementations of state-of-the-art linear attention models · ☆4,692 · Updated this week
- Official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" · ☆41 · Oct 11, 2024 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… · ☆375 · Dec 12, 2024 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" · ☆55 · Feb 24, 2026 · Updated last month
- Triton implementation of bi-directional (non-causal) linear attention · ☆73 · Mar 1, 2026 · Updated 3 weeks ago
- ☆68 · Mar 21, 2025 · Updated last year
- ☆20 · May 30, 2024 · Updated last year
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… · ☆157 · Apr 7, 2025 · Updated 11 months ago
- Here we will test various linear attention designs. · ☆62 · Apr 25, 2024 · Updated last year
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" · ☆110 · Oct 11, 2025 · Updated 5 months ago
- ☆19 · Dec 4, 2025 · Updated 3 months ago
- Official repo for "Error-Free Linear Attention is a Free Lunch: Exact Solution from Continuous-Time Dynamics" · ☆72 · Jan 13, 2026 · Updated 2 months ago
- Official repo of the paper LM2 · ☆47 · Feb 13, 2025 · Updated last year
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" · ☆20 · Nov 15, 2025 · Updated 4 months ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization · ☆110 · Jun 2, 2025 · Updated 9 months ago
- ☆36 · Mar 7, 2025 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models · ☆341 · Feb 23, 2025 · Updated last year
- Parallel Associative Scan for Language Models · ☆18 · Jan 8, 2024 · Updated 2 years ago