zhixuan-lin / forgetting-transformer
[ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate"
☆76 · Updated last week
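The paper's key mechanism is a data-dependent bias added to the softmax attention logits, built from cumulative log forget-gate values. Below is a minimal single-head sketch of that idea, written from the paper's description rather than the repository's API; the function name, unbatched shapes, and the naive O(T²) materialization of the bias are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def forgetting_attention(q, k, v, fgate_logits):
    """Single-head softmax attention with a forget gate (illustrative sketch).

    q, k, v: (T, d) tensors; fgate_logits: (T,) pre-sigmoid forget-gate scores.
    """
    T, d = q.shape
    log_f = F.logsigmoid(fgate_logits)        # log f_t, each f_t in (0, 1)
    cum = torch.cumsum(log_f, dim=0)          # prefix sums of log forget gates
    # Bias D[i, j] = sum_{l=j+1..i} log f_l = cum[i] - cum[j] decays old keys.
    D = cum.unsqueeze(1) - cum.unsqueeze(0)   # (T, T)
    scores = q @ k.T / d ** 0.5 + D
    causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(causal, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```

When every gate saturates at f_t = 1 the bias vanishes and this reduces to ordinary causal softmax attention, so the forget gate acts as a learned, data-dependent decay over past positions.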
Alternatives and similar repositories for forgetting-transformer:
Users interested in forgetting-transformer are comparing it to the repositories listed below.
- Triton implementation of bi-directional (non-causal) linear attention ☆44 · Updated last month
- ☆63 · Updated last month
- Official implementation of "Preference Alignment with Flow Matching" (NeurIPS 2024) ☆45 · Updated 4 months ago
- [ICLR 2025] Official PyTorch implementation of "Gated Delta Networks: Improving Mamba2 with Delta Rule" (see the sketch after this list) ☆143 · Updated last week
- Remasking Discrete Diffusion Models with Inference-Time Scaling ☆15 · Updated 2 weeks ago
- ☆26 · Updated 2 weeks ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆55 · Updated 10 months ago
- Here we will test various linear attention designs. ☆60 · Updated 11 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆26 · Updated 11 months ago
- Official PyTorch implementation of the Longhorn deep state space model ☆50 · Updated 3 months ago
- Implementation of a multimodal diffusion transformer in PyTorch ☆101 · Updated 9 months ago
- The official repo of continuous speculative decoding ☆25 · Updated 4 months ago
- A big_vision-inspired repo implementing a generic Auto-Encoder class capable of both representation learning and generative modeling ☆34 · Updated 9 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆95 · Updated 7 months ago
- The codebase of our paper "Improving the Training of Rectified Flows" (NeurIPS 2024) ☆103 · Updated 5 months ago
- ☆17 · Updated 2 months ago
- DPO, but faster 🚀 ☆40 · Updated 3 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆53 · Updated 7 months ago
- Implementation of Infini-Transformer in PyTorch ☆110 · Updated 2 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN from the paper "Hierarchically Gated Recurrent Neural Network for Sequence Modeling" ☆64 · Updated 11 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best of RNNs and transformers. ☆42 · Updated last week
- Combining SOAP and MUON ☆13 · Updated last month
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆44 · Updated 2 weeks ago
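For the Gated Delta Networks entry above, the gated delta rule is simple to state as a recurrence: S_t = α_t S_{t−1}(I − β_t k_t k_tᵀ) + β_t v_t k_tᵀ, with a forget gate α_t and a delta-rule write strength β_t. A naive sequential sketch follows; the function name and shapes are assumptions, and the official code uses a hardware-efficient chunked parallel form rather than this per-step loop.

```python
import torch

def gated_delta_rule(q, k, v, alpha, beta):
    """Naive recurrence for the gated delta rule (illustrative sketch).

    q, k: (T, d_k); v: (T, d_v); alpha, beta: (T,) gates in (0, 1).
    """
    T, d_k = k.shape
    S = torch.zeros(v.shape[1], d_k)       # memory matrix S_t, shape (d_v, d_k)
    outs = []
    for t in range(T):
        # S_t = alpha_t * S_{t-1} (I - beta_t k_t k_t^T) + beta_t v_t k_t^T
        pred = S @ k[t]                    # value currently stored under key k_t
        S = alpha[t] * (S - beta[t] * torch.outer(pred, k[t])) \
            + beta[t] * torch.outer(v[t], k[t])
        outs.append(S @ q[t])              # readout o_t = S_t q_t
    return torch.stack(outs)               # (T, d_v)
```

Setting α_t ≡ 1 recovers the plain delta rule of DeltaNet, while dropping the k_t k_tᵀ correction leaves a Mamba2-style gated linear-attention update, which is the combination the paper's title refers to.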