OpenNLPLab / ETSC-Exact-Toeplitz-to-SSM-Conversion
[EMNLP 2023] Official implementation of ETSC (Exact Toeplitz-to-SSM Conversion) from our EMNLP 2023 paper - Accelerating Toeplitz Neural Network with Constant-time Inference Complexity
☆14 · Updated last year
Alternatives and similar repositories for ETSC-Exact-Toeplitz-to-SSM-Conversion
Users interested in ETSC-Exact-Toeplitz-to-SSM-Conversion are comparing it to the libraries listed below.
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Sequence Modeling ☆66 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year
- ☆20 · Updated 2 years ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated 10 months ago
- Code for the NeurIPS 2023 paper "Non-autoregressive Machine Translation with Probabilistic Context-free Grammar" ☆11 · Updated last year
- [ICLR 2023] Official implementation of TNN in our ICLR 2023 paper - Toeplitz Neural Network for Sequence Modeling ☆79 · Updated last year
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 2 years ago
- ☆32 · Updated last year
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated last year
- sigma-MoE layer ☆20 · Updated last year
- ☆14 · Updated 2 years ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆61 · Updated last year
- ☆20 · Updated last year
- Official repository for Efficient Linear-Time Attention Transformers ☆18 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆40 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆37 · Updated last year
- User-friendly implementation of Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice routing ☆22 · Updated 2 months ago
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" ☆17 · Updated 8 months ago
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings ☆23 · Updated last year
- ☆48 · Updated last year
- ☆27 · Updated last year
- Official code repository for the paper "Key-value memory in the brain" ☆27 · Updated 4 months ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆31 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆66 · Updated 9 months ago
- ☆11 · Updated last year
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆17 · Updated last year
- ☆21 · Updated 2 years ago
- Combining SOAP and MUON ☆16 · Updated 5 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated last month
- Unofficial implementation of the paper "Exploring the Space of Key-Value-Query Models with Intention" ☆12 · Updated 2 years ago