[EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer
☆64 · Updated Jul 30, 2023
Alternatives and similar repositories for Transnormer
Users interested in Transnormer are comparing it to the repositories listed below.
- [ICLR 2023] Official implementation of Transnormer in our ICLR 2023 paper - Toeplitz Neural Network for Sequence Modeling · ☆81 · Updated Apr 24, 2024
- [EMNLP 2023] Official implementation of the algorithm ETSC: Exact Toeplitz-to-SSM Conversion in our EMNLP 2023 paper - Accelerating Toeplitz… · ☆14 · Updated Oct 17, 2023
- Official implementation of TransNormerLLM: A Faster and Better LLM · ☆252 · Updated Jan 23, 2024
- [CVPR 2023] Official implementation of our paper - Learning Audio-Visual Source Localization via False Negative Aware Contrastive Learnin… · ☆27 · Updated Apr 10, 2023
- ☆20 · Updated Apr 17, 2023
- ☆29 · Updated May 4, 2024
- ☆15 · Updated Mar 22, 2023
- [TPAMI 2023] Official implementation of "Vicinity Vision Transformer" · ☆22 · Updated Jun 15, 2023
- PyTorch implementation of PaLM: A Hybrid Parser and Language Model · ☆10 · Updated Jan 7, 2020
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… · ☆67 · Updated Apr 24, 2024
- Linear Attention Sequence Parallelism (LASP) · ☆88 · Updated Jun 4, 2024
- Implementation of Cascaded Head-colliding Attention (ACL 2021) · ☆11 · Updated Sep 16, 2021
- HGRN2: Gated Linear RNNs with State Expansion · ☆56 · Updated Aug 20, 2024
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights · ☆19 · Updated Oct 9, 2022
- ☆13 · Updated Feb 7, 2023
- ☆29 · Updated Jul 9, 2024
- ☆62 · Updated Jun 17, 2024
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models · ☆341 · Updated Feb 23, 2025
- Implementation and experiments for Partially Supervised NER via Expected Entity Ratio (TACL 2022) · ☆14 · Updated Nov 7, 2022
- ☆52 · Updated Jan 19, 2023
- ☆14 · Updated Nov 20, 2022
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models · ☆35 · Updated Jun 12, 2024
- ☆36 · Updated Feb 26, 2024
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention · ☆198 · Updated Dec 2, 2022
- Official PyTorch implementation of the Longhorn Deep State Space Model · ☆56 · Updated Dec 4, 2024
- Triton implementation of bi-directional (non-causal) linear attention · ☆70 · Updated Feb 22, 2026
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … · ☆117 · Updated Mar 16, 2024
- Official code for the paper "Attention as a Hypernetwork" · ☆48 · Updated Jun 22, 2024
- 🔥 A minimal training framework for scaling FLA models · ☆350 · Updated Nov 15, 2025
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings · ☆25 · Updated Oct 22, 2023
- Language-agnostic BERT Sentence Embedding (LaBSE) PyTorch model · ☆21 · Updated Sep 2, 2020
- ☆33 · Updated Apr 12, 2021
- Long Context Extension and Generalization in LLMs · ☆63 · Updated Sep 21, 2024
- [CVPR 2023] Official implementation of the paper: Fine-grained Audible Video Description · ☆76 · Updated Dec 4, 2023
- ☆20 · Updated Dec 16, 2020
- Statistical discontinuous constituent parsing · ☆11 · Updated Feb 15, 2018
- ☆10 · Updated Oct 2, 2024
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" · ☆102 · Updated Sep 30, 2024
- Official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" · ☆34 · Updated Jun 11, 2025