google-research / long-range-arena
Long Range Arena for Benchmarking Efficient Transformers
☆777 · Updated Dec 16, 2023
Alternatives and similar repositories for long-range-arena
Users interested in long-range-arena are comparing it to the libraries listed below.
- Pytorch library for fast transformer implementations ☆1,761 · Updated Mar 23, 2023
- Structured state space sequence models ☆2,838 · Updated Jul 17, 2024
- Transformers for Longer Sequences ☆631 · Updated Sep 1, 2022
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,172 · Updated Feb 2, 2022
- Longformer: The Long-Document Transformer ☆2,184 · Updated Feb 8, 2023
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated Nov 2, 2020
- Fast Block Sparse Matrices for Pytorch ☆550 · Updated Jan 21, 2021
- Sequence modeling with Mega. ☆303 · Updated Jan 28, 2023
- ☆221 · Updated Jun 8, 2020
- Korean Nested Named Entity Corpus ☆20 · Updated May 13, 2023
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆9,395 · Updated Jan 26, 2026
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,800 · Updated this week
- Hopfield Networks is All You Need ☆1,897 · Updated Apr 23, 2023
- PyTorch extensions for high performance and large scale training. ☆3,397 · Updated Apr 26, 2025
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆228 · Updated Apr 18, 2022
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated Apr 19, 2022
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆207 · Updated Aug 26, 2023
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated Jul 26, 2021
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆111 · Updated Jun 10, 2021
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,123 · Updated Apr 20, 2022
- ☆314 · Updated Jan 8, 2025
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆470 · Updated Jun 22, 2022
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆167 · Updated Feb 12, 2024
- FairSeq repo with Apollo optimizer ☆114 · Updated Dec 20, 2023
- Implementation of a Transformer, but completely in Triton ☆279 · Updated Apr 5, 2022
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,607 · Updated Aug 12, 2020
- PyTorch original implementation of Cross-lingual Language Model Pretraining. ☆2,924 · Updated Feb 14, 2023
- ☆533 · Updated Feb 13, 2024
- Cascaded Text Generation with Markov Transformers ☆130 · Updated Mar 20, 2023
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆791 · Updated Apr 24, 2023
- Understanding the Difficulty of Training Transformers ☆332 · Updated May 31, 2022
- XTREME is a benchmark for the evaluation of the cross-lingual generalization ability of pre-trained multilingual models that covers 40 ty… ☆650 · Updated Jan 4, 2023
- Foundation Architecture for (M)LLMs ☆3,130 · Updated Apr 11, 2024
- DeLighT: Very Deep and Light-Weight Transformers ☆469 · Updated Oct 16, 2020
- FastFormers - highly efficient transformer models for NLU ☆709 · Updated Mar 21, 2025
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,491 · Updated Jan 14, 2026
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… ☆359 · Updated Feb 22, 2022
- ☆388 · Updated Oct 18, 2023
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆827 · Updated May 5, 2024