Long Range Arena for Benchmarking Efficient Transformers
☆788 · Updated Dec 16, 2023
Alternatives and similar repositories for long-range-arena
Users interested in long-range-arena are comparing it to the libraries listed below.
- Pytorch library for fast transformer implementations ☆1,767 · Updated Mar 23, 2023
- Structured state space sequence models ☆2,875 · Updated Jul 17, 2024
- Transformers for Longer Sequences ☆633 · Updated Sep 1, 2022
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,177 · Updated Feb 2, 2022
- Longformer: The Long-Document Transformer ☆2,190 · Updated Feb 8, 2023
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated Nov 2, 2020
- Sequence modeling with Mega ☆303 · Updated Jan 28, 2023
- Fast Block Sparse Matrices for Pytorch ☆550 · Updated Jan 21, 2021
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021) ☆228 · Updated Apr 18, 2022
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆61 · Updated Apr 19, 2022
- ☆220 · Updated Jun 8, 2020
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆51 · Updated Jun 11, 2025
- FairSeq repo with Apollo optimizer ☆113 · Updated Dec 20, 2023
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆166 · Updated Feb 12, 2024
- Unofficial implementation of the Linear Recurrent Unit (LRU, Orvieto et al. 2023) ☆62 · Updated Sep 3, 2025
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆9,456 · Updated Apr 9, 2026 (see the first sketch after this list)
- ☆317 · Updated Jan 8, 2025
- PyTorch extensions for high performance and large scale training ☆3,405 · Updated Apr 26, 2025
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers ☆113 · Updated Jun 10, 2021
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,816 · Updated Mar 27, 2026 (see the second sketch after this list)
- Korean Nested Named Entity Corpus ☆20 · Updated May 13, 2023
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,128 · Updated Apr 20, 2022
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆470 · Updated Jun 22, 2022
- Hopfield Networks is All You Need ☆1,916 · Updated Apr 23, 2023
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis ☆147 · Updated Jul 26, 2021
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆207 · Updated Aug 26, 2023
- Implementation of https://srush.github.io/annotated-s4 ☆515 · Updated Jun 20, 2025
- ☆390 · Updated Oct 18, 2023
- PyTorch original implementation of Cross-lingual Language Model Pretraining ☆2,930 · Updated Feb 14, 2023
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,611 · Updated Aug 12, 2020
- Cascaded Text Generation with Markov Transformers ☆130 · Updated Mar 20, 2023
- ☆3,697 · Updated Sep 21, 2022
- Pytorch implementation of Compressive Transformers, from Deepmind ☆164 · Updated Oct 4, 2021
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,225 · Updated Jul 19, 2024
- FastFormers - highly efficient transformer models for NLU ☆709 · Updated Mar 21, 2025
- Reformer, the efficient Transformer, in Pytorch ☆2,189 · Updated Jun 21, 2023 (see the third sketch after this list)
- DeLighT: Very Deep and Light-Weight Transformers ☆469 · Updated Oct 16, 2020
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆825 · Updated May 5, 2024
- Understanding the Difficulty of Training Transformers ☆332 · Updated May 31, 2022
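
A few of the libraries above can be illustrated in a handful of lines. The tensor-operations entry appears to be the einops package; below is a minimal sketch of its named-axis API (the tensor shapes and axis names are arbitrary choices for this example, not anything prescribed by the library).

```python
import torch
from einops import rearrange, reduce

x = torch.randn(8, 1024, 64)  # (batch, sequence, channels) -- placeholder shape

# Split the sequence axis into 64 groups of 16 and fold each group into channels.
patches = rearrange(x, 'b (n p) c -> b n (p c)', p=16)  # -> (8, 64, 1024)

# Mean-pool over the sequence axis.
pooled = reduce(x, 'b n c -> b c', 'mean')  # -> (8, 64)
```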
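
The "concise but complete full-attention transformer" entry appears to be lucidrains' x-transformers; this sketch follows the basic usage pattern from that project's README, with arbitrary placeholder hyperparameters.

```python
import torch
from x_transformers import TransformerWrapper, Decoder

# A small decoder-only language model; all sizes here are illustrative.
model = TransformerWrapper(
    num_tokens=20000,      # vocabulary size (placeholder)
    max_seq_len=1024,
    attn_layers=Decoder(dim=512, depth=6, heads=8),
)

tokens = torch.randint(0, 20000, (1, 1024))
logits = model(tokens)  # -> (1, 1024, 20000)
```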
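
Similarly, the Reformer entry appears to be lucidrains' reformer-pytorch; a minimal sketch assuming its ReformerLM interface (hyperparameters again illustrative; LSH attention expects the sequence length to be a multiple of twice the bucket size, which the defaults satisfy here).

```python
import torch
from reformer_pytorch import ReformerLM

# Reformer language model with LSH attention; sizes are placeholders.
model = ReformerLM(
    num_tokens=20000,
    dim=512,
    depth=6,
    max_seq_len=1024,
    heads=8,
    causal=True,
)

tokens = torch.randint(0, 20000, (1, 1024))
logits = model(tokens)  # -> (1, 1024, 20000)
```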