google-research / long-range-arena
Long Range Arena for Benchmarking Efficient Transformers
☆764 · Updated last year
Alternatives and similar repositories for long-range-arena
Users interested in long-range-arena are comparing it to the libraries listed below.
- Transformer based on a variant of attention with linear complexity with respect to sequence length (see the linear attention sketch below) ☆797 · Updated last year
- Pytorch library for fast transformer implementations ☆1,730 · Updated 2 years ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,148 · Updated 3 years ago
- ☆380 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022; see the ALiBi sketch below) ☆542 · Updated last year
- Sequence modeling with Mega. ☆300 · Updated 2 years ago
- Implementation of https://srush.github.io/annotated-s4 ☆502 · Updated 2 months ago
- Transformers for Longer Sequences ☆617 · Updated 3 years ago
- Fast Block Sparse Matrices for Pytorch ☆549 · Updated 4 years ago
- Implementation of Linformer for Pytorch ☆298 · Updated last year
- An implementation of local windowed attention for language modeling ☆475 · Updated last month
- Fully featured implementation of Routing Transformer ☆297 · Updated 3 years ago
- maximal update parametrization (µP) ☆1,594 · Updated last year
- ☆361 · Updated last year
- Library for 8-bit optimizers and quantization routines. ☆778 · Updated 3 years ago
- Reformer, the efficient Transformer, in Pytorch ☆2,182 · Updated 2 years ago
- Implementation of https://arxiv.org/abs/1904.00962 ☆376 · Updated 4 years ago
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆808 · Updated 2 years ago
- The entmax mapping and its loss, a family of sparse softmax alternatives. ☆445 · Updated last year
- Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch ☆871 · Updated last year
- My take on a practical implementation of Linformer for Pytorch. ☆419 · Updated 3 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆267 · Updated 4 years ago
- Understanding the Difficulty of Training Transformers ☆330 · Updated 3 years ago
- Task-based datasets, preprocessing, and evaluation for sequence models. ☆586 · Updated 2 weeks ago
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" ☆379 · Updated 2 years ago
- Prune a model while finetuning or training. ☆403 · Updated 3 years ago
- ☆255 · Updated 3 months ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch (see the rotary embedding sketch below) ☆749 · Updated last month
- VQVAEs, GumbelSoftmaxes and friends ☆585 · Updated 3 years ago
- Flexible components pairing 🤗 Transformers with Pytorch Lightning ☆612 · Updated 2 years ago
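
For readers comparing these libraries, a minimal sketch of the kernelized linear attention idea behind the linear-complexity entry above, assuming the elu+1 feature map popularized by Katharopoulos et al.; the function name and tensor shapes are illustrative assumptions, not any library's actual API:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq, dim). Replacing softmax with a positive
    # feature map lets the key-value summary be computed once, giving O(n)
    # cost in sequence length instead of materializing the O(n^2) attention matrix.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = torch.einsum('bhnd,bhne->bhde', k, v)                  # sum over positions
    normalizer = torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2)) + eps
    return torch.einsum('bhnd,bhde->bhne', q, kv) / normalizer[..., None]
```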
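The ALiBi entry replaces learned position embeddings with a fixed distance penalty on attention logits. A rough sketch of that bias follows; the helper name is a hypothetical illustration, and head-specific slopes plus causal masking are assumed to be handled elsewhere:

```python
import torch

def alibi_bias(seq_len: int, slope: float) -> torch.Tensor:
    # Returns a (seq_len, seq_len) matrix of -slope * (i - j) for j <= i,
    # added to the attention logits before softmax; keys farther from the
    # query are penalized linearly, which is the core of ALiBi.
    pos = torch.arange(seq_len)
    distance = (pos[:, None] - pos[None, :]).clamp(min=0)
    return -slope * distance.float()
```

In the paper the slopes form a geometric sequence across heads (e.g. 1/2, 1/4, ..., 1/256 for 8 heads), so each head penalizes distance at a different rate.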
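Similarly, the rotary embedding entry encodes position by rotating pairs of query/key channels; a minimal sketch of the idea, assuming the interleaved-pair convention and an illustrative function name rather than the repository's interface:

```python
import torch

def apply_rotary(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (..., seq, dim) with even dim. Each channel pair is rotated by a
    # position-dependent angle, so dot products between rotated queries and
    # keys depend only on the relative offset between positions.
    seq, dim = x.shape[-2], x.shape[-1]
    freqs = base ** (-torch.arange(0, dim, 2, dtype=x.dtype) / dim)
    angles = torch.arange(seq, dtype=x.dtype)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1).flatten(-2)
```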