openai / sparse_attention
Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
☆1,577 · Updated 4 years ago
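For orientation, the paper's factorized attention boils down to a sparse mask over the causal attention matrix. Below is a minimal NumPy sketch of the strided pattern (each position attends to the previous `stride` positions plus every `stride`-th earlier position); the function name and parameters are illustrative, not code from the repo, which ships GPU kernel implementations.

```python
import numpy as np

def strided_sparse_mask(n: int, stride: int) -> np.ndarray:
    """Boolean mask: mask[i, j] is True if query i may attend to key j.

    Combines the two factorized patterns from "Generating Long Sequences
    with Sparse Transformers": a local band (the previous `stride`
    positions) and a strided column (every `stride`-th earlier position),
    restricted to be causal (no attending to the future).
    """
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    causal = j <= i
    local = (i - j) < stride           # recent positions
    strided = (i - j) % stride == 0    # every stride-th earlier position
    return causal & (local | strided)

mask = strided_sparse_mask(n=16, stride=4)
# With stride ~ sqrt(n), each row keeps O(sqrt(n)) entries instead of n,
# which is where the memory/compute savings come from.
print(mask.sum(axis=1))
```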
Alternatives and similar repositories for sparse_attention
Users interested in sparse_attention are comparing it to the libraries listed below.
- Reformer, the efficient Transformer, in Pytorch ☆2,170 · Updated 2 years ago
- Pytorch library for fast transformer implementations ☆1,718 · Updated 2 years ago
- ☆3,658 · Updated 2 years ago
- Transformer training code for sequential tasks ☆612 · Updated 3 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆757 · Updated last year
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,132 · Updated 3 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,182 · Updated 3 years ago
- An open source framework for seq2seq models in PyTorch. ☆1,509 · Updated last month
- LSTM and QRNN Language Model Toolkit for PyTorch ☆1,974 · Updated 3 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆609 · Updated 11 months ago
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) ☆2,098 · Updated 3 years ago
- Efficient GPU kernels for block-sparse matrix multiplication and convolution ☆1,041 · Updated 2 years ago
- Longformer: The Long-Document Transformer ☆2,134 · Updated 2 years ago
- 🐥A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI ☆1,509 · Updated 3 years ago
- PyTorch original implementation of Cross-lingual Language Model Pretraining. ☆2,912 · Updated 2 years ago
- My take on a practical implementation of Linformer for Pytorch. ☆414 · Updated 2 years ago
- A Pytorch Implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation" ☆557 · Updated 4 years ago
- A PyTorch implementation of the NIPS 2017 paper "Dynamic Routing Between Capsules". ☆1,743 · Updated 6 years ago
- PyTorch implementation of the Quasi-Recurrent Neural Network - up to 16 times faster than NVIDIA's cuDNN LSTM ☆1,261 · Updated 3 years ago
- Lingvo ☆2,843 · Updated this week
- Transformer based on a variant of attention with linear complexity with respect to sequence length (see the sketch after this list) ☆775 · Updated last year
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,113 · Updated 3 years ago
- Unsupervised Data Augmentation (UDA) ☆2,192 · Updated 3 years ago
- Simple XLNet implementation with Pytorch Wrapper ☆581 · Updated 5 years ago
- Fully featured implementation of Routing Transformer ☆295 · Updated 3 years ago
- On the Variance of the Adaptive Learning Rate and Beyond ☆2,549 · Updated 3 years ago
- MASS: Masked Sequence to Sequence Pre-training for Language Generation ☆1,116 · Updated 2 years ago
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… ☆745 · Updated 3 years ago
- Source code for "On the Relationship between Self-Attention and Convolutional Layers" ☆1,103 · Updated 2 years ago
- DeLighT: Very Deep and Light-Weight Transformers ☆469 · Updated 4 years ago
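Several entries above (Performer, Linformer, and the linear-complexity attention transformer flagged in the list) avoid the n×n softmax matrix by using a kernel feature map. A minimal non-causal NumPy sketch, assuming the elu(x)+1 feature map from the linear attention literature; `linear_attention` is an illustrative name, not an API from any repo listed.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """O(n) attention: softmax(Q K^T) V is approximated by
    phi(Q) (phi(K)^T V) with a positive feature map phi, so a
    d x d summary phi(K)^T V is computed once instead of the
    n x n attention matrix."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                    # (d, d_v): key/value summary
    z = Qp @ Kp.sum(axis=0)          # (n,): per-query normalizer
    return (Qp @ kv) / (z[:, None] + eps)

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (8, 4)
```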