openai / sparse_attention
Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
☆1,524 · Updated 4 years ago
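The paper's factorized attention patterns (strided and fixed) cut the cost of full self-attention by letting each position attend only to a structured subset of earlier positions. Below is a minimal NumPy sketch of the strided pattern; the function names and the dense mask construction are illustrative assumptions, not the repository's actual API, which implements these patterns inside fused blocksparse GPU kernels.

```python
import numpy as np

def strided_mask(n, stride):
    # Union of the paper's two strided heads, merged into one causal
    # mask for illustration: each query i may attend to the previous
    # `stride` positions and to every stride-th earlier position.
    i = np.arange(n)[:, None]  # query indices (column vector)
    j = np.arange(n)[None, :]  # key indices (row vector)
    causal = j <= i
    local = (i - j) < stride           # recent positions
    summary = ((i - j) % stride) == 0  # strided "summary" positions
    return causal & (local | summary)

def sparse_attention(q, k, v, mask):
    # Ordinary scaled dot-product attention with disallowed pairs set
    # to -inf before the softmax. Dense here for clarity; a real
    # implementation never materializes the masked-out entries.
    scores = (q @ k.T) / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

n, d, stride = 16, 8, 4
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = sparse_attention(q, k, v, strided_mask(n, stride))
print(out.shape)  # (16, 8)
```

Each row of the mask keeps at most `stride + n/stride` entries, which gives the O(n·√n) scaling the paper targets when the stride is chosen near √n.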
Related projects
Alternatives and complementary repositories for sparse_attention
- Reformer, the efficient Transformer, in PyTorch ☆2,121 · Updated last year
- PyTorch library for fast transformer implementations ☆1,643 · Updated last year
- An implementation of Performer, a linear attention-based transformer, in PyTorch ☆1,098 · Updated 2 years ago
- Transformer training code for sequential tasks ☆609 · Updated 3 years ago
- 🐥 A PyTorch implementation of OpenAI's finetuned transformer language model, with a script to import the weights pre-trained by OpenAI ☆1,511 · Updated 3 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,178 · Updated 2 years ago
- Mesh TensorFlow: Model Parallelism Made Easier ☆1,591 · Updated last year
- Longformer: The Long-Document Transformer ☆2,047 · Updated last year
- An open-source framework for seq2seq models in PyTorch ☆1,498 · Updated last year
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,108 · Updated 2 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆729 · Updated 11 months ago
- LSTM and QRNN Language Model Toolkit for PyTorch ☆1,960 · Updated 2 years ago
- Original PyTorch implementation of "Cross-lingual Language Model Pretraining" ☆2,892 · Updated last year
- My take on a practical implementation of Linformer for PyTorch ☆407 · Updated 2 years ago
- Efficient GPU kernels for block-sparse matrix multiplication and convolution ☆1,027 · Updated last year
- Make huge neural nets fit in memory ☆2,730 · Updated 4 years ago
- On the Variance of the Adaptive Learning Rate and Beyond ☆2,535 · Updated 3 years ago
- Transformer based on a variant of attention whose complexity is linear with respect to sequence length ☆698 · Updated 6 months ago
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… ☆745 · Updated 2 years ago
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training" ☆2,160 · Updated 5 years ago
- PyTorch implementation of the Quasi-Recurrent Neural Network - up to 16 times faster than NVIDIA's cuDNN LSTM ☆1,259 · Updated 2 years ago
- A PyTorch implementation of "Attention Is All You Need" and "Weighted Transformer Network for Machine Translation" ☆547 · Updated 4 years ago
- Multi-Task Deep Neural Networks for Natural Language Understanding ☆2,239 · Updated 8 months ago
- Minimal Seq2Seq model with Attention for Neural Machine Translation in PyTorch ☆690 · Updated 3 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆598 · Updated 4 months ago
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) ☆2,105 · Updated 2 years ago
- DeLighT: Very Deep and Light-Weight Transformers ☆467 · Updated 4 years ago
- Lingvo ☆2,816 · Updated this week
- Source code for "On the Relationship between Self-Attention and Convolutional Layers" ☆1,085 · Updated last year