openai / sparse_attention
Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
☆1,524 · Updated 4 years ago
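For orientation, here is a minimal PyTorch sketch of the strided sparse attention pattern the paper describes: each position attends to a local window of recent positions plus every stride-th earlier position. This is an illustration under the paper's description, not this repository's (TensorFlow-based) API, and it materializes the full score matrix to show the mask rather than the memory savings; the real kernels compute only the nonzero blocks.

```python
# Illustrative sketch of strided sparse attention, per
# "Generating Long Sequences with Sparse Transformers".
# Not the openai/sparse_attention API.
import torch
import torch.nn.functional as F

def strided_sparse_mask(seq_len: int, stride: int) -> torch.Tensor:
    i = torch.arange(seq_len).unsqueeze(1)   # query positions (column)
    j = torch.arange(seq_len).unsqueeze(0)   # key positions (row)
    causal = j <= i                          # no attending to the future
    local = (i - j) < stride                 # the `stride` most recent positions
    strided = (i - j) % stride == 0          # every stride-th earlier position
    return causal & (local | strided)

def sparse_attention(q, k, v, stride):
    # q, k, v: (batch, heads, seq_len, head_dim)
    mask = strided_sparse_mask(q.size(-2), stride).to(q.device)
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Usage: sparse_attention(torch.randn(1, 4, 128, 64),
#                         torch.randn(1, 4, 128, 64),
#                         torch.randn(1, 4, 128, 64), stride=16)
```

Because each row keeps only O(stride + seq_len / stride) positions, a kernel that skips the masked entries scales far better than dense attention at long sequence lengths.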
Related projects
Alternatives and complementary repositories for sparse_attention
- Reformer, the efficient Transformer, in Pytorch ☆2,116 · Updated last year
- Pytorch library for fast transformer implementations ☆1,642 · Updated last year
- Transformer training code for sequential tasks ☆609 · Updated 3 years ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,093 · Updated 2 years ago
- An open source framework for seq2seq models in PyTorch. ☆1,498 · Updated last year
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,178 · Updated 2 years ago
- Longformer: The Long-Document Transformer ☆2,046 · Updated last year
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆599 · Updated 3 months ago
- My take on a practical implementation of Linformer for Pytorch. ☆407 · Updated 2 years ago
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) ☆2,106 · Updated 2 years ago
- Transformer based on a variant of attention whose complexity is linear in the sequence length (a minimal sketch of this idea follows the list) ☆695 · Updated 6 months ago
- 🐥A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI ☆1,509 · Updated 3 years ago
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,108 · Updated 2 years ago
- LSTM and QRNN Language Model Toolkit for PyTorch ☆1,960 · Updated 2 years ago
- On the Variance of the Adaptive Learning Rate and Beyond ☆2,536 · Updated 3 years ago
- PyTorch implementation of the Quasi-Recurrent Neural Network - up to 16 times faster than NVIDIA's cuDNN LSTM ☆1,259 · Updated 2 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆727 · Updated 10 months ago
- DeLighT: Very Deep and Light-Weight Transformers ☆466 · Updated 4 years ago
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… ☆745 · Updated 2 years ago
- Efficient GPU kernels for block-sparse matrix multiplication and convolution ☆1,025 · Updated last year
- Mesh TensorFlow: Model Parallelism Made Easier ☆1,591 · Updated 11 months ago
- A lightweight library for PyTorch training tools and utilities ☆1,663 · Updated this week
- higher is a pytorch library allowing users to obtain higher order gradients over losses spanning training loops rather than individual tr… ☆1,589 · Updated 2 years ago
- A Pytorch Implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation" ☆547 · Updated 4 years ago
- Source code for "On the Relationship between Self-Attention and Convolutional Layers" ☆1,085 · Updated last year
- Code for the paper "Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks" ☆577 · Updated 5 years ago
- Fully featured implementation of Routing Transformer ☆284 · Updated 3 years ago
- Fast Block Sparse Matrices for Pytorch ☆545 · Updated 3 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆253 · Updated 3 years ago
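Several entries above (the fast-transformers library, the Performer implementation, and the linear-complexity transformer flagged in the list) revolve around the same trick: replace the softmax with a kernel feature map so the key-value contraction can be computed before touching the queries. Below is a minimal non-causal sketch assuming the elu(x)+1 feature map from "Transformers are RNNs" (Katharopoulos et al., 2020); it is an illustration of the idea, not the API of any repository listed here.

```python
# Illustrative sketch of kernelized linear attention; not the API of
# any of the repositories listed above.
import torch

def feature_map(x):
    # elu(x) + 1 keeps features strictly positive, as in Katharopoulos et al.
    return torch.nn.functional.elu(x) + 1

def linear_attention(q, k, v, eps=1e-6):
    # q, k: (batch, seq_len, dim); v: (batch, seq_len, dim_v)
    q, k = feature_map(q), feature_map(k)
    # Associativity lets us contract keys with values first, so the cost
    # is O(n * d * d_v) instead of the O(n^2 * d) of softmax attention.
    kv = torch.einsum("bnd,bne->bde", k, v)                  # (batch, dim, dim_v)
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)  # normalizer
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)
```

In the causal case the key-value contraction becomes a prefix sum over positions, which is what gives these models their recurrent (RNN-like) interpretation at inference time.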