Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
☆1,611 · Updated Aug 12, 2020
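The repository demonstrates the attention patterns introduced in the Sparse Transformers paper, where each query attends only to a structured subset of earlier positions instead of the full causal context. A minimal sketch of the "strided" pattern is below; the function name and the choice of `stride` are illustrative, not the repository's API.

```python
def strided_mask(n, stride):
    """Causal mask for strided sparse attention: query position i attends
    to the previous `stride` positions (local window) and to every
    stride-th position before it (strided connections)."""
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):  # causal: only j <= i
            local = (i - j) < stride          # within the local window
            strided = (i - j) % stride == 0   # on a stride-aligned offset
            mask[i][j] = local or strided
    return mask

m = strided_mask(8, stride=4)
```

Each row of `m` marks the keys a query may attend to; the paper pairs this pattern with a "fixed" variant and alternates them across heads, keeping the per-query cost roughly O(n·√n) rather than O(n²).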
Alternatives and similar repositories for sparse_attention
Users interested in sparse_attention are comparing it to the libraries listed below.
- Efficient GPU kernels for block-sparse matrix multiplication and convolution ☆1,064 · Updated Jun 8, 2023
- Transformer training code for sequential tasks ☆609 · Updated Sep 14, 2021
- Code for the paper "Distribution Augmentation for Generative Modeling", ICML 2020 ☆132 · Updated Apr 24, 2023
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,176 · Updated May 28, 2023
- ☆3,695 · Updated Sep 21, 2022
- PyTorch original implementation of Cross-lingual Language Model Pretraining ☆2,927 · Updated Feb 14, 2023
- Code for the Eager Translation Model from the paper "You May Not Need Attention" ☆294 · Updated Dec 17, 2018
- Longformer: The Long-Document Transformer ☆2,189 · Updated Feb 8, 2023
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training" ☆2,283 · Updated Jan 25, 2019
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,494 · Updated Jan 14, 2026
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python ☆32,190 · Updated Sep 30, 2025
- PyTorch library for fast transformer implementations ☆1,763 · Updated Mar 23, 2023
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research ☆17,082 · Updated Jun 2, 2023
- A modular framework for vision & language multimodal research from Facebook AI Research (FAIR) ☆5,623 · Updated this week
- Lingvo ☆2,857 · Updated this week
- [ICLR'19] Trellis Networks for Sequence Modeling ☆473 · Updated Aug 20, 2019
- 🐥 A PyTorch implementation of OpenAI's fine-tuned transformer language model, with a script to import the weights pre-trained by OpenAI ☆1,522 · Updated Aug 9, 2021
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) ☆2,111 · Updated Jan 4, 2022
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch ☆8,936 · Updated this week
- An optimizer that trains as fast as Adam and generalizes as well as SGD ☆2,909 · Updated Jul 23, 2023
- Multi-Task Deep Neural Networks for Natural Language Understanding ☆2,257 · Updated Mar 7, 2024
- An open-source NLP research library, built on PyTorch ☆11,893 · Updated Nov 22, 2022
- LSTM and QRNN Language Model Toolkit for PyTorch ☆1,990 · Updated Feb 12, 2022
- Generate embeddings from large-scale graph-structured data ☆3,459 · Updated Mar 3, 2024
- Code for reproducing results in "Glow: Generative Flow with Invertible 1x1 Convolutions" ☆3,182 · Updated Jul 23, 2024
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,181 · Updated Nov 27, 2021
- Code for the paper "Language Models are Unsupervised Multitask Learners" ☆24,707 · Updated Aug 14, 2024
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆2,370 · Updated Mar 23, 2024
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- PyTorch implementation of the Quasi-Recurrent Neural Network, up to 16 times faster than NVIDIA's cuDNN LSTM ☆1,264 · Updated Feb 12, 2022
- Conditional Transformer Language Model for Controllable Generation ☆1,884 · Updated May 1, 2025
- Open Source Neural Machine Translation and (Large) Language Models in PyTorch ☆7,000 · Updated Oct 14, 2025
- Fast Block Sparse Matrices for PyTorch ☆549 · Updated Jan 21, 2021
- PyTorch extensions for high performance and large scale training ☆3,404 · Updated Apr 26, 2025
- A PyTorch implementation of the Transformer model in "Attention is All You Need" ☆9,654 · Updated Apr 16, 2024
- A natural language modeling framework based on PyTorch ☆6,306 · Updated Oct 17, 2022
- Generative flow based sequence-to-sequence toolkit written in Python ☆246 · Updated Jan 28, 2020
- The author's officially unofficial PyTorch BigGAN implementation ☆2,924 · Updated Jul 19, 2023
- On the Variance of the Adaptive Learning Rate and Beyond ☆2,549 · Updated Jul 31, 2021