Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
☆1,611, updated Aug 12, 2020
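The paper behind this repository combines a local attention window with a strided pattern so each position attends to O(√n)-ish positions instead of all n. As a rough illustration only (the function name and loop are hypothetical, not the repository's actual kernel code), a causal strided mask can be sketched like this:

```python
import numpy as np

def strided_sparse_mask(n, stride):
    """Boolean mask: True where query position i may attend to key position j.

    Illustrative sketch of a strided sparse-attention pattern: each query
    attends to the previous `stride` positions plus every `stride`-th
    earlier position, restricted to the causal (lower-triangular) region.
    """
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):  # causal: attend only to self and the past
            if i - j < stride or (i - j) % stride == 0:
                mask[i, j] = True
    return mask

mask = strided_sparse_mask(64, 8)
# Each row keeps roughly stride + n/stride positions instead of n,
# which is what makes the attention pattern sparse.
```

In a real kernel this mask is never materialized densely; the block-sparse CUDA kernels listed below compute only the nonzero blocks.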
Alternatives and similar repositories for sparse_attention
Users interested in sparse_attention are comparing it to the libraries listed below.
- Efficient GPU kernels for block-sparse matrix multiplication and convolution (☆1,064, updated Jun 8, 2023)
- Transformer training code for sequential tasks (☆609, updated Sep 14, 2021)
- Code for the paper "Distribution Augmentation for Generative Modeling", ICML 2020 (☆132, updated Apr 24, 2023)
- XLNet: Generalized Autoregressive Pretraining for Language Understanding (☆6,178, updated May 28, 2023)
- ☆3,697, updated Sep 21, 2022
- PyTorch original implementation of Cross-lingual Language Model Pretraining (☆2,928, updated Feb 14, 2023)
- Code for the Eager Translation Model from the paper "You May Not Need Attention" (☆294, updated Dec 17, 2018)
- Longformer: The Long-Document Transformer (☆2,190, updated Feb 8, 2023)
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training" (☆2,284, updated Jan 25, 2019)
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" (☆6,502, updated Jan 14, 2026)
- Reformer, the efficient Transformer, in PyTorch (☆2,190, updated Jun 21, 2023)
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python (☆32,201, updated Sep 30, 2025)
- PyTorch library for fast transformer implementations (☆1,767, updated Mar 23, 2023)
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research (☆17,149, updated Jun 2, 2023)
- A modular framework for vision & language multimodal research from Facebook AI Research (FAIR) (☆5,628, updated this week)
- Lingvo (☆2,860, updated Mar 30, 2026)
- [ICLR'19] Trellis Networks for Sequence Modeling (☆471, updated Aug 20, 2019)
- 🐥 A PyTorch implementation of OpenAI's finetuned transformer language model, with a script to import the weights pre-trained by OpenAI (☆1,522, updated Aug 9, 2021)
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) (☆2,113, updated Jan 4, 2022)
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch (☆8,947, updated this week)
- An optimizer that trains as fast as Adam and generalizes as well as SGD (☆2,907, updated Jul 23, 2023)
- Multi-Task Deep Neural Networks for Natural Language Understanding (☆2,257, updated Mar 7, 2024)
- An open-source NLP research library, built on PyTorch (☆11,893, updated Nov 22, 2022)
- LSTM and QRNN Language Model Toolkit for PyTorch (☆1,991, updated Feb 12, 2022)
- Generate embeddings from large-scale graph-structured data (☆3,459, updated Mar 3, 2024)
- Code for reproducing results in "Glow: Generative Flow with Invertible 1x1 Convolutions" (☆3,182, updated Jul 23, 2024)
- Single Headed Attention RNN - "Stop thinking with your head" (☆1,181, updated Nov 27, 2021)
- Code for the paper "Language Models are Unsupervised Multitask Learners" (☆24,753, updated Aug 14, 2024)
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (☆2,373, updated Mar 23, 2024)
- Ongoing research training transformer models at scale (☆15,985, updated this week)
- PyTorch implementation of the Quasi-Recurrent Neural Network, up to 16 times faster than NVIDIA's cuDNN LSTM (☆1,264, updated Feb 12, 2022)
- Conditional Transformer Language Model for Controllable Generation (☆1,882, updated May 1, 2025)
- Open Source Neural Machine Translation and (Large) Language Models in PyTorch (☆7,001, updated Oct 14, 2025)
- Fast Block Sparse Matrices for PyTorch (☆550, updated Jan 21, 2021)
- PyTorch extensions for high performance and large scale training (☆3,404, updated Apr 26, 2025)
- A PyTorch implementation of the Transformer model in "Attention Is All You Need" (☆9,667, updated Apr 16, 2024)
- A natural language modeling framework based on PyTorch (☆6,304, updated Oct 17, 2022)
- The author's officially unofficial PyTorch BigGAN implementation (☆2,928, updated Jul 19, 2023)
- Generative Flow based Sequence-to-Sequence Toolkit written in Python (☆246, updated Jan 28, 2020)