facebookresearch / adaptive-span
Transformer training code for sequential tasks; official implementation of "Adaptive Attention Span in Transformers" (Sukhbaatar et al., ACL 2019)
☆611 · Updated 3 years ago
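For context, adaptive-span replaces a Transformer head's fixed attention window with a span that is learned jointly with the rest of the model: each head applies a soft mask that keeps recent positions, ramps down linearly over a short window, and zeroes out everything further back, so heads that only need local context shrink their span and save memory and compute. The snippet below is a minimal sketch of that soft masking function as described in the paper, not code taken from the repository; the class and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class AdaptiveSpanMask(nn.Module):
    """Soft mask m_z(x) = clamp((R + z - x) / R, 0, 1) from the Adaptive
    Attention Span paper; names here are illustrative, not the repo's API."""

    def __init__(self, max_span: int, ramp_size: int = 32):
        super().__init__()
        self.max_span = max_span
        self.ramp_size = ramp_size
        # learnable span fraction in [0, 1]; effective span is fraction * max_span
        self.span_frac = nn.Parameter(torch.zeros(1))

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (..., L) attention weights over the last L key positions,
        # ordered oldest -> newest along the last dimension
        L = attn.size(-1)
        z = self.span_frac.clamp(0, 1) * self.max_span
        # distance of each key from the current query: newest key has distance 0
        dist = torch.arange(L - 1, -1, -1, device=attn.device, dtype=attn.dtype)
        # 1 inside the span, linear ramp of width ramp_size, 0 beyond it
        mask = ((z + self.ramp_size - dist) / self.ramp_size).clamp(0, 1)
        masked = attn * mask
        # re-normalize so the surviving weights still sum to 1
        return masked / (masked.sum(dim=-1, keepdim=True) + 1e-8)
```

In the paper, an L1 penalty on the learned span is added to the training loss, pushing each head toward the shortest span it can use without hurting the objective.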
Alternatives and similar repositories for adaptive-span:
Users interested in adaptive-span are comparing it to the repositories listed below.
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,112 · Updated 3 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆607 · Updated 9 months ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,183 · Updated 3 years ago
- Implementation of Universal Transformer in Pytorch ☆259 · Updated 6 years ago
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… ☆745 · Updated 3 years ago
- Understanding the Difficulty of Training Transformers ☆329 · Updated 2 years ago
- My take on a practical implementation of Linformer for Pytorch. ☆413 · Updated 2 years ago
- Fully featured implementation of Routing Transformer ☆291 · Updated 3 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆472 · Updated 2 years ago
- 🌊HMTL: Hierarchical Multi-Task Learning - A State-of-the-Art neural network model for several NLP tasks based on PyTorch and AllenNLP ☆1,194 · Updated last year
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,569 · Updated 4 years ago
- The entmax mapping and its loss, a family of sparse softmax alternatives. ☆432 · Updated 10 months ago
- Latent Alignment and Variational Attention ☆327 · Updated 6 years ago
- ☆394 · Updated 6 years ago
- ☆218 · Updated 4 years ago
- Implementation of https://arxiv.org/abs/1904.00962 ☆374 · Updated 4 years ago
- DeLighT: Very Deep and Light-Weight Transformers ☆467 · Updated 4 years ago
- FastFormers - highly efficient transformer models for NLU ☆706 · Updated last month
- Code for the paper "Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks" ☆578 · Updated 5 years ago
- [ICLR'19] Trellis Networks for Sequence Modeling ☆472 · Updated 5 years ago
- Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning ☆311 · Updated 4 years ago
- Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction ☆502 · Updated 3 years ago
- Simple XLNet implementation with Pytorch Wrapper ☆582 · Updated 5 years ago
- Sequence-to-Sequence learning using PyTorch ☆522 · Updated 5 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆262 · Updated 3 years ago
- Official PyTorch Repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆407 · Updated 9 months ago
- Neural Text Generation with Unlikelihood Training ☆309 · Updated 3 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- A companion repository for the "Writing code for NLP Research" Tutorial at EMNLP 2018 ☆555 · Updated 6 years ago
- MASS: Masked Sequence to Sequence Pre-training for Language Generation ☆1,119 · Updated 2 years ago