facebookresearch/adaptive-span
Transformer training code for sequential tasks
☆610, updated 3 years ago
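adaptive-span accompanies the paper "Adaptive Attention Span in Transformers" (Sukhbaatar et al., 2019), in which each attention head learns how far back it needs to attend. Below is a minimal sketch of that soft span mask, not the repository's actual code: the class name, the `ramp` width, and the single learnable span per mask instance are illustrative assumptions.

```python
# Sketch (assumed, not the repo's implementation) of the soft span mask from
# "Adaptive Attention Span in Transformers": each head learns a span z, and
# attention weights for keys further than roughly z tokens away are smoothly
# masked out, so the effective context length is learned per head.
import torch
import torch.nn as nn

class AdaptiveSpanMask(nn.Module):
    def __init__(self, max_span: int, ramp: int = 32):
        super().__init__()
        self.max_span = max_span            # hard upper bound on the span
        self.ramp = ramp                    # R: width of the soft ramp
        # learnable span fraction in [0, 1]; z = span_frac * max_span
        self.span_frac = nn.Parameter(torch.zeros(1))

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (..., span) attention weights over the last `span` positions,
        # ordered from oldest to newest (the last entry is the current token).
        span = attn.size(-1)
        z = self.span_frac.clamp(0, 1) * self.max_span
        # distance of each key position from the current query
        dist = torch.arange(span - 1, -1, -1, device=attn.device, dtype=attn.dtype)
        # soft mask: 1 within the span, linear ramp of width R, then 0
        mask = ((z + self.ramp - dist) / self.ramp).clamp(0, 1)
        masked = attn * mask
        # renormalize so the masked weights still sum to 1
        return masked / masked.sum(dim=-1, keepdim=True).clamp(min=1e-8)
```

Because the mask is differentiable in `span_frac`, the span can be trained jointly with the rest of the model, typically with an L1 penalty on the spans so heads keep their context as short as the task allows.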
Alternatives and similar repositories for adaptive-span:
Users interested in adaptive-span are comparing it to the libraries listed below.
- Fast, general, and tested differentiable structured prediction in PyTorch (☆1,110, updated 2 years ago)
- Implementation of Universal Transformer in Pytorch (☆259, updated 6 years ago)
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… (☆745, updated 2 years ago)
- Code for the paper "Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks" (☆578, updated 5 years ago)
- Fully featured implementation of Routing Transformer (☆289, updated 3 years ago)
- Simple XLNet implementation with Pytorch Wrapper (☆583, updated 5 years ago)
- Latent Alignment and Variational Attention (☆327, updated 6 years ago)
- Single Headed Attention RNN - "Stop thinking with your head" (☆1,180, updated 3 years ago)
- Sequence-to-Sequence learning using PyTorch (☆522, updated 5 years ago)
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention (☆258, updated 3 years ago)
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" (☆1,552, updated 4 years ago)
- Official PyTorch Repo for "ReZero is All You Need: Fast Convergence at Large Depth" (☆407, updated 7 months ago)
- [ICLR'19] Trellis Networks for Sequence Modeling (☆471, updated 5 years ago)
- Code for the Eager Translation Model from the paper "You May Not Need Attention" (☆295, updated 6 years ago)
- Implementation of the LAMB optimizer from https://arxiv.org/abs/1904.00962 (☆371, updated 4 years ago)
- [ICLR 2020] Lite Transformer with Long-Short Range Attention (☆606, updated 7 months ago)
- Understanding the Difficulty of Training Transformers (☆328, updated 2 years ago)
- Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction (☆502, updated 3 years ago)
- Fast Block Sparse Matrices for Pytorch (☆546, updated 4 years ago)
- Neural Text Generation with Unlikelihood Training (☆309, updated 3 years ago)
- DeLighT: Very Deep and Light-Weight Transformers (☆467, updated 4 years ago)
- Pytorch implementation of R-Transformer. Some parts of the code are adapted from the implementations of TCN and Transformer. (☆227, updated 5 years ago)
- The entmax mapping and its loss, a family of sparse softmax alternatives. (☆426, updated 8 months ago)
- FastFormers - highly efficient transformer models for NLU (☆704, updated last year)
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" (☆471, updated 2 years ago)
- My take on a practical implementation of Linformer for Pytorch. (☆412, updated 2 years ago)
- An implementation of DeepMind's Relational Recurrent Neural Networks (NeurIPS 2018) in PyTorch. (☆245, updated 6 years ago)