facebookresearch / adaptive-span
Transformer training code for sequential tasks
☆612, updated 3 years ago
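adaptive-span implements the adaptive attention span of Sukhbaatar et al. (2019), in which each attention head learns how far back it attends. Below is a minimal sketch of the paper's soft masking function, not the repo's actual API; the module and parameter names (`AdaptiveSpanMask`, `max_span`, `ramp_size`) are illustrative:

```python
import torch
import torch.nn as nn

class AdaptiveSpanMask(nn.Module):
    """Soft masking function m_z(x) = clamp((R + z - x) / R, 0, 1) from
    "Adaptive Attention Span in Transformers": each head learns a span z,
    and attention beyond z is ramped to zero over a window of width R,
    making the effective context length differentiable."""

    def __init__(self, max_span: int, ramp_size: int = 32, init_ratio: float = 0.5):
        super().__init__()
        self.max_span = max_span
        self.ramp_size = ramp_size
        # Learnable fraction of max_span; the head's span is ratio * max_span.
        self.span_ratio = nn.Parameter(torch.tensor(init_ratio))

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (..., span) attention weights over relative positions,
        # index 0 = the most distant key, index -1 = the closest one.
        span = attn.size(-1)
        # Distance x of each key from the query.
        dist = torch.arange(span - 1, -1, -1, device=attn.device, dtype=attn.dtype)
        z = self.span_ratio.clamp(0, 1) * self.max_span
        mask = ((self.ramp_size + z - dist) / self.ramp_size).clamp(0, 1)
        attn = attn * mask
        # Renormalize so the surviving weights still sum to 1.
        return attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-8)
```

In the paper, an L1 penalty on the learned spans is added to the loss, so heads shrink their context unless longer attention actually helps.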
Alternatives and similar repositories for adaptive-span
Users interested in adaptive-span are comparing it to the repositories listed below.
- Fast, general, and tested differentiable structured prediction in PyTorch (☆1,113, updated 3 years ago)
- ☆395, updated 6 years ago
- Single Headed Attention RNN - "Stop thinking with your head" (☆1,183, updated 3 years ago)
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… (☆745, updated 3 years ago)
- Fully featured implementation of Routing Transformer (☆292, updated 3 years ago)
- Implementation of Universal Transformer in Pytorch (☆260, updated 6 years ago)
- Code for the paper "Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks" (☆580, updated 5 years ago)
- Latent Alignment and Variational Attention (☆327, updated 6 years ago)
- [ICLR'19] Trellis Networks for Sequence Modeling (☆472, updated 5 years ago)
- Implementation of the LAMB optimizer, https://arxiv.org/abs/1904.00962 (☆375, updated 4 years ago)
- [ICLR 2020] Lite Transformer with Long-Short Range Attention (☆608, updated 10 months ago)
- Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction (☆502, updated 4 years ago)
- ☆218, updated 4 years ago
- Simple XLNet implementation with Pytorch Wrapper (☆581, updated 5 years ago)
- My take on a practical implementation of Linformer for Pytorch (☆414, updated 2 years ago)
- DeLighT: Very Deep and Light-Weight Transformers (☆467, updated 4 years ago)
- Understanding the Difficulty of Training Transformers (☆329, updated 3 years ago)
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention (☆264, updated 3 years ago)
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" (☆1,575, updated 4 years ago)
- Sequence-to-Sequence learning using PyTorch (☆521, updated 5 years ago)
- Official PyTorch Repo for "ReZero is All You Need: Fast Convergence at Large Depth" (☆408, updated 10 months ago)
- Code for the Eager Translation Model from the paper "You May Not Need Attention" (☆294, updated 6 years ago)
- MASS: Masked Sequence to Sequence Pre-training for Language Generation (☆1,116, updated 2 years ago)
- The entmax mapping and its loss, a family of sparse softmax alternatives (☆438, updated 11 months ago); a minimal sparsemax sketch follows this list
- FastFormers - highly efficient transformer models for NLU (☆705, updated 2 months ago)
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" (☆472, updated 2 years ago)
- ☆323, updated 2 years ago
- Repository of code for the tutorial on Transfer Learning in NLP held at NAACL 2019 in Minneapolis, MN, USA (☆723, updated 5 years ago)
- Implementations of ideas from recent papers (☆393, updated 4 years ago)
- Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… (☆251, updated 3 years ago)
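On the entmax entry above: sparsemax is the simplest member of that family of sparse softmax alternatives, and its closed form is short enough to sketch from scratch. This standalone version is for illustration only; the entmax package itself ships optimized implementations of the full family.

```python
import torch

def sparsemax(z: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Sparsemax (Martins & Astudillo, 2016): the Euclidean projection of
    a score vector onto the probability simplex. Unlike softmax, it can
    assign exactly zero probability to low-scoring entries, the property
    that the entmax family generalizes."""
    z_sorted, _ = torch.sort(z, dim=dim, descending=True)
    # k = 1..n, shaped to broadcast along `dim`.
    shape = [1] * z.dim()
    shape[dim] = -1
    k = torch.arange(1, z.size(dim) + 1, device=z.device, dtype=z.dtype).view(shape)
    z_cumsum = z_sorted.cumsum(dim)
    # Support size: largest k with 1 + k * z_(k) > sum of the top-k scores.
    support = ((1 + k * z_sorted) > z_cumsum).to(z.dtype).sum(dim=dim, keepdim=True)
    # Threshold tau shifts the supported scores so they sum to 1.
    tau = (z_cumsum.gather(dim, support.long() - 1) - 1) / support
    return torch.clamp(z - tau, min=0.0)

# softmax spreads mass over every entry; sparsemax zeroes out the tail.
print(sparsemax(torch.tensor([1.0, 0.8, 0.1])))  # tensor([0.6000, 0.4000, 0.0000])
```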