facebookresearch / adaptive-span
Transformer training code for sequential tasks
☆611 · Updated 4 years ago
Alternatives and similar repositories for adaptive-span
Users interested in adaptive-span are comparing it to the repositories listed below.
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,115 · Updated 3 years ago
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… ☆746 · Updated 3 years ago
- Understanding the Difficulty of Training Transformers ☆330 · Updated 3 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,184 · Updated 3 years ago
- ☆396 · Updated 6 years ago
- Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction ☆502 · Updated 4 years ago
- Simple XLNet implementation with Pytorch Wrapper ☆581 · Updated 6 years ago
- Implementation of Universal Transformer in Pytorch ☆263 · Updated 6 years ago
- Code for the paper "Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks" ☆580 · Updated 6 years ago
- Neural Text Generation with Unlikelihood Training ☆309 · Updated 4 years ago
- ☆219 · Updated 5 years ago
- Implementation of https://arxiv.org/abs/1904.00962 ☆376 · Updated 4 years ago
- Sequence-to-Sequence learning using PyTorch ☆521 · Updated 5 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆472 · Updated 3 years ago
- Latent Alignment and Variational Attention ☆327 · Updated 6 years ago
- ☆324 · Updated 2 years ago
- Minimal tutorial on packing and unpacking sequences in pytorch ☆210 · Updated 6 years ago
- Pytorch Implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆227 · Updated 4 years ago
- Fully featured implementation of Routing Transformer ☆297 · Updated 3 years ago
- 🌊HMTL: Hierarchical Multi-Task Learning - A State-of-the-Art neural network model for several NLP tasks based on PyTorch and AllenNLP ☆1,194 · Updated 2 years ago
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,587 · Updated 5 years ago
- Visualization for Sequential Neural Networks with Attention ☆458 · Updated 2 years ago
- Training Transformer-XL on 128 GPUs ☆140 · Updated 5 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆252 · Updated 3 years ago
- FastFormers - highly efficient transformer models for NLU ☆707 · Updated 5 months ago
- DeLighT: Very Deep and Light-Weight Transformers ☆468 · Updated 4 years ago
- Code for the Eager Translation Model from the paper You May Not Need Attention ☆295 · Updated 6 years ago
- The entmax mapping and its loss, a family of sparse softmax alternatives. ☆445 · Updated last year
- Unsupervised Neural Machine Translation ☆474 · Updated 5 years ago
- Fast BPE ☆676 · Updated last year