facebookresearch / adaptive-span
Transformer training code for sequential tasks
☆609 · Updated 3 years ago
Related projects
Alternatives and complementary repositories for adaptive-span
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,108 · Updated 2 years ago
- ☆394 · Updated 6 years ago
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… ☆745 · Updated 2 years ago
- Implementation of Universal Transformer in Pytorch ☆258 · Updated 6 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,178 · Updated 2 years ago
- Code for the paper "Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks" ☆578 · Updated 5 years ago
- Official PyTorch Repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆407 · Updated 3 months ago
- ☆212 · Updated 4 years ago
- Code for the Eager Translation Model from the paper "You May Not Need Attention" ☆293 · Updated 5 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆470 · Updated 2 years ago
- Repository of code for the tutorial on Transfer Learning in NLP held at NAACL 2019 in Minneapolis, MN, USA ☆720 · Updated 5 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆598 · Updated 4 months ago
- [ICLR'19] Trellis Networks for Sequence Modeling ☆473 · Updated 5 years ago
- ☆310 · Updated 2 years ago
- Simple XLNet implementation with Pytorch Wrapper ☆577 · Updated 5 years ago
- Fully featured implementation of Routing Transformer ☆284 · Updated 3 years ago
- 🌊HMTL: Hierarchical Multi-Task Learning - A State-of-the-Art neural network model for several NLP tasks based on PyTorch and AllenNLP ☆1,191 · Updated last year
- Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction ☆501 · Updated 3 years ago
- Understanding the Difficulty of Training Transformers ☆328 · Updated 2 years ago
- My take on a practical implementation of Linformer for Pytorch. ☆407 · Updated 2 years ago
- Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning ☆311 · Updated 4 years ago
- Latent Alignment and Variational Attention ☆326 · Updated 6 years ago
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,524 · Updated 4 years ago
- Dynamic Meta-Embeddings for Improved Sentence Representations ☆332 · Updated 4 years ago
- A companion repository for the "Writing Code for NLP Research" tutorial at EMNLP 2018 ☆555 · Updated 6 years ago
- A repository containing tutorials for practical NLP using PyTorch ☆530 · Updated 5 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆250 · Updated 3 years ago
- The entmax mapping and its loss, a family of sparse softmax alternatives. ☆416 · Updated 4 months ago
- FastFormers - highly efficient transformer models for NLU ☆701 · Updated 10 months ago