Fraser-Greenlee / T5-VAE
Check out the new version at the link!
☆22 · Updated 5 years ago
Alternatives and similar repositories for T5-VAE
Users interested in T5-VAE are comparing it to the libraries listed below.
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 4 years ago
- Code for "Finetuning Pretrained Transformers into Variational Autoencoders" ☆40 · Updated 3 years ago
- ☆62 · Updated 3 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆76 · Updated 4 years ago
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆118 · Updated 4 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Updated 5 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 4 years ago
- ☆22 · Updated 4 years ago
- ☆70 · Updated 3 years ago
- ☆34 · Updated 7 months ago
- On Variational Learning of Controllable Representations for Text without Supervision https://arxiv.org/abs/1905.11975 ☆27 · Updated 5 years ago
- Efficient-Sentence-Embedding-using-Discrete-Cosine-Transform ☆17 · Updated 5 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Updated 4 years ago
- PyTorch code for the EMNLP 2020 paper "Embedding Words in Non-Vector Space with Unsupervised Graph Learning" ☆41 · Updated 4 years ago
- Implements Reformer: The Efficient Transformer in pytorch. ☆86 · Updated 5 years ago
- ☆50 · Updated 4 years ago
- Language Modeling Example with Transformers and PyTorch Lightning ☆65 · Updated 5 years ago
- Code for the paper "UnNatural Language Inference" to appear at ACL 2021 (Long Paper) ☆36 · Updated 4 years ago
- Create augmentation examples from MultiNLI by subject-object inversion and passivizing. ☆17 · Updated 4 years ago
- Code accompanying our papers on the "Generative Distributional Control" framework ☆118 · Updated 3 years ago
- Official Code for Towards Transparent and Explainable Attention Models paper (ACL 2020) ☆35 · Updated 3 years ago
- An implementation of masked language modeling for Pytorch, made as concise and simple as possible ☆179 · Updated 2 years ago
- A library for making Transformer Variational Autoencoders. (Extends the Huggingface/transformers library.) See the sketch after this list. ☆144 · Updated 4 years ago
- Official Pytorch implementation of "Roles and Utilization of Attention Heads in Transformer-based Neural Language Models" (ACL 2020) ☆16 · Updated 9 months ago
- Checking the interpretability of attention on text classification models ☆49 · Updated 6 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆167 · Updated last year
- Cascaded Text Generation with Markov Transformers ☆129 · Updated 2 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated 2 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆34 · Updated 4 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 5 years ago
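The VAE-related entries above (T5-VAE itself, "Finetuning Pretrained Transformers into Variational Autoencoders", and the Transformer Variational Autoencoder library) broadly share one construction: a pretrained encoder's hidden states are pooled into a Gaussian latent, and the decoder reconstructs the text from that latent. The sketch below illustrates that construction with plain Hugging Face `transformers` calls; it is not the API of any listed library, and the class name `T5VAESketch`, the `latent_dim` value, and the mean-pooling choice are assumptions made purely for illustration.

```python
# A minimal sketch of the Transformer-VAE idea (NOT the T5-VAE library's API):
# compress a T5 encoder's output into a low-dimensional latent, then decode
# from that latent. Assumes a recent Hugging Face transformers + PyTorch install.
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration


class T5VAESketch(nn.Module):
    def __init__(self, model_name="t5-small", latent_dim=32):
        super().__init__()
        self.t5 = T5ForConditionalGeneration.from_pretrained(model_name)
        d_model = self.t5.config.d_model
        self.to_mu = nn.Linear(d_model, latent_dim)        # latent mean
        self.to_logvar = nn.Linear(d_model, latent_dim)    # latent log-variance
        self.from_latent = nn.Linear(latent_dim, d_model)  # back to model width

    def forward(self, input_ids, attention_mask, labels):
        # Encode the input and mean-pool token states into one vector.
        enc = self.t5.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = enc.last_hidden_state.mean(dim=1)
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        # Reparameterisation trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Feed the latent back to the decoder as a length-1 "encoder output".
        memory = self.from_latent(z).unsqueeze(1)
        out = self.t5(encoder_outputs=(memory,), labels=labels)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return out.loss + kl  # reconstruction loss + KL regulariser
```

In practice the KL term is usually annealed or thresholded during training to avoid posterior collapse, a common difficulty for text VAEs that several of the repositories listed here discuss.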