lucidrains / distilled-retriever-pytorch
Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever"
☆32 · Updated 4 years ago
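The blurb above is terse; at its core, the paper's procedure trains the retriever to match the reader's aggregated cross-attention over retrieved passages. A minimal sketch of such a KL-based objective (not the repository's actual API — the function name and tensor shapes here are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def distillation_loss(retriever_scores, reader_attention_scores):
    """Hypothetical sketch of a reader-to-retriever distillation objective.

    retriever_scores:        (batch, n_passages) raw query-passage similarity scores
    reader_attention_scores: (batch, n_passages) reader cross-attention, aggregated
                             per passage (treated as the teacher signal)
    """
    # Retriever distribution over passages, in log space for kl_div
    log_p_retriever = F.log_softmax(retriever_scores, dim=-1)
    # Teacher distribution from the reader; detached so gradients
    # flow only into the retriever
    p_reader = F.softmax(reader_attention_scores.detach(), dim=-1)
    # KL(p_reader || p_retriever), averaged over the batch
    return F.kl_div(log_p_retriever, p_reader, reduction="batchmean")
```

The loss is zero when the retriever's distribution matches the reader's, and positive otherwise, pushing the retriever toward passages the reader attends to.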
Alternatives and similar repositories for distilled-retriever-pytorch
Users interested in distilled-retriever-pytorch are comparing it to the libraries listed below.
- Code for EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Updated 4 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago
- Cascaded Text Generation with Markov Transformers ☆129 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- ICLR 2019, Multilingual Neural Machine Translation with Knowledge Distillation ☆70 · Updated 4 years ago
- PyTorch code for the EMNLP 2020 paper "Embedding Words in Non-Vector Space with Unsupervised Graph Learning" ☆41 · Updated 4 years ago
- [ACL '20] Highway Transformer: A Gated Transformer. ☆33 · Updated 3 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆76 · Updated 4 years ago
- A Pytorch implementation of the Reformer Network (https://openreview.net/pdf?id=rkgNKkHtvB) ☆53 · Updated 2 years ago
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020) ☆29 · Updated 4 years ago
- The implementation of the multi-branch attentive Transformer (MAT). ☆33 · Updated 4 years ago
- ☆29 · Updated 3 years ago
- Source code for the EMNLP 2020 long paper "Token-level Adaptive Training for Neural Machine Translation". ☆20 · Updated 2 years ago
- Factorization of the neural parameter space for zero-shot multi-lingual and multi-task transfer ☆39 · Updated 4 years ago
- ☆22 · Updated 4 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets with SoTA performance. ☆85 · Updated 2 years ago
- The implementation of "Neural Machine Translation without Embeddings", NAACL 2021 ☆33 · Updated 4 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 4 years ago
- ☆219 · Updated 5 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated 2 years ago
- Code for our EMNLP 2020 paper "AIN: Fast and Accurate Sequence Labeling with Approximate Inference Network" ☆19 · Updated 2 years ago
- Code for the SIGDIAL 2019 Best Paper "Structured Fusion Networks for Dialog" (https://arxiv.org/abs/1907.10016) ☆31 · Updated 6 years ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆22 · Updated 2 years ago
- ☆53 · Updated 4 years ago
- CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training ☆32 · Updated 3 years ago