haorannlp / mix
Code for "Mixed Cross Entropy Loss for Neural Machine Translation"
☆20 · Updated 4 years ago
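For readers unfamiliar with the objective, the sketch below illustrates one plausible form of a mixed cross-entropy loss in PyTorch: standard token-level cross entropy on the gold token blended with cross entropy on the model's own most-probable token, controlled by a mixing coefficient `m`. The function name, the padding mask, and the handling of `m` are illustrative assumptions, not the repository's implementation.

```python
# Hedged sketch of a mixed cross-entropy objective (not the authors' code).
# Assumes `logits` has shape (batch, seq_len, vocab) and `gold` has shape
# (batch, seq_len); `m = 1.0` recovers standard cross entropy.
import torch
import torch.nn.functional as F

def mixed_cross_entropy(logits, gold, m, pad_id=0):
    vocab = logits.size(-1)
    flat_logits = logits.reshape(-1, vocab)
    flat_gold = gold.reshape(-1)
    log_probs = F.log_softmax(flat_logits, dim=-1)

    # Cross entropy on the gold token (padding positions ignored).
    ce_gold = F.nll_loss(log_probs, flat_gold, ignore_index=pad_id, reduction="none")

    # Cross entropy on the model's own most-probable token.
    pred = flat_logits.argmax(dim=-1).detach()
    ce_pred = F.nll_loss(log_probs, pred, reduction="none")

    # Blend the two terms and average over non-padding positions.
    mask = (flat_gold != pad_id).float()
    loss = (m * ce_gold + (1.0 - m) * ce_pred) * mask
    return loss.sum() / mask.sum()
```

In this reading, annealing `m` from 1.0 toward a smaller value over training gradually lets the model's own predictions share the target probability mass; the actual schedule and formulation are those of the paper, not this sketch.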
Alternatives and similar repositories for mix
Users interested in mix are comparing it to the libraries listed below
- Source code for the EMNLP 2020 long paper "Token-level Adaptive Training for Neural Machine Translation". ☆20 · Updated 2 years ago
- Source code for the paper "Knowledge Inheritance for Pre-trained Language Models". ☆38 · Updated 3 years ago
- [EACL'21] Non-Autoregressive with Pretrained Language Model ☆62 · Updated 2 years ago
- A Structured Span Selector (NAACL 2022). A structured span selector with a WCFG for span selection tasks (coreference resolution, semanti… ☆21 · Updated 3 years ago
- Source code for "Sequence-Level Training for Non-Autoregressive Neural Machine Translation". ☆24 · Updated 3 years ago
- Implementation of our paper "Self-training Sampling with Monolingual Data Uncertainty for Neural Machine Translation" to appear in ACL-20… ☆31 · Updated 4 years ago
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" in AAAI 2021. ☆54 · Updated 2 years ago
- Open-Retrieval Conversational Machine Reading: A new setting & OR-ShARC dataset ☆13 · Updated 2 years ago
- Code of the COLING22 paper "uChecker: Masked Pretrained Language Models as Unsupervised Chinese Spelling Checkers" ☆19 · Updated 3 years ago
- ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhi… ☆49 · Updated 4 years ago
- ☆18 · Updated 4 years ago
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆93 · Updated 3 years ago
- Code for the EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- Code for the paper "Partially-Aligned Data-to-Text Generation with Distant Supervision" in EMNLP 2020. ☆19 · Updated 4 years ago
- [ACL'20] Highway Transformer: A Gated Transformer. ☆33 · Updated 3 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated 2 years ago
- [IJCAI'19] Code for "Self-attentive Biaffine Dependency Parsing" ☆16 · Updated 6 years ago
- Lite Self-Training ☆29 · Updated 2 years ago
- ICLR 2021: Pre-Training for Context Representation in Conversational Semantic Parsing ☆31 · Updated 3 years ago
- ☆12 · Updated 3 years ago
- ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation ☆25 · Updated 4 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆47 · Updated 3 years ago
- ☆15 · Updated 3 years ago
- Code and pre-trained models of the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 2 years ago
- Source code for "Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation" ☆18 · Updated 5 years ago
- ☆51 · Updated 5 years ago
- Code for the EMNLP 2021 paper "Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting" ☆17 · Updated 3 years ago
- [ACL'21 Findings] Why Machine Reading Comprehension Models Learn Shortcuts? ☆16 · Updated 2 years ago
- ☆29 · Updated 3 years ago