bytedance / ParaGen
ParaGen is a PyTorch deep learning framework for parallel sequence generation.
☆185 · Updated 3 years ago
Alternatives and similar repositories for ParaGen
Users interested in ParaGen are comparing it to the libraries listed below.
- ☆167 · Updated 4 years ago
- Introduction to CPM · ☆165 · Updated 4 years ago
- Implementation of "Glancing Transformer for Non-Autoregressive Neural Machine Translation" · ☆137 · Updated 2 years ago
- ☆120 · Updated 4 years ago
- Finetune CPM-1 · ☆75 · Updated 2 years ago
- Code for CPM-2 Pre-Train · ☆158 · Updated 2 years ago
- Finetune CPM-2 · ☆81 · Updated 2 years ago
- Code, data, and demo for the paper "Controllable Generation from Pre-trained Language Models via Inverse Prompting" · ☆124 · Updated 3 years ago
- ☆220 · Updated 3 years ago
- A PyTorch-based model pruning toolkit for pre-trained language models · ☆388 · Updated 2 years ago
- Code for the paper "Vocabulary Learning via Optimal Transport for Neural Machine Translation" · ☆442 · Updated 3 years ago
- ☆54 · Updated 3 years ago
- FLASHQuad_pytorch · ☆68 · Updated 3 years ago
- An upgraded version of RoFormer · ☆154 · Updated 3 years ago
- ☆254 · Updated 3 years ago
- Code repository for the ACL 2022 paper "Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Tra…" · ☆52 · Updated 3 years ago
- Code for the ACL 2020 paper "Rigid Formats Controlled Text Generation": https://www.aclweb.org/anthology/2020.acl-main.68/ · ☆236 · Updated 4 years ago
- Chinese GPT2: pre-training and fine-tuning framework for text generation · ☆187 · Updated 4 years ago
- Tracking progress in NLG for task-oriented dialogue systems (resources, code, new frontiers, etc.) · ☆135 · Updated 3 years ago
- A unified tokenization tool for images, Chinese, and English · ☆153 · Updated 2 years ago
- ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) · ☆316 · Updated 2 years ago
- A Dataset for Multi-Turn Dialogue Reasoning · ☆333 · Updated 5 years ago
- Pretrain CPM-1 · ☆52 · Updated 4 years ago
- A library for building hierarchical text representations and corresponding downstream applications · ☆79 · Updated last year
- Easy and Efficient Transformer: a scalable inference solution for large NLP models · ☆265 · Updated last year
- NLU & NLG (zero-shot) based on the mengzi-t5-base-mt pretrained model · ☆76 · Updated 3 years ago
- Code for the paper "Instantaneous Grammatical Error Correction with Shallow Aggressive Decoding" (ACL-IJCNLP 2021) · ☆41 · Updated 4 years ago
- ☆78 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2 · ☆69 · Updated 2 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators · ☆91 · Updated 4 years ago