bytedance / ParaGen
ParaGen is a PyTorch deep learning framework for parallel sequence generation.
☆186 · Updated 2 years ago
Alternatives and similar repositories for ParaGen
Users interested in ParaGen are comparing it to the libraries listed below.
- Introduction to CPM ☆165 · Updated 3 years ago
- Code for CPM-2 Pre-Train ☆158 · Updated 2 years ago
- Finetune CPM-2 ☆82 · Updated 2 years ago
- ☆168 · Updated 3 years ago
- ☆120 · Updated 3 years ago
- ☆53 · Updated 3 years ago
- Finetune CPM-1 ☆75 · Updated 2 years ago
- Pretrain CPM-1 ☆53 · Updated 4 years ago
- Implementation of "Glancing Transformer for Non-Autoregressive Neural Machine Translation" ☆137 · Updated 2 years ago
- ☆252 · Updated 2 years ago
- NLU & NLG (zero-shot) based on the mengzi-t5-base-mt pretrained model ☆74 · Updated 2 years ago
- ☆219 · Updated 2 years ago
- A unified tokenization tool for images, Chinese, and English ☆151 · Updated 2 years ago
- Code, data, and demo for the paper "Controllable Generation from Pre-trained Language Models via Inverse Prompting" ☆122 · Updated 2 years ago
- FLASHQuad_pytorch ☆67 · Updated 3 years ago
- A PyTorch-based model pruning toolkit for pre-trained language models ☆388 · Updated last year
- MD5 links for a Chinese book corpus ☆216 · Updated last year
- Code repository for the ACL 2022 paper "Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Tra…" ☆52 · Updated 3 years ago
- An upgraded version of RoFormer ☆153 · Updated 3 years ago
- Code for the paper "Vocabulary Learning via Optimal Transport for Neural Machine Translation" ☆441 · Updated 3 years ago
- EVA: Large-scale Pre-trained Chit-Chat Models ☆307 · Updated 2 years ago
- A Dataset for Multi-Turn Dialogue Reasoning ☆326 · Updated 4 years ago
- A zero-shot learning evaluation benchmark, Chinese edition ☆57 · Updated 4 years ago
- NTK-scaled version of ALiBi position encoding in Transformer ☆69 · Updated last year
- Chinese GPT2: pre-training and fine-tuning framework for text generation ☆187 · Updated 4 years ago
- SOTA solution and online demo for the CTC2021 Chinese text correction competition ☆72 · Updated 2 years ago
- A backup of Zhihu exposé posts about 香侬科技 (Beijing Shannon Huiyu Technology Co., Ltd.) ☆42 · Updated 5 years ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆69 · Updated 2 years ago
- A paper list of pre-trained language models (PLMs) ☆141 · Updated 3 years ago
- ☆77 · Updated 2 years ago