linydub / azureml-greenai-txtsum
Samples for fine-tuning HuggingFace models with AzureML
☆10 · Updated 2 years ago
Related projects:
- Lite Self-Training ☆29 · Updated last year
- [ICLR 2022] Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators ☆24 · Updated last year
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆120 · Updated last year
- BANG is a new pretraining model to Bridge the gap between Autoregressive (AR) and Non-autoregressive (NAR) Generation. AR and NAR generat… ☆28 · Updated 2 years ago
- SNCSE: Contrastive Learning for Unsupervised Sentence Embedding with Soft Negative Samples ☆73 · Updated 2 years ago
- A large scientific paraphrase dataset for longer paraphrase generation ☆37 · Updated last year
- Emotion-Aware Dialogue Response Generation by Multi-Task Learning ☆13 · Updated 2 years ago
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆91 · Updated 2 years ago
- Knowledge Infused Decoding ☆71 · Updated 8 months ago
- Code for the paper "Document-Level Paraphrase Generation with Sentence Rewriting and Reordering" by Zhe Lin, Yitao Cai and Xiaojun Wan. This pa… ☆24 · Updated 2 years ago
- Source code for the paper "Knowledge Inheritance for Pre-trained Language Models" ☆38 · Updated 2 years ago
- Code and dataset "ZEST" from "Learning from Task Descriptions" (Weller et al., EMNLP 2020) ☆17 · Updated 3 years ago
- Code for the EMNLP 2020 Findings paper "BERT for Monolingual and Cross-Lingual Reverse Dictionary" ☆19 · Updated 3 years ago
- Generative Retrieval Transformer ☆29 · Updated last year
- Official repository for the MIA 2022 (NAACL 2022 Workshop) shared task on Cross-lingual Open-Retrieval Question Answering ☆31 · Updated 2 years ago
- The Multitask Long Document Benchmark ☆38 · Updated last year
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated last year