THUDM / FewNLU
☆66 · Updated 3 years ago
Alternatives and similar repositories for FewNLU
Users interested in FewNLU are comparing it to the repositories listed below
- EMNLP 2021: Single-dataset Experts for Multi-dataset Question-Answering ☆70 · Updated 3 years ago
- ☆117 · Updated 3 years ago
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆93 · Updated 3 years ago
- Source code for the paper: WhiteningBERT: An Easy Unsupervised Sentence Embedding Approach. ☆55 · Updated 4 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240 ☆168 · Updated 2 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 · Updated last year
- Source code for "A Simple but Effective Pluggable Entity Lookup Table for Pre-trained Language Models" ☆44 · Updated 2 years ago
- Few-shot NLP benchmark for unified, rigorous eval ☆91 · Updated 3 years ago
- Code for our ACL 2021 paper: Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization ☆94 · Updated 3 years ago
- [ACL 2020] Structure-Level Knowledge Distillation For Multilingual Sequence Labeling ☆71 · Updated 2 years ago
- Code and models for the paper "End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering" (NeurIPS 20… ☆109 · Updated 3 years ago
- The source code of the DR-BERT model and baselines ☆38 · Updated 3 years ago
- This repository contains the code for "How many data points is a prompt worth?" ☆48 · Updated 4 years ago
- Code and data for "Retrieval Enhanced Model for Commonsense Generation" (ACL-IJCNLP 2021). ☆28 · Updated 3 years ago
- ☆78 · Updated 3 years ago
- ☆63 · Updated 2 years ago
- SNCSE: Contrastive Learning for Unsupervised Sentence Embedding with Soft Negative Samples ☆76 · Updated 2 years ago
- [EACL'21] Non-Autoregressive with Pretrained Language Model ☆62 · Updated 2 years ago
- [ACL 2022] Ditch the Gold Standard: Re-evaluating Conversational Question Answering ☆45 · Updated 3 years ago
- ☆92 · Updated 3 years ago
- This is the repository for SemEval 2021 Task 4: Reading Comprehension of Abstract Meaning. It includes code for baseline models and data. ☆29 · Updated 3 years ago
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 2 years ago
- The source code for the Cutoff data augmentation approach proposed in this paper: "A Simple but Tough-to-Beat Data Augmentation Approach … ☆63 · Updated 4 years ago
- ☆71 · Updated 3 years ago
- Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations ☆132 · Updated this week
- Source code for a commonsense reasoning paper published at ACL 2020. ☆29 · Updated 11 months ago
- Paradigm shift in natural language processing ☆42 · Updated 3 years ago
- ☆35 · Updated 3 years ago
- Source code for EMNLP 2020 paper "Coreferential Reasoning Learning for Language Representation" ☆68 · Updated 2 years ago
- This project maintains a reading list for general text generation tasks ☆65 · Updated 3 years ago