lifu-tu / Study-NLP-Robustness
Code for TACL 2020 paper "An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models"
☆14, updated 5 years ago
Alternatives and similar repositories for Study-NLP-Robustness
Users interested in Study-NLP-Robustness are comparing it to the libraries listed below:
- Code and datasets for the EMNLP 2020 paper "Calibration of Pre-trained Transformers" (☆61, updated 2 years ago)
- Code for the paper "Factorising Meaning and Form for Intent-Preserving Paraphrasing", Tom Hosking & Mirella Lapata (ACL 2021) (☆27, updated 2 years ago)
- Code and data for "Debiasing Methods in Natural Language Understanding Make Bias More Accessible" (☆14, updated 3 years ago)
- NILE: Natural Language Inference with Faithful Natural Language Explanations (☆30, updated 2 years ago)
- ☆42, updated 4 years ago
- [EMNLP 2020] Collective HumAn OpinionS on Natural Language Inference Data (☆40, updated 3 years ago)
- ☆58, updated 3 years ago
- Code for "Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP" (☆88, updated 4 years ago)
- Data and code for the EMNLP 2020 paper "Exploring and Predicting Transferability across NLP Tasks" (☆49, updated 4 years ago)
- Code for "How many data points is a prompt worth?" (☆48, updated 4 years ago)
- Code accompanying papers on the "Generative Distributional Control" framework (☆118, updated 2 years ago)
- [ACL 2020] Towards Debiasing Sentence Representations (☆66, updated 3 years ago)
- Code accompanying the paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" (☆85, updated 3 years ago)
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ (☆98, updated 3 years ago)
- Automatic metrics for GEM tasks (☆67, updated 3 years ago)
- ☆47, updated last year
- ☆24, updated 4 years ago