tianlu-wang / Identifying-and-Mitigating-Spurious-Correlations-for-Improving-Robustness-in-NLP-Models
NAACL 2022 Findings
☆15 · Updated 3 years ago
Alternatives and similar repositories for Identifying-and-Mitigating-Spurious-Correlations-for-Improving-Robustness-in-NLP-Models
Users interested in Identifying-and-Mitigating-Spurious-Correlations-for-Improving-Robustness-in-NLP-Models are comparing it to the repositories listed below.
- [ACL 2020] Towards Debiasing Sentence Representations ☆66 · Updated 2 years ago
- ☆89 · Updated 3 years ago
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models ☆137 · Updated 5 months ago
- ☆31 · Updated 3 years ago
- ☆11 · Updated 3 years ago
- Code and test data for "On Measuring Bias in Sentence Encoders" (NAACL 2019) ☆55 · Updated 4 years ago
- ☆26 · Updated 2 years ago
- A codebase for the ACL 2023 paper "Mitigating Label Biases for In-context Learning" ☆10 · Updated last year
- Implementation for https://arxiv.org/abs/2005.00652 ☆28 · Updated 2 years ago
- ☆133 · Updated last year
- ☆26 · Updated 4 years ago
- Code for the paper "Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias" ☆77 · Updated 3 years ago
- Code for the paper "Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers" ☆17 · Updated 4 years ago
- ☆9 · Updated last year
- ☆24 · Updated 8 months ago
- ☆17 · Updated 4 years ago
- Contextualized Perturbation for Textual Adversarial Attack (NAACL 2021) ☆43 · Updated 3 years ago
- [ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models ☆61 · Updated 2 years ago
- UnQovering Stereotyping Biases via Underspecified Questions (EMNLP 2020 Findings) ☆22 · Updated 3 years ago
- Probing for Labeled Dependency Trees (ACL 2022) + Sorting LMs by Structure (NAACL 2022) ☆8 · Updated 11 months ago
- ☆50 · Updated last year
- ☆30 · Updated 4 years ago
- ☆44 · Updated last year
- ☆27 · Updated last year
- ☆11 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- ☆10 · Updated last year
- Code for the paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?" ☆21 · Updated 4 years ago
- Dataset from the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" ☆80 · Updated 4 years ago
- A framework for assessing and improving classification fairness ☆33 · Updated last year