tianlu-wang / Identifying-and-Mitigating-Spurious-Correlations-for-Improving-Robustness-in-NLP-Models
NAACL 2022 Findings
☆15 · Updated 2 years ago
Alternatives and similar repositories for Identifying-and-Mitigating-Spurious-Correlations-for-Improving-Robustness-in-NLP-Models: users interested in this repository are also comparing the libraries listed below.
- [ACL 2020] Towards Debiasing Sentence Representations ☆65 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- ☆26 · Updated 4 years ago
- ☆89 · Updated 2 years ago
- ☆22 · Updated 6 months ago
- A framework for assessing and improving classification fairness. ☆33 · Updated last year
- ☆11 · Updated 3 years ago
- ☆44 · Updated last year
- Implementation for https://arxiv.org/abs/2005.00652 ☆28 · Updated 2 years ago
- ☆17 · Updated 4 years ago
- ☆48 · Updated last year
- ☆30 · Updated 3 years ago
- Codebase for the ACL 2023 paper "Mitigating Label Biases for In-context Learning" ☆10 · Updated last year
- UnQovering Stereotyping Biases via Underspecified Questions (EMNLP 2020 Findings) ☆22 · Updated 3 years ago
- Code for the preprint "Summarizing Differences between Text Distributions with Natural Language" ☆42 · Updated 2 years ago
- ☆63 · Updated 4 years ago
- ☆26 · Updated last year
- Can Large Language Models Be an Alternative to Human Evaluations? ☆9 · Updated last year
- Source code for "Empowering Language Understanding with Counterfactual Reasoning" (ACL 2021) ☆11 · Updated 3 years ago
- Code for the paper "Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers" ☆17 · Updated 4 years ago
- ☆11 · Updated 3 years ago
- Code and test data for "On Measuring Bias in Sentence Encoders" (NAACL 2019) ☆54 · Updated 3 years ago
- ☆9 · Updated last year
- NAACL 2022: "Can Rationalization Improve Robustness?" https://arxiv.org/abs/2204.11790 ☆27 · Updated 2 years ago
- Code for the paper "Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias" ☆76 · Updated 3 years ago
- Code for the paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?" ☆21 · Updated 4 years ago
- ☆30 · Updated 2 years ago
- EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975 ☆37 · Updated last year
- ☆31 · Updated 11 months ago
- Constrained Decoding Project ☆17 · Updated last year