McGill-NLP / bias-bench
ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models.
☆114 · Updated 11 months ago
Related projects:
- This repository contains the data and code introduced in the paper "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Maske… ☆97 · Updated 6 months ago
- [ACL 2020] Towards Debiasing Sentence Representations ☆59 · Updated last year
- Code and test data for "On Measuring Bias in Sentence Encoders", to appear at NAACL 2019. ☆53 · Updated 3 years ago
- Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" ☆63 · Updated 3 years ago
- Repository for the Bias Benchmark for QA dataset. ☆83 · Updated 8 months ago
- StereoSet: Measuring stereotypical bias in pretrained language models ☆165 · Updated last year
- [ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models ☆58 · Updated last year
- Codebase, data, and models for the SummaC paper in TACL ☆80 · Updated 8 months ago
- This repository contains the code for "Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP". ☆85 · Updated 3 years ago
- Code for Editing Factual Knowledge in Language Models ☆134 · Updated 2 years ago
- Code associated with the ACL 2021 DExperts paper ☆109 · Updated last year
- Repository for research in the field of Responsible NLP at Meta. ☆180 · Updated last month
- Data for evaluating gender bias in coreference resolution systems. ☆65 · Updated 5 years ago
- Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (h… ☆80 · Updated 3 years ago
- EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975 ☆37 · Updated 9 months ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆67 · Updated last week
- Framework for controlling demographic biases in NLG (using adversarial prompts) ☆19 · Updated last year
- This repository accompanies our paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" ☆83 · Updated 2 years ago
- UnQovering Stereotyping Biases via Underspecified Questions - EMNLP 2020 (Findings) ☆19 · Updated 3 years ago
- [EMNLP 2022] TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models ☆65 · Updated 4 months ago