MilaNLProc / honestLinks
A Python package to compute HONEST, a score to measure hurtful sentence completions in language models. Published at NAACL 2021.
☆20Updated 7 months ago
Alternatives and similar repositories for honest
Users that are interested in honest are comparing it to the libraries listed below
Sorting:
- StereoSet: Measuring stereotypical bias in pretrained language models☆192Updated 2 years ago
- Repository for research in the field of Responsible NLP at Meta.☆202Updated 6 months ago
- This repository contains the code for "Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP".☆88Updated 4 years ago
- A repository with several curated datasets of counter-narratives to fight online hate speech.☆93Updated 4 months ago
- A reading list of up-to-date papers on NLP for Social Good.☆304Updated 2 years ago
- ☆90Updated 3 years ago
- To analyze and remove gender bias in coreference resolution systems☆79Updated 6 months ago
- A curated list of awesome datasets with human label variation (un-aggregated labels) in Natural Language Processing and Computer Vision, …☆94Updated last year
- ☆41Updated 2 years ago
- Data for evaluating gender bias in coreference resolution systems.☆81Updated 6 years ago
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models.☆150Updated 3 months ago
- Code for the paper "Measuring Bias in Contextualized Word Representations"☆35Updated 6 years ago
- Dataset + classifier tools to study social perception biases in natural language generation☆70Updated 2 years ago
- A Python library that encapsulates various methods for neuron interpretation and analysis in Deep NLP models.☆106Updated 2 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists"☆50Updated 3 years ago
- Code and Data for Evaluation WG☆42Updated 3 years ago
- A Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation, Levy et al., Findings of EMNLP 2021☆14Updated 3 years ago
- Code to reproduce data for Bias in Bios☆47Updated 2 years ago
- Röttger et al. (ACL 2021): "HateCheck: Functional Tests for Hate Speech Detection Models" - Data☆59Updated last month
- Code for our WOAH@ACL 2021 Paper on Data Integration for Toxic Comment Classification: Making More Than 40 Datasets Easily Accessible in …☆29Updated 3 years ago
- This repository holds the code for my master thesis entitles "The Association of Gender Bias with BERT - Measuring, Mitigating and Cross-…