StereoSet: Measuring stereotypical bias in pretrained language models
☆201 · Updated Dec 8, 2022
Alternatives and similar repositories for StereoSet
Users interested in StereoSet are comparing it to the libraries listed below.
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models. ☆155 · Updated Aug 18, 2025
- This repository contains the data and code introduced in the paper "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Maske…" ☆133 · Updated Mar 1, 2024
- Dataset associated with the "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" paper ☆87 · Updated Mar 2, 2021
- Code and test data for "On Measuring Bias in Sentence Encoders", to appear at NAACL 2019. ☆57 · Updated May 23, 2021
- EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975 ☆38 · Updated Dec 14, 2023
- Repository for the Bias Benchmark for QA dataset. ☆141 · Updated Jan 8, 2024
- ☆10 · Updated Jul 6, 2023
- A PyTorch implementation of the EMNLP 2020 paper "Mitigating Gender Bias for Neural Dialogue Generation with Adversarial Learning" ☆13 · Updated Feb 20, 2021
- Dataset and classifier tools to study social perception biases in natural language generation ☆72 · Updated Jun 12, 2023
- [ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models ☆61 · Updated Nov 2, 2022
- A Python package to compute HONEST, a score measuring hurtful sentence completions in language models. Published at NAACL 2021. ☆21 · Updated Apr 8, 2025
- Code for the paper "Measuring Bias in Contextualized Word Representations" ☆35 · Updated Jul 19, 2019
- Framework for controlling demographic biases in NLG (using adversarial prompts) ☆21 · Updated Jun 12, 2023
- Repository for research in the field of Responsible NLP at Meta. ☆207 · Updated Apr 18, 2026
- Papers on fairness in NLP ☆452 · Updated May 2, 2024
- Narrative Understanding Workshop paper (2021) on gender in GPT-3 generated stories ☆14 · Updated May 28, 2021
- To analyze and remove gender bias in coreference resolution systems ☆78 · Updated May 6, 2025
- ☆55 · Updated Apr 26, 2022
- ☆25 · Updated Feb 6, 2022
- Official implementation for the KDD'22 paper "Learning Fair Representation via Distributional Contrastive Disentanglement" ☆23 · Updated Jun 25, 2022
- Replication of the Word Embedding Association Test (WEAT), proposed in "Semantics derived automatically from language corpora necess…" ☆34 · Updated Aug 2, 2018
- UnQovering Stereotyping Biases via Underspecified Questions - EMNLP 2020 (Findings) ☆21 · Updated Jul 6, 2021
- Code accompanying the paper "Understanding Bias in Word Embeddings" ☆22 · Updated Dec 8, 2022
- ☆17 · Updated Mar 6, 2025
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated Jul 17, 2024
- Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging. arXiv, 2024. ☆16 · Updated Oct 28, 2024
- Butler is a tool for automated service management and task scheduling. ☆16 · Updated Apr 19, 2026
- This repository holds the code for my master's thesis, entitled "The Association of Gender Bias with BERT - Measuring, Mitigating and Cross-…" ☆18 · Updated Sep 19, 2022
- Code and data for Marked Personas (ACL 2023) ☆30 · Updated May 26, 2023
- Code to reproduce data for Bias in Bios ☆49 · Updated Jun 12, 2023
- ☆19 · Updated Jun 21, 2025
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆133 · Updated Feb 24, 2025
- Learning Gender-Neutral Word Embeddings ☆47 · Updated Oct 3, 2019
- ☆14 · Updated Jun 25, 2025
- ☆10 · Updated Sep 13, 2022
- Source code and data for ADEPT: A DEbiasing PrompT Framework (AAAI-23). ☆15 · Updated Dec 13, 2024
- [ICLR 2025] A Closer Look at Machine Unlearning for Large Language Models ☆48 · Updated Dec 4, 2024
- ☆23 · Updated Oct 30, 2023
- Official repo for the NeurIPS'24 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models" ☆19 · Updated Dec 16, 2024