i-gallegos / Fair-LLM-Benchmark
☆124 · Updated last year
Alternatives and similar repositories for Fair-LLM-Benchmark:
Users who are interested in Fair-LLM-Benchmark are comparing it to the repositories listed below.
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models. ☆131 · Updated 2 months ago
- Repository for the Bias Benchmark for QA dataset. ☆100 · Updated last year
- ☆47 · Updated last year
- This repository contains the data and code introduced in the paper "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models" (see the scoring sketch after this list). ☆111 · Updated 11 months ago
- [ACL 2020] Towards Debiasing Sentence Representations ☆64 · Updated 2 years ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆79 · Updated 5 months ago
- Dataset associated with the "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" paper ☆70 · Updated 3 years ago
- ☆25 · Updated 4 months ago
- [NeurIPS 2023 D&B Track] Code and data for the paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations" ☆31 · Updated last year
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. ☆77 · Updated 9 months ago
- ☆15 · Updated last year
- ☆104 · Updated 9 months ago
- ☆37 · Updated last year
- Code and test data for "On Measuring Bias in Sentence Encoders" (NAACL 2019). ☆54 · Updated 3 years ago
- tianlu-wang / Identifying-and-Mitigating-Spurious-Correlations-for-Improving-Robustness-in-NLP-Models (NAACL 2022 Findings) ☆15 · Updated 2 years ago
- Repository for research in the field of Responsible NLP at Meta. ☆194 · Updated 2 months ago
- ☆29 · Updated 2 years ago
- [ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models ☆60 · Updated 2 years ago
- A resource repository for representation engineering in large language models ☆102 · Updated 3 months ago
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages (https://arxiv.org/abs/2310.19156) ☆29 · Updated last year
- A framework for assessing and improving classification fairness. ☆34 · Updated last year
- ☆154 · Updated 8 months ago
- ☆41 · Updated last year
- StereoSet: Measuring stereotypical bias in pretrained language models ☆172 · Updated 2 years ago
- ☆22 · Updated 4 months ago
- ☆38 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆84 · Updated last week
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆84 · Updated 5 months ago
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory" ☆39 · Updated last year
- UnQovering Stereotyping Biases via Underspecified Questions (EMNLP 2020 Findings) ☆20 · Updated 3 years ago
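
Several of the repositories above (CrowS-Pairs, StereoSet, UnQover) share the same core measurement idea: compare how strongly a model prefers a stereotyping sentence over a minimally edited counterpart. The sketch below is illustrative only and is not taken from any of the listed repositories; for simplicity it scores sentences with a causal LM's log-likelihood, whereas CrowS-Pairs and StereoSet use masked-LM pseudo-likelihoods. The model name and the example sentence pair are placeholders, and it assumes the Hugging Face `transformers` and `torch` packages.

```python
# Minimal sketch of paired-sentence bias scoring (illustrative, not an official implementation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any small causal LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Total log-probability the model assigns to a sentence (higher = more preferred)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # `loss` is the mean negative log-likelihood over the predicted positions;
    # multiply by the number of predicted tokens to recover the summed log-likelihood.
    n_predicted = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * n_predicted

# Hypothetical minimally different pair (placeholder text, not from any benchmark).
stereotyping = "The nurse said she would be late."
anti_stereotyping = "The nurse said he would be late."

gap = sentence_log_likelihood(stereotyping) - sentence_log_likelihood(anti_stereotyping)
print(f"Log-likelihood gap (positive = stereotyping sentence preferred): {gap:.3f}")
```

A positive gap means the model assigns higher likelihood to the stereotyping sentence; benchmarks in this family typically report the fraction of pairs for which that happens, with 50% corresponding to no measured preference.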