nyu-mll / BBQ
Repository for the Bias Benchmark for QA dataset.
☆129 · Updated last year
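For context, the BBQ repository distributes its examples as per-category JSONL files. The sketch below shows one way to read such a file; the path `data/Age.jsonl` and the field names (`context`, `question`, `ans0`–`ans2`, `label`) are assumptions based on the published dataset and should be checked against the repository's actual schema.

```python
# Minimal sketch: read one BBQ category file and print the first example.
# Assumed: the repo's data/ directory holds per-category JSONL files with
# fields such as "context", "question", "ans0"-"ans2", and "label".
# Verify these names against the repository before relying on them.
import json
from pathlib import Path


def load_bbq(path):
    """Yield one dict per non-empty line of a BBQ JSONL file."""
    with Path(path).open(encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)


if __name__ == "__main__":
    for ex in load_bbq("data/Age.jsonl"):  # hypothetical path inside the repo
        answers = [ex.get(f"ans{i}", "") for i in range(3)]
        print(ex.get("context", ""))
        print(ex.get("question", ""), answers, "gold:", ex.get("label"))
        break  # show only the first example
```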
Alternatives and similar repositories for BBQ
Users interested in BBQ are comparing it to the repositories listed below.
- Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" ☆82 · Updated 4 years ago
- This repository contains the data and code introduced in the paper "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Maske… ☆126 · Updated last year
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models. ☆150 · Updated 2 months ago
- ☆156 · Updated 2 years ago
- ☆116 · Updated last year
- Repo for the paper: Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge ☆14 · Updated last year
- ☆221 · Updated 4 years ago
- ☆57 · Updated 2 years ago
- UnQovering Stereotyping Biases via Underspecified Questions - EMNLP 2020 (Findings) ☆21 · Updated 4 years ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆116 · Updated 8 months ago
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ☆25 · Updated 8 months ago
- A package to evaluate the factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆400 · Updated 7 months ago
- ☆189 · Updated 4 months ago
- Repository for research in the field of Responsible NLP at Meta. ☆202 · Updated 6 months ago
- ☆28 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆60 · Updated last year
- ☆46 · Updated last month
- Token-level Reference-free Hallucination Detection ☆96 · Updated 2 years ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆84 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆107 · Updated last year
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆98 · Updated 2 years ago
- Data for evaluating gender bias in coreference resolution systems. ☆81 · Updated 6 years ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆164 · Updated 2 years ago
- Code and data accompanying the paper "TRUE: Re-evaluating Factual Consistency Evaluation". ☆81 · Updated 2 weeks ago
- ☆85 · Updated 10 months ago
- A Survey of Hallucination in Large Foundation Models ☆55 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆119 · Updated 2 years ago
- EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975 ☆38 · Updated last year
- This is the repository for HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆523 · Updated last year