google-research-datasets / seegull
SeeGULL is a broad-coverage stereotype dataset in English, containing stereotypes about identity groups spanning 178 countries in 8 geo-political regions across 6 continents, as well as state-level identities within the US and India.
☆37 · Updated 2 years ago
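As a quick orientation for the dataset itself, here is a minimal sketch of loading SeeGULL with pandas. The file name `seegull.tsv` and the tab-separated layout are assumptions for illustration, not confirmed from the repo; check the repository for the actual file names and column schema.

```python
# Minimal sketch: load SeeGULL stereotype annotations with pandas.
# NOTE: "seegull.tsv" and the TSV layout are assumptions for illustration;
# consult the repository for the real file names and column schema.
import pandas as pd

df = pd.read_csv("seegull.tsv", sep="\t")

# Inspect which identity groups and attribute terms are covered.
print(df.columns.tolist())
print(df.head())
```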
Alternatives and similar repositories for seegull
Users interested in seegull are comparing it to the libraries listed below:
- Resources for cultural NLP research (☆110, updated 2 months ago)
- Code for the Multilingual Evaluation of Generative AI paper published at EMNLP 2023 (☆71, updated last year)
- Repository for research in the field of Responsible NLP at Meta (☆204, updated 7 months ago)
- A collaborative project to collect datasets in SEA languages, SEA regions, or SEA cultures (☆93, updated 10 months ago)
- Code and resources for the paper "Better to Ask in English: Cross-Lingual Evaluation of Large Language Models for Healthcare Queries" (☆19, updated last year)
- A curated list of research papers and resources on cultural LLMs (☆52, updated last year)
- Minimum Bayes Risk Decoding for Hugging Face Transformers (☆60, updated last year); see the MBR sketch after this list
- StereoSet: Measuring stereotypical bias in pretrained language models (☆194, updated 3 years ago); see the likelihood-probe sketch after this list
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback (☆97, updated 2 years ago)
- Code repository for "Introducing Airavata: Hindi Instruction-tuned LLM" (☆61, updated last year)
- Detecting Bias and ensuring Fairness in AI solutions (☆102, updated 2 years ago)
- TimeLMs: Diachronic Language Models from Twitter (☆111, updated last year)
- A curated list of awesome datasets with human label variation (un-aggregated labels) in Natural Language Processing and Computer Vision, … (☆97, updated last year)
- A reading list of up-to-date papers on NLP for Social Good (☆304, updated 2 years ago)
- Interpretability for sequence generation models 🐛 🔍 (☆449, updated last week)
- A Python package to compute HONEST, a score to measure hurtful sentence completions in language models, published at NAACL 2021 (☆20, updated 8 months ago)
- Ensembling Hugging Face transformers made easy (☆61, updated 2 years ago)
- Tools for evaluating the performance of MT metrics on data from recent WMT metrics shared tasks (☆121, updated 2 months ago)
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers (☆135, updated last year)
- IndicGenBench is a high-quality, multilingual, multi-way parallel benchmark for evaluating Large Language Models (LLMs) on 4 user-facing … (☆56, updated last year)
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models (☆180, updated 3 years ago)
- Experiments with representation engineering (☆13, updated last year)
- FBI: Finding Blindspots in LLM Evaluations with Interpretable Checklists (☆30, updated 4 months ago)
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings (☆75, updated last year)
- Efficiently find the best-suited language model (LM) for your NLP task (☆132, updated 4 months ago)
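For the Minimum Bayes Risk entry above, a generic MBR decoding sketch may help make the idea concrete. This is not the linked library's API: it samples candidates from an arbitrary Hugging Face seq2seq model and keeps the one with the highest average chrF similarity to the other samples, approximating each candidate's expected utility under the model distribution. The model name and sampling settings are placeholders.

```python
# Generic Minimum Bayes Risk decoding sketch (not the linked library's API):
# sample N candidates, score each against all other samples with chrF,
# and return the candidate with the highest expected utility.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from sacrebleu.metrics import CHRF

model_name = "google/flan-t5-small"  # placeholder: any seq2seq model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
chrf = CHRF()

inputs = tokenizer("Translate to German: The weather is nice.", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, num_return_sequences=16,
                         top_p=0.9, max_new_tokens=40)
candidates = tokenizer.batch_decode(outputs, skip_special_tokens=True)

def expected_utility(hyp, others):
    # Average chrF of the hypothesis against every other sample.
    return sum(chrf.sentence_score(hyp, [o]).score for o in others) / len(others)

best = max(candidates,
           key=lambda h: expected_utility(h, [c for c in candidates if c is not h]))
print(best)
```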
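Similarly, for the StereoSet entry, the core measurement idea can be sketched as a likelihood comparison, here with GPT-2 rather than StereoSet's own scoring code; the sentence pair below is a made-up example, and real use would aggregate over the benchmark's annotated triples.

```python
# Illustrative StereoSet-style probe (not the benchmark's own scorer):
# compare a causal LM's total log-likelihood for a stereotypical vs. an
# anti-stereotypical sentence; a systematic gap over many pairs signals bias.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def total_log_likelihood(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # The model shifts labels internally; .loss is mean NLL per token.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)  # undo the mean over predicted tokens

pair = {
    "stereotype": "The engineer fixed the bug because he was brilliant.",
    "anti_stereotype": "The engineer fixed the bug because she was brilliant.",
}
print({k: round(total_log_likelihood(v), 2) for k, v in pair.items()})
```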