google-research-datasets / seegull
SeeGULL is a broad-coverage stereotype dataset in English containing stereotypes about identity groups spanning 178 countries in 8 geo-political regions across 6 continents, as well as state-level identities within the US and India.
☆36 · Updated 2 years ago
Alternatives and similar repositories for seegull
Users interested in seegull are comparing it to the libraries listed below.
- Resources for cultural NLP research ☆103 · Updated last week
- Repository for research in the field of Responsible NLP at Meta. ☆202 · Updated 4 months ago
- Code for the Multilingual Eval of Generative AI paper published at EMNLP 2023 ☆70 · Updated last year
- ☆65 · Updated 2 years ago
- Code and resources for the paper "Better to Ask in English: Cross-Lingual Evaluation of Large Language Models for Healthcare Queries" ☆17 · Updated last year
- StereoSet: Measuring stereotypical bias in pretrained language models ☆191 · Updated 2 years ago
- A curated list of research papers and resources on cultural LLMs. ☆49 · Updated last year
- A Python package for benchmarking interpretability techniques on Transformers. ☆213 · Updated last year
- A reading list of up-to-date papers on NLP for Social Good. ☆305 · Updated 2 years ago
- Interpretability for sequence generation models 🐛 🔍 ☆438 · Updated 2 weeks ago
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. ☆179 · Updated 3 years ago
- Efficiently find the best-suited language model (LM) for your NLP task ☆127 · Updated 2 months ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆133 · Updated last year
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention) ☆62 · Updated 3 years ago
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ☆97 · Updated 2 years ago
- ☆35 · Updated 2 months ago
- What's In My Big Data (WIMBD): a toolkit for analyzing large text datasets ☆224 · Updated 10 months ago
- FBI: Finding Blindspots in LLM Evaluations with Interpretable Checklists ☆29 · Updated last month
- Detecting bias and ensuring fairness in AI solutions ☆101 · Updated 2 years ago
- ☆37 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆95 · Updated 2 years ago
- An instruction-based benchmark for text improvements. ☆142 · Updated 2 years ago
- ☆41 · Updated 2 years ago
- Collection of NLP model explanations and accompanying analysis tools ☆144 · Updated 2 years ago
- A Python package to compute HONEST, a score to measure hurtful sentence completions in language models. Published at NAACL 2021. ☆20 · Updated 5 months ago
- Minimalist BERT implementation assignment for CS11-711 ☆83 · Updated 3 years ago
- Benchmarking Large Language Models ☆99 · Updated 3 months ago
- Machine learning models from Singapore's NLP research community ☆36 · Updated 2 years ago
- OpenNyAI is a mission aimed at developing open source software and datasets to catalyze the creation of AI-powered solutions to improve a… ☆41 · Updated last year
- A Python library that encapsulates various methods for neuron interpretation and analysis in Deep NLP models. ☆105 · Updated 2 years ago