minjechoi / SOCKET
The official repo for SocKET: Social Knowledge Evaluation Tests
☆24 · Updated 3 months ago
Alternatives and similar repositories for SOCKET
Users interested in SOCKET are comparing it to the repositories listed below.
- EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" (https://arxiv.org/abs/2210.14975) ☆38 · Updated last year
- Token-level Reference-free Hallucination Detection ☆96 · Updated 2 years ago
- Code and data accompanying the paper "TRUE: Re-evaluating Factual Consistency Evaluation" ☆81 · Updated last month
- Detect hallucinated tokens for conditional sequence generation ☆64 · Updated 3 years ago
- Code for "Editing Factual Knowledge in Language Models" ☆139 · Updated 3 years ago
- Dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" (EMNLP 2023) ☆42 · Updated last year
- ☆77 · Updated last year
- Data and code for the paper "The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems" ☆20 · Updated 2 years ago
- A curated list of research papers and resources on cultural LLMs ☆48 · Updated 11 months ago
- Codebase, data, and models for the SummaC paper in TACL ☆99 · Updated 7 months ago
- Website for the release of the TellMeWhy dataset for why-question answering ☆14 · Updated 2 years ago
- [EMNLP 2022] TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models ☆73 · Updated last year
- Code and datasets for our ACL 2023 paper on cognitive reframing of negative thoughts ☆64 · Updated last year
- Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Textual Style Transfer ☆35 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- First explanation metric (diagnostic report) for text generation evaluation ☆62 · Updated 5 months ago
- Templates and other documents regarding responsible NLP research ☆70 · Updated 2 years ago
- ☆58 · Updated 3 years ago
- FRANK: Factuality Evaluation Benchmark ☆58 · Updated 2 years ago
- TBC ☆27 · Updated 2 years ago
- Benchmarking Generalization to New Tasks from Natural Language Instructions ☆26 · Updated 4 years ago
- Official code for the TACL 2021 paper "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies" ☆77 · Updated 2 years ago
- ☆35 · Updated 3 years ago
- Code for "Benchmarking the Generation of Fact Checking Explanations" ☆10 · Updated last year
- ☆41 · Updated 2 years ago
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆112 · Updated 3 years ago
- The LM Contamination Index, a manually curated database of contamination evidence for LMs ☆79 · Updated last year
- Repository for the Bias Benchmark for QA dataset ☆127 · Updated last year
- Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" ☆79 · Updated 4 years ago
- [ACL 2020] Towards Debiasing Sentence Representations ☆66 · Updated 2 years ago