jifan-chen / subquestions-for-fact-checking
Code and dataset for the paper: Generating Literal and Implied Subquestions to Fact-check Complex Claims
☆26 · Updated last year
Alternatives and similar repositories for subquestions-for-fact-checking
Users who are interested in subquestions-for-fact-checking are comparing it to the repositories listed below.
- ☆17 · Updated 4 years ago
- Code for the ACL 2023 paper "Fact-Checking Complex Claims with Program-Guided Reasoning" ☆31 · Updated last year
- Data and code for the EMNLP 2023 system demo paper "QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking" ☆19 · Updated last year
- Dataset, metrics, and models for the TACL 2023 paper "MACSUM: Controllable Summarization with Mixed Attributes" ☆34 · Updated last year
- Official implementation of the ACL 2023 paper "Zero-shot Faithful Factual Error Correction" ☆17 · Updated last year
- ☆58 · Updated 5 months ago
- Code for the ACL 2023 paper "Fact-Checking Complex Claims with Program-Guided Reasoning" ☆55 · Updated last year
- Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors (ACL 2023) ☆25 · Updated last year
- Codes for "Benchmarking the Generation of Fact Checking Explanations"☆10Updated 9 months ago
- ☆19Updated last year
- Official implementation of "CONCRETE: Improving Cross-lingual Fact Checking with Cross-lingual Retrieval" (COLING'22)☆17Updated 2 years ago
- This repository contains the dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" in EMNLP 2023.☆41Updated last year
- We construct and introduce DIALFACT, a testing benchmark dataset crowd-annotated conversational claims, paired with pieces of evidence fr…☆41Updated 2 years ago
- [APSIPA ASC 2023] The official code of paper, "FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Au…☆17Updated last year
- ☆26 · Updated 2 years ago
- ☆20 · Updated 5 months ago
- Code for the ACL-IJCNLP 2021 paper "Zero-shot Fact Verification by Claim Generation" ☆64 · Updated 3 years ago
- FRANK: Factuality Evaluation Benchmark ☆55 · Updated 2 years ago
- ☆48 · Updated 2 years ago
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ☆47 · Updated last year
- First explanation metric (diagnostic report) for text generation evaluation ☆61 · Updated 2 months ago
- ☆33 · Updated last year
- Data and models for the Misinfo Reaction Frames paper ☆14 · Updated 11 months ago
- ☆15 · Updated 2 years ago
- ☆11 · Updated 7 months ago
- Codebase, data, and models for the SummaC paper (TACL) ☆93 · Updated 3 months ago
- ☆38 · Updated last year
- Token-level Reference-free Hallucination Detection ☆94 · Updated last year
- Extracting Cultural Commonsense Knowledge at Scale (WWW 2023) ☆11 · Updated last year
- Implementation of the paper "FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations" (NAACL 2022) ☆47 · Updated last year