khuangaf / CONCRETE
Official implementation of "CONCRETE: Improving Cross-lingual Fact Checking with Cross-lingual Retrieval" (COLING'22)
☆16 · Updated 3 years ago
Alternatives and similar repositories for CONCRETE
Users interested in CONCRETE are comparing it to the repositories listed below.
- ☆23 · Updated 2 years ago
- Code and dataset for the paper "Generating Literal and Implied Subquestions to Fact-check Complex Claims" ☆29 · Updated 2 years ago
- ☆70 · Updated last year
- Code for the ACL 2023 paper "Fact-Checking Complex Claims with Program-Guided Reasoning" ☆57 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- A standardized, fair, and reproducible benchmark for evaluating event extraction approaches ☆57 · Updated 7 months ago
- Official implementation of the ACL 2023 paper "Zero-shot Faithful Factual Error Correction" ☆17 · Updated 2 years ago
- Code for the ACL 2023 paper "Fact-Checking Complex Claims with Program-Guided Reasoning" ☆31 · Updated 2 years ago
- Data and code for the EMNLP 2023 system demo paper "QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking" ☆19 · Updated last year
- ☆18 · Updated 5 years ago
- ☆16 · Updated last year
- ☆11 · Updated 2 years ago
- Codebase, data, and models for the SummaC paper in TACL ☆105 · Updated 10 months ago
- Templates and other documents regarding responsible NLP research ☆70 · Updated 2 years ago
- ☆45 · Updated 3 years ago
- Code for the paper "Open Domain Question Answering with A Unified Knowledge Interface" (ACL 2022) ☆56 · Updated 2 years ago
- Code for "Benchmarking the Generation of Fact Checking Explanations" ☆10 · Updated last year
- Data and models for the Misinfo Reaction Frames paper ☆14 · Updated last year
- Code for the paper "Graph Pre-training for AMR Parsing and Generation" (ACL 2022) ☆103 · Updated last year
- Dataset, metrics, and models for the TACL 2023 paper "MACSUM: Controllable Summarization with Mixed Attributes" ☆34 · Updated 2 years ago
- ☆15 · Updated 3 years ago
- Source code and dataset for the EMNLP 2022 paper "MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subev…" ☆89 · Updated 2 years ago
- ☆11 · Updated last year
- FRANK: Factuality Evaluation Benchmark ☆59 · Updated 3 years ago
- Data and code for "A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization" (ACL 2020) ☆48 · Updated 2 years ago
- Code for the EMNLP 2021 paper "CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization" ☆46 · Updated 3 years ago
- Code for the paper "CTRLEval: An Unsupervised Reference-Free Metric for Evaluating Controlled Text Generation" (ACL 2022) ☆33 · Updated 3 years ago
- WikiWhy is a benchmark for evaluating LLMs' ability to explain cause-effect relationships; it is a QA dataset containing 9000… ☆48 · Updated 2 years ago
- ☆146 · Updated 3 years ago
- ☆32 · Updated last year