mismayil / crow · Links
Benchmarking Commonsense Reasoning in Real-World Tasks
☆12 · Updated last year
Alternatives and similar repositories for crow
Users interested in crow are comparing it to the repositories listed below.
- [EMNLP 2022] TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models ☆73 · Updated last year
- This repository contains the dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" (EMNLP 2023). ☆42 · Updated last year
- We construct and introduce DIALFACT, a crowd-annotated testing benchmark dataset of conversational claims, paired with pieces of evidence fr… ☆42 · Updated 2 years ago
- Mutual Information Predicts Hallucinations in Abstractive Summarization ☆12 · Updated 2 years ago
- ☆13 · Updated last year
- The official implementation of "Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks" (NAACL 2022). ☆44 · Updated 2 years ago
- Code and models for the paper "Questions Are All You Need to Train a Dense Passage Retriever" (TACL 2023) ☆63 · Updated 2 years ago
- Code base of In-Context Learning for Dialogue State Tracking ☆45 · Updated last year
- First explanation metric (diagnostic report) for text generation evaluation ☆62 · Updated 6 months ago
- Continue pretraining T5 on a custom dataset based on available pretrained model checkpoints ☆38 · Updated 4 years ago
- This code accompanies the paper "DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering." ☆16 · Updated 2 years ago
- ☆22 · Updated 3 years ago
- ☆33 · Updated 5 months ago
- Token-level Reference-free Hallucination Detection ☆96 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- Findings of ACL 2023: Optimizing Test-Time Query Representations for Dense Retrieval ☆30 · Updated last year
- ☆27 · Updated 2 years ago
- INSCIT: Information-Seeking Conversations with Mixed-Initiative Interactions ☆16 · Updated 7 months ago
- ☆75 · Updated last year
- ☆51 · Updated 2 years ago
- FRANK: Factuality Evaluation Benchmark ☆58 · Updated 2 years ago
- ☆39 · Updated 2 years ago
- The project page for "SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables"