yuxiaw / Factcheck-GPT
Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation.
☆106 · Updated last year
Alternatives and similar repositories for Factcheck-GPT
Users interested in Factcheck-GPT are comparing it to the repositories listed below.
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆135 · Updated last year
- [IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection ☆89 · Updated last year
- RARR: Researching and Revising What Language Models Say, Using Language Models ☆49 · Updated 2 years ago
- Companion code for FanOutQA: Multi-Hop, Multi-Document Question Answering for Large Language Models (ACL 2024) ☆55 · Updated last month
- [Preprint] Learning to Filter Context for Retrieval-Augmented Generation ☆198 · Updated last year
- Code, datasets, and models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆56 · Updated 2 years ago
- Github repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models"☆205Updated 10 months ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆165 · Updated 2 years ago
- [ACL 2023] AlignScore, a metric for factual consistency evaluation ☆138 · Updated last year
- Token-level Reference-free Hallucination Detection ☆96 · Updated 2 years ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆77 · Updated last year
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆223 · Updated 11 months ago
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆169 · Updated last month
- Code and data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆86 · Updated last year
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆154 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆48 · Updated last year
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- Code for the arXiv paper "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆60 · Updated 9 months ago
- Repository for the paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆204 · Updated 10 months ago
- A Human-LLM Collaborative Dataset for Generative Information-Seeking with Attribution ☆35 · Updated 2 years ago
- Dense X Retrieval: What Retrieval Granularity Should We Use? ☆163 · Updated last year
- Official code for the TACL 2021 paper "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies" ☆81 · Updated 3 years ago
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval ☆51 · Updated 4 months ago