McGill-NLP / instruct-qa
Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering"
☆78 · Updated 2 months ago
Related projects
Alternatives and complementary repositories for instruct-qa
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆122 · Updated 7 months ago
- The corresponding code from our paper "REFINER: Reasoning Feedback on Intermediate Representations" (EACL 2024). Do not hesitate t… ☆66 · Updated 8 months ago
- Token-level Reference-free Hallucination Detection ☆92 · Updated last year
- Companion code for FanOutQA: Multi-Hop, Multi-Document Question Answering for Large Language Models (ACL 2024) ☆35 · Updated 3 weeks ago
- Code for the arXiv paper "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆58 · Updated 6 months ago
- Code, datasets, and models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆52 · Updated last year
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆56 · Updated 2 weeks ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆39 · Updated 4 months ago
- Code, datasets, and checkpoints for the paper "Improving Passage Retrieval with Zero-Shot Question Generation" (EMNLP 2022) ☆93 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆107 · Updated last year
- Detect hallucinated tokens for conditional sequence generation ☆63 · Updated 2 years ago
- First explanation metric (diagnostic report) for text generation evaluation ☆60 · Updated 3 months ago
- Code and data accompanying the paper "TRUE: Re-evaluating Factual Consistency Evaluation" ☆71 · Updated 2 weeks ago
- [ACL 2024] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆48 · Updated 8 months ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆114 · Updated last month
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆84 · Updated 3 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆54 · Updated 10 months ago
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation ☆71 · Updated 10 months ago
- Repo for "On Learning to Summarize with Large Language Models as References" ☆42 · Updated last year
- Apps built using Inspired Cognition's Critique ☆58 · Updated last year
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆36 · Updated last month
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… ☆28 · Updated 4 months ago
- A Human-LLM Collaborative Dataset for Generative Information-Seeking with Attribution ☆30 · Updated last year
- Dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" (EMNLP 2023) ☆39 · Updated 10 months ago