McGill-NLP / instruct-qa
Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering"
☆84 · Updated 9 months ago
Alternatives and similar repositories for instruct-qa
Users interested in instruct-qa are comparing it to the libraries listed below.
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆128 · Updated last year
- Code, datasets, and models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆56 · Updated last year
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023) ☆64 · Updated last year
- [arXiv preprint] Official repository for "Evaluating Language Models as Synthetic Data Generators" ☆33 · Updated 5 months ago
- Code and data accompanying the paper "TRUE: Re-evaluating Factual Consistency Evaluation" ☆80 · Updated 2 months ago
- ☆69 · Updated last year
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆44 · Updated 10 months ago
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… ☆32 · Updated 11 months ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year
- Code and data for the paper "Context-faithful Prompting for Large Language Models".