salesforce / factualNLG
Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond"
☆59 · Updated 6 months ago
Alternatives and similar repositories for factualNLG
Users who are interested in factualNLG are comparing it to the repositories listed below.
- A unified benchmark for math reasoning ☆88 · Updated 2 years ago
- Token-level Reference-free Hallucination Detection ☆96 · Updated 2 years ago
- Code, datasets, and models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆56 · Updated 2 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆41 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆77 · Updated 2 years ago
- Repository for Decomposed Prompting ☆93 · Updated last year
- Repo for "On Learning to Summarize with Large Language Models as References" ☆43 · Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 11 months ago
- ☆48 · Updated last year
- Official code for the TACL 2021 paper "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆77 · Updated 2 years ago
- ☆82 · Updated 2 years ago
- Companion repo for "Evaluating Verifiability in Generative Search Engines". ☆83 · Updated 2 years ago
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- The corresponding code from our paper "REFINER: Reasoning Feedback on Intermediate Representations" (EACL 2024). ☆70 · Updated last year
- Code and data accompanying the paper "TRUE: Re-evaluating Factual Consistency Evaluation". ☆81 · Updated last month
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆132 · Updated last year
- Tk-Instruct is a Transformer model tuned to solve many NLP tasks by following instructions. ☆181 · Updated 2 years ago
- ☆138 · Updated 7 months ago
- Code and data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆86 · Updated last year
- Retrieval as Attention ☆84 · Updated 2 years ago
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning". ☆162 · Updated last year
- Dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" (EMNLP 2023). ☆42 · Updated last year
- ☆48 · Updated 2 years ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated last month
- Code for the ICLR paper https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- Supporting code for the ReCEval paper ☆29 · Updated 11 months ago
- [EMNLP 2022] Training Language Models with Memory Augmentation (https://arxiv.org/abs/2205.12674) ☆197 · Updated 2 years ago
- Source code and datasets for "How well do Large Language Models perform in Arithmetic tasks?" ☆57 · Updated 2 years ago
- Code for Editing Factual Knowledge in Language Models ☆139 · Updated 3 years ago
- Code for the ACL 2023 paper "Pre-Training to Learn in Context" ☆107 · Updated last year