intuit / sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
☆38 · Updated 11 months ago
Alternatives and similar repositories for sac3
Users interested in sac3 are comparing it to the libraries listed below:
- Code, datasets, models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆56 · Updated 2 years ago
- Code and data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆87 · Updated last year
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning" ☆165 · Updated last year
- RARR: Researching and Revising What Language Models Say, Using Language Models ☆49 · Updated 2 years ago
- Token-level Reference-free Hallucination Detection ☆97 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- [ICLR 2023] Code for the paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆109 · Updated 2 years ago
- The LM Contamination Index, a manually curated database of contamination evidence for LMs ☆81 · Updated last year
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ☆26 · Updated 9 months ago
- ☆82 · Updated 3 weeks ago
- Inspecting and Editing Knowledge Representations in Language Models ☆119 · Updated 2 years ago
- ☆116 · Updated last year
- Repository for Decomposed Prompting ☆95 · Updated 2 years ago
- Implementation of the paper "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆76 · Updated last year
- [ICML 2023] Code for the paper "Compositional Exemplars for In-context Learning" ☆102 · Updated 2 years ago
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆156 · Updated 2 years ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆136 · Updated last year
- ☆189 · Updated 5 months ago
- Code for the arXiv paper "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆61 · Updated 11 months ago
- A Survey of Hallucination in Large Foundation Models ☆55 · Updated last year
- Synthetic question-answering dataset for formally analyzing the chain-of-thought output of large language models on a reasoning task ☆154 · Updated 3 months ago
- [ACL 2023] AlignScore, a metric for factual consistency evaluation ☆148 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- [NeurIPS 2023] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated last year
- Code and data accompanying the paper "TRUE: Re-evaluating Factual Consistency Evaluation" ☆82 · Updated 2 weeks ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated 2 years ago
- ☆88 · Updated 2 years ago
- Official code for the TACL 2021 paper "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies" ☆81 · Updated 3 years ago
- ☆47 · Updated last year
- Grade-School Math with Irrelevant Context (GSM-IC), an arithmetic reasoning benchmark built upon GSM8K by adding irrelevant se… ☆65 · Updated 2 years ago