microsoft / CoNLI_hallucination
CoNLI: a plug-and-play framework for ungrounded hallucination detection and reduction
☆33 · Updated 2 years ago
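CoNLI (Chain of Natural Language Inference) detects ungrounded hallucinations by checking whether each generated claim is supported by the source text. As a rough illustration of that idea only (not the repository's actual API), the sketch below scores a single claim against a source passage with an off-the-shelf MNLI model from Hugging Face; the model choice, the entailment threshold, and the `is_ungrounded` helper are illustrative assumptions.

```python
# Minimal sketch (not CoNLI's API): flag a generated claim as ungrounded when an
# off-the-shelf NLI model does not find it entailed by the source text.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # illustrative choice of NLI model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def is_ungrounded(source: str, claim: str, threshold: float = 0.5) -> bool:
    """Return True if `claim` is not entailed by `source` with enough confidence."""
    inputs = tokenizer(source, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    # Map probabilities to label names via the model config (CONTRADICTION, NEUTRAL, ENTAILMENT).
    label_probs = {model.config.id2label[i].upper(): p.item() for i, p in enumerate(probs)}
    return label_probs.get("ENTAILMENT", 0.0) < threshold

source = "The meeting was moved to Thursday at 3 pm."
claim = "The meeting will take place on Friday morning."
print(is_ungrounded(source, claim))  # True: the claim is not supported by the source
```

CoNLI itself chains such checks at the sentence and entity level and then rewrites the flagged spans rather than only flagging them; see the repository for the full pipeline.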
Alternatives and similar repositories for CoNLI_hallucination
Users interested in CoNLI_hallucination are comparing it to the libraries listed below.
- ☆75 · Updated last year
- Contrastive Chain-of-Thought Prompting ☆68 · Updated 2 years ago
- 🌲 Code for our EMNLP 2023 paper - 🎄 "Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Mode…" ☆54 · Updated 2 years ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't…" ☆129 · Updated last year
- [NeurIPS 2023] Codebase for the paper: "Guiding Large Language Models with Directional Stimulus Prompting" ☆113 · Updated 2 years ago
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆156 · Updated 2 years ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆90 · Updated 2 years ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated last year
- Code and data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆86 · Updated last year
- ☆51 · Updated last year
- Code, datasets, and models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆56 · Updated 2 years ago
- [IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection ☆90 · Updated last year
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆54 · Updated last year
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated 2 years ago
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models ☆86 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations ☆85 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆53 · Updated 6 months ago
- ☆52 · Updated 2 years ago
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆108 · Updated last year
- MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation ☆28 · Updated last year
- The corresponding code from our paper "REFINER: Reasoning Feedback on Intermediate Representations" (EACL 2024). Do not hesitate t… ☆74 · Updated 2 weeks ago
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆134 · Updated last year
- Official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies" ☆83 · Updated 3 years ago
- Lightweight tool to identify data contamination in LLM evaluation ☆53 · Updated last year
- Repo for Llatrieval ☆31 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆225 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆47 · Updated last year