microsoft / CoNLI_hallucination
CoNLI: a plug-and-play framework for ungrounded hallucination detection and reduction
☆31 · Updated last year
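For context, "ungrounded hallucination detection" of this kind typically runs a natural language inference (NLI) model over each generated sentence with the source text as the premise, flagging sentences the source does not entail. Below is a minimal sketch of that general idea only; it is not CoNLI's actual API, and the model checkpoint, threshold, and function names are illustrative assumptions.

```python
# Hypothetical sketch of sentence-level NLI-based grounding detection.
# NOT the CoNLI implementation; model choice and threshold are assumptions.
from transformers import pipeline

# Any off-the-shelf NLI checkpoint can serve as the entailment judge.
nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

def flag_ungrounded(source: str, sentences: list[str], threshold: float = 0.5):
    """Flag response sentences whose entailment by the source falls below threshold."""
    flagged = []
    for sent in sentences:
        # Premise = grounding source, hypothesis = one generated sentence.
        scores = nli({"text": source, "text_pair": sent}, top_k=None)
        entail = next(s["score"] for s in scores if s["label"].upper() == "ENTAILMENT")
        if entail < threshold:
            flagged.append((sent, round(entail, 3)))
    return flagged

source = "The meeting was moved to Friday at 3 pm."
response = ["The meeting is on Friday.", "It takes place in Building 7."]
print(flag_ungrounded(source, response))  # the second sentence should be flagged
```

Flagged sentences can then be dropped or rewritten, which is the "reduction" half of the framework's detect-and-reduce loop.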
Alternatives and similar repositories for CoNLI_hallucination
Users interested in CoNLI_hallucination are comparing it to the libraries listed below.
- Contrastive Chain-of-Thought Prompting ☆64 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆58 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆85 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆81 · Updated 11 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆145 · Updated 8 months ago
- "TIGERScore: Towards Building Explainable Metric for All Text Generation Tasks" [TMLR 2024] ☆31 · Updated 6 months ago
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆114 · Updated last year
- Code and data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆85 · Updated 11 months ago
- ☆72 · Updated last year
- [ACL'24] Code and data of the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆106 · Updated 5 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al., COLM 2024) ☆47 · Updated 5 months ago
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated last year
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆53 · Updated 10 months ago
- Code and demo program for LLM self-verification ☆60 · Updated last year
- Code for "Democratizing Reasoning Ability: Tailored Learning from Large Language Model", EMNLP 2023☆35Updated last year
- ☆44Updated 10 months ago
- PASTA: Post-hoc Attention Steering for LLMs☆121Updated 7 months ago
- Code, datasets, models for the paper "Automatic Evaluation of Attribution by Large Language Models"☆56Updated 2 years ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages☆48Updated 7 months ago
- Lightweight tool to identify data contamination in LLM evaluation ☆51 · Updated last year
- Code for "Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model", EMNLP Findings 20…☆28Updated last year
- 🌲 Code for our EMNLP 2023 paper 🎄 "Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models" ☆50 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Repo for Llatrieval ☆30 · Updated 10 months ago
- ☆46 · Updated 11 months ago
- Code and data accompanying our paper on arXiv, "Faithful Chain-of-Thought Reasoning" ☆161 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆74 · Updated last month
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year