technion-cs-nlp / hallucination-mitigation
☆22 · Updated 6 months ago
Alternatives and similar repositories for hallucination-mitigation
Users interested in hallucination-mitigation are comparing it to the repositories listed below.
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Evaluate the Quality of Critique ☆36 · Updated last year
- ☆72 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆50 · Updated last month
- [arXiv preprint] Official Repository for "Evaluating Language Models as Synthetic Data Generators" ☆33 · Updated 7 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- AbstainQA, ACL 2024 ☆27 · Updated 9 months ago
- The instructions and demonstrations for building a formal logical reasoning capable GLM ☆53 · Updated 10 months ago
- Contrastive Chain-of-Thought Prompting ☆64 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆29 · Updated last year
- ☆14 · Updated last year
- Resolving Knowledge Conflicts in Large Language Models, COLM 2024 ☆17 · Updated last month
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆45 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆85 · Updated last year
- ☆54 · Updated last year
- Code and data for paper "Context-faithful Prompting for Large Language Models". ☆40 · Updated 2 years ago
- This repository contains data, code and models for contextual noncompliance. ☆23 · Updated 11 months ago
- This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses". ☆30 · Updated 10 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆62 · Updated last year
- ☆30 · Updated 6 months ago
- Benchmarking Benchmark Leakage in Large Language Models ☆54 · Updated last year
- Generating diverse counterfactual data for Natural Language Understanding tasks using Large Language Models (LLMs). The generator support… ☆37 · Updated last year
- ☆45 · Updated 3 months ago
- [EMNLP 2024] A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners ☆24 · Updated 7 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆47 · Updated 5 months ago
- ☆25 · Updated last year
- Code/data for MARG (multi-agent review generation) ☆44 · Updated 8 months ago