technion-cs-nlp / hallucination-mitigation
☆22 · Updated 4 months ago
Alternatives and similar repositories for hallucination-mitigation
Users interested in hallucination-mitigation are comparing it to the repositories listed below.
- Evaluate the Quality of Critique ☆35 · Updated 11 months ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆39 · Updated 2 years ago
- AbstainQA, ACL 2024 ☆25 · Updated 7 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- ☆44 · Updated 8 months ago
- ☆69 · Updated last year
- ☆29 · Updated 4 months ago
- ACL24 ☆9 · Updated 11 months ago
- [arXiv preprint] Official Repository for "Evaluating Language Models as Synthetic Data Generators" ☆33 · Updated 5 months ago
- Official codebase for permutation self-consistency. ☆18 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆61 · Updated 10 months ago
- ☆14 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆21 · Updated 8 months ago
- ☆28 · Updated last year
- Benchmarking Benchmark Leakage in Large Language Models ☆51 · Updated 11 months ago
- ☆41 · Updated last year
- Generating diverse counterfactual data for Natural Language Understanding tasks using Large Language Models (LLMs). The generator support… ☆36 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 7 months ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆44 · Updated 10 months ago
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆57 · Updated 5 months ago
- Official Repository of "Are Your LLMs Capable of Stable Reasoning?" ☆25 · Updated last month
- Tasks for describing differences between text distributions. ☆16 · Updated 9 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… ☆32 · Updated 11 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆46 · Updated 5 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated last year
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆35 · Updated 3 months ago
- ☆42 · Updated last month
- The source code for running LLMs on the AAAR-1.0 benchmark. ☆16 · Updated last month