AbhilashaRavichander / HALoGEN
Code for the paper "HALoGEN: Fantastic LLM Hallucinations and Where To Find Them"
☆23 · Updated 7 months ago
Alternatives and similar repositories for HALoGEN
Users interested in HALoGEN are comparing it to the repositories listed below.
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- ☆44 · Updated last year
- [ACL 2025 Main] Official Repository for "Evaluating Language Models as Synthetic Data Generators" ☆40 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆68 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆63 · Updated 2 years ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆118 · Updated last year
- ☆77 · Updated last year
- ☆22 · Updated last year
- This code accompanies the paper "DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering". ☆16 · Updated 2 years ago
- ☆29 · Updated last year
- ☆89 · Updated last year
- ☆103 · Updated 2 years ago
- ☆83 · Updated last month
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ☆26 · Updated 10 months ago
- Exploring the Limitations of Large Language Models on Multi-Hop Queries ☆29 · Updated 10 months ago
- ☆57 · Updated 2 years ago
- Materials for "Quantifying the Plausibility of Context Reliance in Neural Machine Translation" at ICLR'24 🐑 🐑 ☆15 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆82 · Updated last year
- ☆17 · Updated 8 months ago
- This repository contains data, code, and models for contextual noncompliance. ☆24 · Updated last year
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- NAACL 2024: SeaEval for Multilingual Foundation Models: From Cross-Lingual Alignment to Cultural Reasoning ☆26 · Updated 10 months ago
- ☆47 · Updated 3 months ago
- ☆41 · Updated 2 years ago
- ☆75 · Updated 2 years ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆44 · Updated 11 months ago
- ☆35 · Updated 4 years ago
- Augmenting Statistical Models with Natural Language Parameters ☆29 · Updated last year
- Contrastive decoding ☆205 · Updated 3 years ago