katiekang1998 / llm_hallucinations
☆17 · Updated last year
Alternatives and similar repositories for llm_hallucinations
Users interested in llm_hallucinations are comparing it to the repositories listed below.
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions · ☆114 · Updated 11 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) · ☆59 · Updated last year
- ☆53 · Updated last year
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" · ☆69 · Updated last year
- ☆29 · Updated last year
- GSM-Plus: Data, code, and evaluation for enhancing robust mathematical reasoning in math word problems · ☆62 · Updated last year
- Methods and evaluation for aligning language models temporally · ☆29 · Updated last year
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" · ☆17 · Updated last year
- ☆75 · Updated last year
- AbstainQA, ACL 2024 · ☆28 · Updated 10 months ago
- ☆41 · Updated last year
- ☆31 · Updated 2 years ago
- ☆47 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" · ☆61 · Updated last year
- ☆88 · Updated 2 years ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models · ☆56 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs · ☆77 · Updated 2 years ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering · ☆63 · Updated 9 months ago
- The official implementation of "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization…" · ☆16 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model · ☆68 · Updated 2 years ago
- Code accompanying the paper "DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering" · ☆16 · Updated 2 years ago
- ☆43 · Updated 5 months ago
- Resources for our ACL 2023 paper "Distilling Script Knowledge from Large Language Models for Constrained Language Planning" · ☆36 · Updated 2 years ago
- BeHonest: Benchmarking Honesty in Large Language Models · ☆34 · Updated last year
- ☆33 · Updated 2 years ago
- Analyzing LLM alignment via token distribution shift · ☆16 · Updated last year
- ☆81 · Updated 8 months ago
- ☆87 · Updated 2 years ago
- ☆44 · Updated 11 months ago
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) · ☆114 · Updated last year