OSU-NLP-Group / LLM-Knowledge-Conflict
[ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts"
☆61 · Updated 7 months ago
Related projects
Alternatives and complementary repositories for LLM-Knowledge-Conflict
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆83 · Updated 4 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆102 · Updated 2 months ago
- ☆66 · Updated 6 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆54 · Updated 10 months ago
- ☆54 · Updated 2 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆50 · Updated 7 months ago
- ☆37 · Updated 10 months ago
- Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆27 · Updated last month
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆49 · Updated 9 months ago
- ☆25 · Updated last year
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆57 · Updated last month
- [ACL 2024] Code for "MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation" ☆31 · Updated 4 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆30 · Updated 3 months ago
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning" ☆92 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆46 · Updated last year
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆68 · Updated 5 months ago
- ☆39 · Updated 7 months ago
- Code and data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆60 · Updated 8 months ago
- ☆83 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆45 · Updated 7 months ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆39 · Updated last year
- ☆40 · Updated 11 months ago
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆76 · Updated 3 months ago
- Do Large Language Models Know What They Don't Know? ☆85 · Updated 2 weeks ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆36 · Updated last month
- Evaluating Mathematical Reasoning Beyond Accuracy ☆37 · Updated 7 months ago
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆97 · Updated 7 months ago
- Implementation of the paper "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆62 · Updated 3 months ago
- ☆25 · Updated last month