In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024)
☆62 · Updated Mar 30, 2024
Alternatives and similar repositories for Activation_Decoding
Users interested in Activation_Decoding are comparing it to the repositories listed below.
- Source code for "Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts" ☆17 · Updated Sep 2, 2024
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated Feb 27, 2024
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆543 · Updated Jan 17, 2025
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆64 · Updated Dec 25, 2023
- EMNLP 2024: "Knowledge Verification to Nip Hallucination in the Bud" ☆23 · Updated Mar 10, 2024
- "Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective" (ACL 2024) ☆57 · Updated Oct 28, 2024
- ☆19 · Updated Aug 4, 2025
- ☆78 · Updated May 22, 2024
- Collection of RLxLM experiments using minimal code ☆14 · Updated Feb 17, 2025
- Safety-J: Evaluating Safety with Critique ☆16 · Updated Jul 28, 2024
- ☆20 · Updated Nov 3, 2024
- ☆13 · Updated Jul 14, 2024
- A curated list of resources on Reinforcement Learning with Verifiable Rewards (RLVR) and the reasoning capability boundary of Large Language Models ☆87 · Updated Dec 12, 2025
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆39 · Updated Jul 18, 2025
- PAIR.withgoogle.com and friends' work on interpretability methods ☆224 · Updated Mar 14, 2026
- Repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models ☆567 · Updated Feb 12, 2024
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆146 · Updated Mar 14, 2024
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆573 · Updated Jan 28, 2025
- ☆51 · Updated Mar 2, 2024
- "In-Context Unlearning: Language Models as Few Shot Unlearners", Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*; ICML 2024 ☆30 · Updated Oct 18, 2023
- Contrastive decoding ☆206 · Updated Nov 14, 2022
- ☆14 · Updated Apr 29, 2025
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation ☆97 · Updated Jan 29, 2024
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆155 · Updated Apr 30, 2024
- Source code for the NeurIPS 2024 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆66 · Updated Apr 11, 2025
- Evaluate the Quality of Critique ☆36 · Updated Jun 1, 2024
- ☆25 · Updated Jul 15, 2025
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆81 · Updated Jan 18, 2024
- ☆18 · Updated Aug 19, 2024
- Improving Alignment and Robustness with Circuit Breakers ☆259 · Updated Sep 24, 2024
- Code repo for the paper "UTC-IE: A Unified Token-pair Classification Architecture for Information Extraction" ☆15 · Updated Aug 10, 2023
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆387 · Updated Oct 7, 2024
- Code repository for "RL Grokking Recipe: How RL Unlocks and Transfers New Algorithms in LLMs" ☆31 · Updated Oct 12, 2025
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs ☆57 · Updated Oct 30, 2025
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆23 · Updated Jan 1, 2025
- Code for the ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space" ☆143 · Updated Mar 26, 2024
- Source code for our paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆27 · Updated Aug 9, 2025
- Code for the safety test in "Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates" ☆22 · Updated Sep 21, 2025
- ☆16 · Updated Jun 19, 2023