ictnlp / TruthX
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
☆144 Updated last year
Alternatives and similar repositories for TruthX
Users interested in TruthX are comparing it to the libraries listed below
- The official repository of our survey paper: "Towards a Unified View of Preference Learning for Large Language Models: A Survey" ☆189 Updated last year
- The repo for In-context Autoencoder ☆165 Updated last year
- Repo for the EMNLP'24 Paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆61 Updated 5 months ago
- ☆90 Updated last year
- Repository for Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ☆168 Updated last year
- [ACL 2024] RA-ISF: Learning to Answer and Understand from Retrieval Augmentation via Iterative Self-Feedback. ☆186 Updated last year
- Codebase for LLM Textual Hallucination Benchmark ☆72 Updated 9 months ago
- Code for ACL 2024 accepted paper titled "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language … ☆38 Updated last year
- LLM hallucination paper list ☆331 Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 Updated last year
- ☆25 Updated 2 years ago
- An awesome repository & A comprehensive survey on interpretability of LLM attention heads. ☆393 Updated 10 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆98 Updated 3 months ago
- ☆56 Updated 3 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆83 Updated last year
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024 ☆88 Updated last year
- Source code for Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts ☆17 Updated last year
- [TACL 2024] MAPS enables LLMs🤖 to mimic the human😁 translation process. ☆145 Updated last year
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆153 Updated 5 months ago
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆51 Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆151 Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆127 Updated last year
- [NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models ☆106 Updated last year
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆62 Updated 8 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆82 Updated last year
- ☆69 Updated 10 months ago
- ☆12 Updated 10 months ago
- Official code for our paper "Reasoning Models Hallucinate More: Factuality-Aware Reinforcement Learning for Large Reasoning Models" ☆20 Updated 2 months ago
- ☆180 Updated this week
- Code and data for "ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM" (NeurIPS 2024 Track Datasets and… ☆64 Updated 8 months ago