jlko / long_hallucinations
Codebase for reproducing the paragraph-length experiments of the semantic uncertainty paper.
☆69 · Updated last year
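For orientation, the repository extends the paper's semantic-entropy measure to paragraph-length generations. Below is a minimal sketch of the underlying idea, assuming a list of sampled answers and a caller-supplied meaning-equivalence check; the names `semantic_entropy` and `same_meaning` are illustrative, not the repo's API, and the toy exact-match lambda stands in for the paper's bidirectional-entailment NLI model.

```python
import math

def semantic_entropy(answers, same_meaning):
    """Cluster sampled answers by meaning, then compute entropy over the
    semantic clusters (uniform sampling probabilities assumed)."""
    clusters = []  # each cluster holds answers judged equivalent in meaning
    for a in answers:
        for c in clusters:
            if same_meaning(a, c[0]):  # equivalence check (NLI in the paper)
                c.append(a)
                break
        else:
            clusters.append([a])
    n = len(answers)
    # each cluster's probability mass = fraction of samples it absorbed
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Toy usage: two of three samples agree in meaning -> moderate entropy.
print(semantic_entropy(
    ["Paris", "Paris", "Lyon"],
    same_meaning=lambda x, y: x == y,
))
```

Low entropy indicates the samples agree semantically; high entropy is the paper's signal for likely hallucination.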
Alternatives and similar repositories for long_hallucinations
Users interested in long_hallucinations are comparing it to the libraries listed below.
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆76 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆118 · Updated last year
- ☆37 · Updated 8 months ago
- [NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models ☆102 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆115 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆119 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆63 · Updated 9 months ago
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆122 · Updated 7 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆157 · Updated 7 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆125 · Updated 2 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- ☆47 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆43 · Updated 10 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆137 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Implementation of the paper "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆73 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 9 months ago
- [ICLR'25] DataGen: Unified Synthetic Dataset Generation via Large Language Models ☆64 · Updated 6 months ago
- A Survey of Attributions for Large Language Models ☆213 · Updated last year
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆186 · Updated 7 months ago
- A Survey of Hallucination in Large Foundation Models ☆54 · Updated last year
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆70 · Updated last year
- ☆75 · Updated last year
- [IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection ☆89 · Updated last year
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆108 · Updated 2 months ago
- A Survey on Data Selection for Language Models ☆247 · Updated 4 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆127 · Updated 6 months ago
- Code implementation of synthetic continued pretraining ☆129 · Updated 8 months ago