baixianghuang / HalluEditBench
Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025)
☆27 · Updated 5 months ago
Alternatives and similar repositories for HalluEditBench
Users interested in HalluEditBench are comparing it to the repositories listed below.
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆60 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- ☆24 · Updated 8 months ago
- ☆30 · Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆162 · Updated last month
- [ACL'24] Chain of Thought (CoT) significantly improves the reasoning abilities of large language models (LLMs). However, the correla… ☆47 · Updated 7 months ago
- AbstainQA (ACL 2024) ☆28 · Updated last year
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆87 · Updated last year
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆127 · Updated last year
- Resources and paper list for "Scaling Environments for Agents". This repository accompanies our survey on how environments contribute to … ☆53 · Updated 2 weeks ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆65 · Updated last year
- [NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models ☆105 · Updated last year
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆140 · Updated last year
- ☆36 · Updated last year
- Reasoning Activation in LLMs via Small Model Transfer (NeurIPS 2025) ☆20 · Updated 2 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆68 · Updated last year
- [ACL 2025 Best Paper] Language Models Resist Alignment ☆40 · Updated 6 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆95 · Updated 7 months ago
- ☆43 · Updated last year
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆134 · Updated 9 months ago
- ☆51 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆191 · Updated 11 months ago
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆49 · Updated 3 months ago
- ☆41 · Updated 2 years ago
- [EMNLP 2025 Main] ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆39 · Updated 4 months ago
- LLM Unlearning ☆178 · Updated 2 years ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆150 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆64 · Updated last year
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated last year