trestad / mitigating-reversal-curse
Code for the paper 'Are We Falling in a Middle-Intelligence Trap? An Analysis and Mitigation of the Reversal Curse'
☆14 · Updated last year
Alternatives and similar repositories for mitigating-reversal-curse
Users interested in mitigating-reversal-curse are comparing it to the libraries listed below.
- ☆75 · Updated last year
- ☆47 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆118 · Updated last year
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆106 · Updated last month
- Repository for Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ☆164 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆115 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆137 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- ☆49 · Updated 10 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆82 · Updated 8 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- [ICLR'25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆62 · Updated 2 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆63 · Updated 10 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆120 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- AbstainQA, ACL 2024 ☆28 · Updated 11 months ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆59 · Updated 9 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆70 · Updated last year
- ☆68 · Updated last year
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 11 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆76 · Updated last year
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning. ☆82 · Updated 3 months ago
- ☆43 · Updated 6 months ago
- Code for the ACL-2022 paper "Knowledge Neurons in Pretrained Transformers" ☆172 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆43 · Updated 10 months ago
- Reproducing R1 for Code with Reliable Rewards ☆11 · Updated 5 months ago
- The repo for In-context Autoencoder ☆140 · Updated last year
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆27 · Updated 3 months ago