WanliYoung / Collapse-in-Model-Editing
Code and data repository for two papers (ACL & EMNLP 2024) on the topic of collapse in model editing.
☆10 · Updated 7 months ago
Alternatives and similar repositories for Collapse-in-Model-Editing
Users interested in Collapse-in-Model-Editing are comparing it to the repositories listed below.
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- UltraEdit: Training-, Subject-, and Memory-Free Lifelong Editing in Large Language Models ☆20 · Updated 2 weeks ago
- ☆13 · Updated 10 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 10 months ago
- ☆26 · Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆66 · Updated last year
- Evaluation of the Cross-Lingual Knowledge Alignment in LLMs ☆9 · Updated last year
- ☆17 · Updated last year
- Code for the ACL-2022 paper "Knowledge Neurons in Pretrained Transformers" ☆170 · Updated last year
- ☆11 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- ☆23 · Updated last year
- ☆29 · Updated last month
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆38 · Updated 8 months ago
- Code and data repository for "The Mirage of Model Editing: Revisiting Evaluation in the Wild" ☆15 · Updated 2 weeks ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆127 · Updated 10 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆74 · Updated 4 months ago
- ☆33 · Updated last year
- Repo for the survey of Bias and Fairness in IR with LLMs ☆54 · Updated 3 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated 11 months ago
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆93 · Updated 5 months ago
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆25 · Updated 9 months ago
- Repo for the paper "Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge" ☆14 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs) ☆149 · Updated last year
- ☆29 · Updated last year
- A Survey on Data Selection for Language Models ☆241 · Updated 2 months ago
- ☆51 · Updated 2 years ago
- ☆64 · Updated 2 years ago
- Monitoring the health of ARR ☆24 · Updated 2 months ago
- ☆36 · Updated 2 years ago