HITsz-TMG / Ext-Sub
Official implementation of our paper "Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation". A model-merging method for deficiency unlearning, compatible with Hugging Face PEFT (LoRA).
☆11 · Updated 10 months ago
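The module operation behind this kind of method can be sketched in plain NumPy: compose LoRA-style weight deltas by arithmetic, subtracting a "deficiency" module from an "expert" one. This is a minimal illustration under stated assumptions; the variable names, the scaling factor `lam`, and the plain subtraction are illustrative, not the paper's exact extraction-then-subtraction procedure.

```python
import numpy as np

# Minimal sketch of merging LoRA-style parameter-efficient modules by
# arithmetic, in the spirit of deficiency unlearning.
# NOTE: names (A_e, B_e, lam) and the plain subtraction are illustrative
# assumptions, not the repository's exact algorithm.

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden size and LoRA rank (toy values)

# "Expert" module: LoRA delta for the desired behaviour, delta = B @ A
A_e, B_e = rng.normal(size=(r, d)), rng.normal(size=(d, r))
# "Deficiency" module: LoRA delta fitted to the undesired behaviour
A_d, B_d = rng.normal(size=(r, d)), rng.normal(size=(d, r))

delta_expert = B_e @ A_e      # (d, d) weight update to keep
delta_deficient = B_d @ A_d   # (d, d) weight update to remove

lam = 0.5  # subtraction strength (hyperparameter)
delta_merged = delta_expert - lam * delta_deficient

# The merged delta would then be added to the base weight: W = W0 + delta_merged
print(delta_merged.shape)  # (16, 16)
```

Because LoRA deltas are plain low-rank matrices, such arithmetic composes cheaply without touching the frozen base model; in practice one would operate on the `A`/`B` tensors of saved PEFT adapters rather than dense deltas.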
Alternatives and similar repositories for Ext-Sub
Users interested in Ext-Sub are comparing it to the repositories listed below.
- ☆38 · Updated last year
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024 ☆77 · Updated 10 months ago
- This is the repo for the survey of Bias and Fairness in IR with LLMs. ☆54 · Updated 4 months ago
- ☆26 · Updated last year
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated 11 months ago
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆25 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 11 months ago
- ☆17 · Updated last year
- ☆47 · Updated last year
- ☆23 · Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆68 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆130 · Updated 10 months ago
- Code for ACL 2024 accepted paper titled "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language … ☆35 · Updated 7 months ago
- Mostly recording papers about models' trustworthy applications. Intending to include topics like model evaluation & analysis, security, c… ☆21 · Updated 2 years ago
- [NeurIPS 2023] Github repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- ☆17 · Updated last year
- [EMNLP 2023] Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models ☆25 · Updated last year
- ☆41 · Updated 10 months ago
- LLM Unlearning ☆172 · Updated last year
- ☆13 · Updated 11 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- Implementation of "ACL'24: When Do LLMs Need Retrieval Augmentation? Mitigating LLMs' Overconfidence Helps Retrieval Augmentation" ☆25 · Updated last year
- ☆11 · Updated 5 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆53 · Updated 11 months ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" in Findings of NAACL 2022 ☆30 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- ☆34 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- [ACL'25 Main] SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence! | Help your LLM make better use of context documents: a simple attention-based approach ☆21 · Updated 5 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 10 months ago