Li-Hyn / LLM_CatastrophicForgetting
Code for LLM_Catastrophic_Forgetting via SAM.
☆10 · Updated last year
Alternatives and similar repositories for LLM_CatastrophicForgetting
Users interested in LLM_CatastrophicForgetting are comparing it to the repositories listed below.
- ☆41 · Updated 9 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆94 · Updated last year
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.…) ☆25 · Updated 9 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆66 · Updated last year
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated last month
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 9 months ago
- ☆41 · Updated 3 months ago
- ☆60 · Updated last year
- ☆10 · Updated last year
- ☆23 · Updated 4 months ago
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆55 · Updated last year
- ☆11 · Updated last year
- ☆38 · Updated last year
- [ICML 2024] Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning ☆17 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- Code for the safety test in "Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates" ☆18 · Updated last year
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated last year
- Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models ☆29 · Updated last year
- Code for the paper "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆17 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆80 · Updated 3 months ago
- [ICML 2024] In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation ☆59 · Updated last year
- Our research proposes the novel MoGU framework, which improves LLMs' safety while preserving their usability ☆15 · Updated 6 months ago
- ☆22 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆74 · Updated 4 months ago
- ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 5 months ago
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆58 · Updated 7 months ago
- The official repository for "Unnatural Language Are Not Bugs but Features for LLMs" ☆21 · Updated last month
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆25 · Updated last year
- Codebase for decoding compressed trust ☆24 · Updated last year