homles11 / SaLoRA
Code for "SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation" (ICLR 2025)
☆14 · Updated 2 months ago
Alternatives and similar repositories for SaLoRA
Users interested in SaLoRA are comparing it to the repositories listed below.
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆15 · Updated 8 months ago
- Official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable" ☆16 · Updated 2 months ago
- Official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba…" ☆27 · Updated 2 months ago
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆43 · Updated 6 months ago
- ☆20 · Updated 5 months ago
- ☆11 · Updated 2 years ago
- ☆22 · Updated 9 months ago
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆11 · Updated 3 months ago
- [ECCV 2024] "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, … ☆21 · Updated last week
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆42 · Updated 7 months ago
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆10 · Updated 7 months ago
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs ☆13 · Updated 3 months ago
- A repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- ☆41 · Updated 8 months ago
- ☆16 · Updated last year
- Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning (NeurIPS 2021) ☆8 · Updated 3 years ago
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆15 · Updated 7 months ago
- Code and data accompanying the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆31 · Updated 5 months ago
- ☆21 · Updated 2 months ago
- SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆15 · Updated 2 months ago
- Official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆21 · Updated 8 months ago
- Identification of the Adversary from a Single Adversarial Example (ICML 2023) ☆10 · Updated 10 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 6 months ago
- A repo covering safety topics, including attacks, defenses, and studies related to reasoning and RL ☆19 · Updated this week
- [ICLR 2025] Understanding and Enhancing Safety Mechanisms of LLMs via Safety-Specific Neuron ☆14 · Updated last month
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆30 · Updated last year
- [NeurIPS 2023, Spotlight] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆70 · Updated last year
- Official repo for the NeurIPS 2024 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models" ☆14 · Updated 5 months ago
- ☆34 · Updated 5 months ago
- Implementation of "Robust Weight Perturbation for Adversarial Training" (IJCAI 2022) ☆14 · Updated 2 years ago