Alternatives and similar repositories for Backdoor-Enhanced-Alignment (☆24, updated Dec 8, 2024)
Users interested in Backdoor-Enhanced-Alignment are comparing it to the repositories listed below.
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs (☆13, updated Jun 20, 2025)
- Code for “SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation” (ICLR 2025) (☆24, updated Oct 23, 2025)
- ☆48 (updated Sep 29, 2024)
- ☆19 (updated Jun 21, 2025)
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… (☆341, updated Feb 23, 2024)
- Code for experiments on self-prediction as a way to measure introspection in LLMs (☆16, updated Dec 10, 2024)
- Code and dataset for the paper: "Can Editing LLMs Inject Harm?" (☆21, updated Dec 26, 2025)
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) (☆49, updated Jan 15, 2026)
- ☆19 (updated Jul 31, 2025)
- Official repo of "Your Agent May Misevolve: Emergent Risks in Self-evolving LLM Agents" (☆59, updated Oct 28, 2025)
- A survey on harmful fine-tuning attacks for large language models (☆232, updated this week)
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" (☆174, updated Apr 23, 2025)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) (☆14, updated Jul 16, 2021)
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… (☆36, updated Mar 22, 2025)
- [NDSS'25] The official implementation of safety misalignment (☆17, updated Jan 8, 2025)
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications (☆89, updated Mar 30, 2025)
- ☆14 (updated Jun 18, 2023)
- ☆16 (updated Feb 8, 2024)
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety (☆93, updated May 9, 2024)
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … (☆59, updated Sep 11, 2025)
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) (☆33, updated Dec 16, 2022)
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) (☆15, updated Jan 3, 2023)
- ☆16 (updated Feb 23, 2025)
- Code for the paper "Byzantine-Resilient Decentralized Stochastic Optimization with Robust Aggregation Rules" (☆20, updated Apr 19, 2024)
- This repo covers the safety topic, including attacks, defenses, and studies related to reasoning and RL (☆61, updated Sep 5, 2025)
- PyTorch implementations of Client-Customized Adaptation for Parameter-Efficient Federated Learning (Findings of ACL 2023) (☆17, updated Oct 9, 2023)
- Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning (☆26, updated Oct 4, 2025)
- ICCV 2021. We find most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… (☆48, updated Apr 27, 2022)
- ☆19 (updated Jun 5, 2023)
- ☆23 (updated Jan 17, 2025)
- Fingerprint large language models (☆49, updated Jul 11, 2024)
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … (☆82, updated this week)
- [EMNLP 24] Official implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models (☆19, updated Mar 9, 2025)
- B-LRP is the repository for the paper "How Much Can I Trust You? — Quantifying Uncertainties in Explaining Neural Networks" (☆18, updated Jun 22, 2022)
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) (☆47, updated Feb 28, 2023)
- Code for reproducing the experiments of the ICML 2019 paper "Robust Learning from Untrusted Sources" (☆18, updated Jul 5, 2019)
- Code to replicate the Representation Noising paper and tools for evaluating defences against harmful fine-tuning (☆23, updated Dec 12, 2024)
- This is the official code for the paper "Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation" (☆54, updated Feb 2, 2025)
- ☆20 (updated Oct 5, 2023)