git-disl / awesome_LLM-harmful-fine-tuning-papers
A survey on harmful fine-tuning attacks for large language models
☆232 · Jan 9, 2026 · Updated last month
Alternatives and similar repositories for awesome_LLM-harmful-fine-tuning-papers
Users that are interested in awesome_LLM-harmful-fine-tuning-papers are comparing it to the libraries listed below
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS2024)☆49Jan 15, 2026Updated last month
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS2024)☆25Sep 10, 2024Updated last year
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba…☆36Mar 22, 2025Updated 10 months ago
- ☆19Jun 21, 2025Updated 7 months ago
- Official Repository for The Paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep☆173Apr 23, 2025Updated 9 months ago
- ☆24Dec 8, 2024Updated last year
- ☆14Feb 26, 2025Updated 11 months ago
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20…☆338Feb 23, 2024Updated last year
- code space of paper "Safety Layers in Aligned Large Language Models: The Key to LLM Security" (ICLR 2025)☆21Apr 26, 2025Updated 9 months ago
- ☆20Oct 28, 2025Updated 3 months ago
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide…☆1,769Feb 1, 2026Updated 2 weeks ago
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … · ☆82 · Updated this week
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. · ☆18 · Jan 14, 2025 · Updated last year
- NeurIPS'24 - LLM Safety Landscape · ☆39 · Oct 21, 2025 · Updated 3 months ago
- ☆10 · Oct 31, 2022 · Updated 3 years ago
- This is the official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable". · ☆28 · Mar 11, 2025 · Updated 11 months ago
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… · ☆23 · Oct 30, 2023 · Updated 2 years ago
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). · ☆1,856 · Jan 24, 2026 · Updated 3 weeks ago
- [ACL 2025] Data and Code for Paper VLSBench: Unveiling Visual Leakage in Multimodal Safety · ☆54 · Jul 21, 2025 · Updated 6 months ago
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models · ☆274 · Feb 2, 2026 · Updated last week
- Your finetuned model's back to its original safety standards faster than you can say "SafetyLock"! · ☆11 · Oct 16, 2024 · Updated last year
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs · ☆13 · Jun 20, 2025 · Updated 7 months ago
- ☆14 · Jan 3, 2025 · Updated last year
- This repo is for the safety topic, including attacks, defenses and studies related to reasoning and RL · ☆59 · Sep 5, 2025 · Updated 5 months ago
- Code for "SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation" (ICLR 2025) · ☆24 · Oct 23, 2025 · Updated 3 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety · ☆227 · Feb 3, 2026 · Updated last week
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding · ☆151 · Jul 19, 2024 · Updated last year
- Accepted by IJCAI-24 Survey Track · ☆231 · Aug 25, 2024 · Updated last year
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) · ☆75 · Mar 1, 2025 · Updated 11 months ago
- This is the official code for the paper "Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation" · ☆53 · Feb 2, 2025 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers · ☆258 · Sep 24, 2024 · Updated last year
- ☆22 · Dec 22, 2024 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications · ☆89 · Mar 30, 2025 · Updated 10 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … · ☆87 · Aug 25, 2025 · Updated 5 months ago
- A curated list of LLM Interpretability related material - Tutorial, Library, Survey, Paper, Blog, etc. · ☆292 · Jan 22, 2026 · Updated 3 weeks ago
- A resource repository for machine unlearning in large language models · ☆534 · Jan 6, 2026 · Updated last month
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] · ☆527 · Apr 4, 2025 · Updated 10 months ago
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" · ☆66 · Jun 9, 2025 · Updated 8 months ago
- ☆44 · Oct 1, 2024 · Updated last year