Official repository for the paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep
☆178 · Apr 23, 2025 · Updated 11 months ago
Alternatives and similar repositories for shallow-vs-deep-alignment
Users interested in shallow-vs-deep-alignment are comparing it to the repositories listed below.
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆89 · Mar 30, 2025 · Updated last year
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs ☆13 · Jun 20, 2025 · Updated 9 months ago
- A survey on harmful fine-tuning attacks for large language models (ACM CSUR) ☆238 · Feb 25, 2026 · Updated last month
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆345 · Feb 23, 2024 · Updated 2 years ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆49 · Jan 15, 2026 · Updated 2 months ago
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆324 · May 13, 2025 · Updated 10 months ago
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… ☆36 · Mar 22, 2025 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆259 · Sep 24, 2024 · Updated last year
- Repository for the paper (AAAI 2024, Oral): Visual Adversarial Examples Jailbreak Large Language Models ☆271 · May 13, 2024 · Updated last year
- ☆24 · Dec 8, 2024 · Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆381 · Jan 23, 2025 · Updated last year
- NeurIPS'24 - LLM Safety Landscape ☆39 · Oct 21, 2025 · Updated 5 months ago
- ☆39 · May 17, 2025 · Updated 10 months ago
- Accepted by IJCAI-24 Survey Track ☆229 · Aug 25, 2024 · Updated last year
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆564 · Apr 4, 2025 · Updated last year
- TACL 2025: Investigating Adversarial Trigger Transfer in Large Language Models ☆19 · Aug 17, 2025 · Updated 7 months ago
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆108 · May 20, 2025 · Updated 10 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction" ☆370 · Jun 13, 2025 · Updated 9 months ago
- [ACL 2025] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆29 · Apr 2, 2025 · Updated last year
- ☆23 · Jun 13, 2024 · Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆915 · Aug 16, 2024 · Updated last year
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆67 · Jun 9, 2025 · Updated 10 months ago
- GitHub repo for NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆28 · Dec 21, 2025 · Updated 3 months ago
- ☆18 · Apr 7, 2025 · Updated last year
- Code repository for the paper [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples ☆30 · Jul 11, 2023 · Updated 2 years ago
- [CVPR 2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆27 · Jun 11, 2025 · Updated 10 months ago
- ☆48 · Sep 29, 2024 · Updated last year
- [NeurIPS 2024] Large Language Model Unlearning via Embedding-Corrupted Prompts ☆39 · Sep 26, 2024 · Updated last year
- ☆52 · Oct 23, 2023 · Updated 2 years ago
- Code to break Llama Guard ☆32 · Dec 7, 2023 · Updated 2 years ago
- ☆27 · Jun 5, 2024 · Updated last year
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… ☆81 · Jun 6, 2024 · Updated last year
- ☆23 · Apr 5, 2023 · Updated 3 years ago
- ☆20 · May 14, 2025 · Updated 10 months ago
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) ☆79 · Mar 1, 2025 · Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Jan 11, 2025 · Updated last year
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI ☆57 · Nov 13, 2023 · Updated 2 years ago
- Data and code for the paper: Finding Safety Neurons in Large Language Models ☆25 · Jan 29, 2026 · Updated 2 months ago
- ☆39 · May 21, 2025 · Updated 10 months ago