[ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion
☆59 · Oct 1, 2025 · Updated 5 months ago
Alternatives and similar repositories for CodeAttack
Users interested in CodeAttack are comparing it to the repositories listed below:
- ☆124 · Feb 3, 2025 · Updated last year
- ☆11 · Oct 25, 2024 · Updated last year
- Adversarial Attack for Pre-trained Code Models (☆10 · Jul 19, 2022 · Updated 3 years ago)
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state (☆72 · May 22, 2025 · Updated 9 months ago)
- [ACL 2025] Data and Code for Paper VLSBench: Unveiling Visual Leakage in Multimodal Safety (☆57 · Jul 21, 2025 · Updated 7 months ago)
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning (☆11 · Oct 29, 2024 · Updated last year)
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization (☆29 · Jul 9, 2024 · Updated last year)
- ☆704 · Jul 2, 2025 · Updated 8 months ago
- [ICSE'25] Aligning the Objective of LLM-based Program Repair (☆23 · Mar 8, 2025 · Updated last year)
- ☆21 · Jul 26, 2025 · Updated 7 months ago
- Code for NeurIPS 2024 Paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" (☆22 · May 6, 2025 · Updated 10 months ago)
- ☆19 · May 14, 2025 · Updated 10 months ago
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types (☆25 · Nov 29, 2024 · Updated last year)
- An easy-to-use Python framework to generate adversarial jailbreak prompts (☆826 · Mar 27, 2025 · Updated 11 months ago)
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling (☆34 · Nov 8, 2024 · Updated last year)
- ☆18 · Apr 7, 2025 · Updated 11 months ago
- Diagnostic Framework for LLMs and MLLMs (☆34 · Mar 2, 2026 · Updated 2 weeks ago)
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks (☆18 · Apr 24, 2024 · Updated last year)
- Code for paper "Defending against LLM Jailbreaking via Backtranslation" (☆34 · Aug 16, 2024 · Updated last year)
- ☆129 · Jul 7, 2025 · Updated 8 months ago
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794) (☆23 · Jul 26, 2024 · Updated last year)
- ☆33 · Jun 24, 2024 · Updated last year
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" (☆107 · May 20, 2025 · Updated 10 months ago)
- Code for ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" (☆37 · Jun 1, 2025 · Updated 9 months ago)
- [ICML 2025] An official source code for paper "FlipAttack: Jailbreak LLMs via Flipping" (☆168 · May 2, 2025 · Updated 10 months ago)
- Code for ICML 2019 Paper "On the Convergence and Robustness of Adversarial Training" (☆34 · Apr 28, 2020 · Updated 5 years ago)
- ☆14 · Feb 26, 2025 · Updated last year
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding (☆151 · Jul 19, 2024 · Updated last year)
- ☆196 · Nov 26, 2023 · Updated 2 years ago
- The repository of the paper "REEF: Representation Encoding Fingerprints for Large Language Models," which aims to protect the IP of open-source LLMs (☆75 · Jan 16, 2025 · Updated last year)
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models (☆82 · Mar 13, 2026 · Updated last week)
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI's APIs (☆345 · Feb 23, 2024 · Updated 2 years ago)
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] (☆546 · Apr 4, 2025 · Updated 11 months ago)
- ☆21 · Mar 17, 2025 · Updated last year
- Code for our NeurIPS 2024 paper Improved Generation of Adversarial Examples Against Safety-aligned LLMs (☆12 · Nov 7, 2024 · Updated last year)
- PyTorch implementation of NPAttack (☆12 · Jul 7, 2020 · Updated 5 years ago)
- Improving Alignment and Robustness with Circuit Breakers (☆259 · Sep 24, 2024 · Updated last year)
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" (☆172 · Feb 20, 2024 · Updated 2 years ago)
- Security Attacks on LLM-based Code Completion Tools (AAAI 2025) (☆21 · Dec 31, 2025 · Updated 2 months ago)