wooozihui/jailbreakfunction
Official code of the paper "The Dark Side of Function Calling: Pathways to Jailbreaking Large Language Models"
Related projects
Alternatives and complementary repositories for jailbreakfunction
- JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and further assess …
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker"
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024)
- PAL: Proxy-Guided Black-Box Attack on Large Language Models
- [FCS'24] LVLM Safety paper
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…
- Code for the paper "Defending against LLM Jailbreaking via Backtranslation"
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning"
- A collection of automated evaluators for assessing jailbreak attempts.
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state.
- Multilingual safety benchmark for Large Language Models
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models"
- Independent robustness evaluation of "Improving Alignment and Robustness with Short Circuiting"
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks
- Weak-to-Strong Jailbreaking on Large Language Models
- Code and dataset for the paper "Can Editing LLMs Inject Harm?"
- Interpretable Contrastive Monte Carlo Tree Search Reasoning
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion
- Attack that induces hallucinations in LLMs
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers"
- Official repo of the ACL 2024 paper "ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs"
- Repo for the research paper "Aligning LLMs to Be Robust Against Prompt Injection"
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLMs
- Jailbreaking Large Vision-language Models via Typographic Visual Prompts
- A lightweight library for large language model (LLM) jailbreaking defense.
- Agent Security Bench (ASB)
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024]