Code for Voice Jailbreak Attacks Against GPT-4o.
☆37 · Updated May 31, 2024
Alternatives and similar repositories for VoiceJailbreakAttack
Users interested in VoiceJailbreakAttack are comparing it to the repositories listed below.
- ☆14 · Updated Mar 9, 2025
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns — ☆13 · Updated Mar 1, 2025
- ☆39 · Updated Feb 6, 2026
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… — ☆88 · Updated May 9, 2025
- The official repository for the paper "MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance" — ☆45 · Updated Apr 21, 2024
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning — ☆32 · Updated Oct 10, 2022
- Audio Jailbreak: An Open Comprehensive Benchmark for Jailbreaking Large Audio-Language Models — ☆31 · Updated Oct 6, 2025
- Evaluating Durability: Benchmark Insights into Multimodal Watermarking — ☆12 · Updated Jun 7, 2024
- ☆30 · Updated Mar 13, 2026
- Code for the paper "RemovalNet: DNN model fingerprinting removal attack", IEEE TDSC 2023 — ☆10 · Updated Nov 27, 2023
- DeerSheep0314 / Re4-Learning-to-Re-contrast-Re-attend-Re-construct-for-Multi-interest-Recommendation — ☆23 · Updated Aug 4, 2022
- ☆25 · Updated Jan 17, 2025
- [ICLR'26 Oral] RedTeamCUA: Realistic Adversarial Testing of Computer-Use Agents in Hybrid Web-OS Environments — ☆41 · Updated Feb 9, 2026
- ☆60 · Updated Jun 5, 2024
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks — ☆112 · Updated Apr 15, 2024
- Your fine-tuned model is back to its original safety standards faster than you can say "SafetyLock"! — ☆11 · Updated Oct 16, 2024
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models — ☆51 · Updated Jan 11, 2025
- C++ lab exam problems from the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (北京邮电大学) — ☆14 · Updated Feb 20, 2021
- Up-to-date collection of papers on watermarking LLMs — ☆11 · Updated Dec 17, 2023
- ☆76 · Updated Mar 30, 2025
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] — ☆111 · Updated Sep 27, 2024
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" — ☆36 · Updated Oct 23, 2024
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi…" — ☆73 · Updated Feb 9, 2026
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" — ☆101 · Updated Mar 7, 2024
- Code for the paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" — ☆28 · Updated Sep 11, 2024
- ☆13 · Updated Jul 12, 2024
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents — ☆134 · Updated Feb 19, 2025
- Measuring and Controlling Persona Drift in Language Model Dialogs — ☆22 · Updated Feb 26, 2024
- ☆128 · Updated Nov 13, 2023
- ☆78 · Updated Dec 19, 2024
- ☆23 · Updated Jun 22, 2025
- PAL: Proxy-Guided Black-Box Attack on Large Language Models — ☆56 · Updated Aug 17, 2024
- Code to conduct an embedding attack on LLMs — ☆31 · Updated Jan 10, 2025
- [AAAI'25 (Oral)] Jailbreaking Large Vision-Language Models via Typographic Visual Prompts — ☆196 · Updated Jun 26, 2025
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024 — ☆116 · Updated Jun 13, 2024
- Open-source data and code for the MGNM — ☆25 · Updated Sep 15, 2022
- ☆27 · Updated May 30, 2025
- Code for the paper "Exploring Backdoor Vulnerabilities of Chat Models" — ☆19 · Updated Apr 13, 2024
- ☆21 · Updated May 23, 2025