AI-safety-book / AI-safety-book.github.io
☆12 · Updated last month
Alternatives and similar repositories for AI-safety-book.github.io:
Users interested in AI-safety-book.github.io are comparing it to the repositories listed below.
- Code for the ACM MM 2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models ☆24 · Updated 3 months ago
- ☆36 · Updated 9 months ago
- ☆43 · Updated 7 months ago
- ☆46 · Updated 3 months ago
- ☆69 · Updated 8 months ago
- A package that achieves a 95%+ transfer attack success rate against GPT-4 ☆17 · Updated 5 months ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆19 · Updated last year
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo: 124.220.228.133:11107 ☆17 · Updated 7 months ago
- ☆19 · Updated last month
- ☆24 · Updated 5 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆19 · Updated 5 months ago
- ☆32 · Updated 4 months ago
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks… ☆20 · Updated 6 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆32 · Updated last year
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…" ☆32 · Updated 4 months ago
- This is an official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆50 · Updated last week
- An up-to-date collection of papers on LLM watermarking ☆13 · Updated last year
- Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model ☆18 · Updated last month
- ☆20 · Updated 6 months ago
- [ICLR 2024 Spotlight 🔥] [Best Paper Award, SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal… ☆45 · Updated 9 months ago
- ☆40 · Updated 3 months ago
- ☆15 · Updated 2 years ago
- Official code for "ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users" (NeurIPS 2024) ☆14 · Updated 5 months ago
- Composite Backdoor Attacks Against Large Language Models ☆13 · Updated 11 months ago
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024 ☆30 · Updated 7 months ago
- ☆28 · Updated 6 months ago
- ☆31 · Updated 8 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆52 · Updated 8 months ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆26 · Updated 7 months ago
- ☆19 · Updated 11 months ago