roywang021 / UMK
Code for the ACM MM 2024 paper "White-box Multimodal Jailbreaks Against Large Vision-Language Models" (☆11, updated 3 months ago)
Related projects
Alternatives and complementary repositories for UMK
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" (☆50, updated last year)
- [ICCV-2023] Gradient inversion attack, federated learning, generative adversarial network (☆32, updated 4 months ago)
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) (☆17, updated 7 months ago)
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2…) (☆40, updated 3 weeks ago)
- Composite Backdoor Attacks Against Large Language Models (☆9, updated 7 months ago)
- [ECCV'24 Oral] Official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" (☆15, updated 3 weeks ago)
- Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model (☆17, updated 2 months ago)
- A list of recent papers about adversarial learning (☆74, updated this week)
- 😎 An up-to-date, curated list of papers, methods, and resources on attacks against Large Vision-Language Models (☆133, updated last week)
- Official repository for "Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study" (ICCV2023…) (☆20, updated last year)
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu (☆24, updated 2 months ago)
- Official implementation of the CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consist…" (☆19, updated last year)
- Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks (☆76, updated last year)
- A curated list of papers on the transferability of adversarial examples (☆54, updated 4 months ago)
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" (☆22, updated 2 months ago)
- WaNet: Imperceptible Warping-based Backdoor Attack (ICLR 2021) (☆113, updated last week)
- Repository for the USENIX Security 2023 paper "Hard-label Black-box Universal Adversarial Patch Attack" (☆14, updated last year)
- Repository for the AAAI 2024 Oral paper "Visual Adversarial Examples Jailbreak Large Language Models" (☆183, updated 6 months ago)