yunqing-me / AttackVLM
[NeurIPS-2023] Annual Conference on Neural Information Processing Systems
☆184 · Updated 2 months ago
Alternatives and similar repositories for AttackVLM:
Users who are interested in AttackVLM are comparing it to the libraries listed below.
- ☆92 · Updated last year
- A toolbox for benchmarking the trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) ☆132 · Updated this week
- A package that achieves a 95%+ transfer attack success rate against GPT-4 ☆16 · Updated 4 months ago
- [ICLR 2024 Spotlight 🔥] [Best Paper Award, SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal… ☆40 · Updated 8 months ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆50 · Updated 10 months ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆42 · Updated last month
- ☆37 · Updated 2 months ago
- The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns… ☆71 · Updated this week
- Accepted by ECCV 2024 ☆106 · Updated 4 months ago
- ☆41 · Updated 6 months ago
- The official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆47 · Updated 4 months ago
- ☆40 · Updated last year
- ☆31 · Updated 7 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆24 · Updated 4 months ago
- ☆66 · Updated 7 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆29 · Updated last year
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models ☆57 · Updated last month
- [AAAI 2024 Oral] Repository for the paper "Visual Adversarial Examples Jailbreak Large Language Models" ☆203 · Updated 9 months ago
- ☆27 · Updated last month
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆75 · Updated last year
- 😎 An up-to-date, curated list of papers, methods, and resources on attacks against large vision-language models ☆227 · Updated this week
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models [ICCV 2023 Oral] ☆54 · Updated last year
- ☆30 · Updated 3 months ago
- A list of text-to-image (T2I) safety papers, updated daily; discussion is welcome via GitHub Discussions ☆57 · Updated 6 months ago
- ☆31 · Updated 8 months ago
- Official codebase for Image Hijacks: Adversarial Images can Control Generative Models at Runtime ☆44 · Updated last year
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆130 · Updated this week
- The official repository for the paper "MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance" ☆34 · Updated 10 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆45 · Updated 2 months ago
- Accepted by IJCAI-24 Survey Track ☆194 · Updated 6 months ago