jiaxiaojunQAQ / FOA-Attack
Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025)
☆47 · Updated Nov 5, 2025
Alternatives and similar repositories for FOA-Attack
Users interested in FOA-Attack are comparing it to the repositories listed below.
- A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation ☆57 · Updated Jan 23, 2026
- [NeurIPS25 & ICML25 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… ☆87 · Updated Feb 3, 2026
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆15 · Updated Aug 7, 2025
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP ☆37 · Updated Feb 3, 2026
- ☆57 · Updated Jun 5, 2024
- Code repository for the CVPR 2024 paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" ☆25 · Updated May 29, 2024
- [NeurIPS-2023] Annual Conference on Neural Information Processing Systems ☆227 · Updated Dec 22, 2024
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆59 · Updated Sep 11, 2025
- ☆14 · Updated Feb 26, 2025
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" ☆37 · Updated Jun 1, 2025
- Improving fast adversarial training with prior-guided knowledge (TPAMI 2024) ☆43 · Updated Apr 21, 2024
- 😎 Up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources. ☆490 · Updated Jan 27, 2026
- PyTorch implementation of NPAttack ☆12 · Updated Jul 7, 2020
- Code for the ACM MM paper "Backdoor Attack on Crowd Counting" ☆17 · Updated Jul 10, 2022
- ☆72 · Updated Mar 30, 2025
- ☆13 · Updated Dec 8, 2022
- Accepted by CVPR 2025 (highlight) ☆22 · Updated Jun 8, 2025
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" ☆65 · Updated Mar 20, 2023
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents ☆31 · Updated Jun 24, 2025
- [CVPR 2025] Official implementation for JOOD "Playing the Fool: Jailbreaking LLMs and Multimodal LLMs with Out-of-Distribution Strategy" ☆20 · Updated Jun 11, 2025
- ☆48 · Updated Apr 7, 2025
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆36 · Updated Jul 3, 2021
- [CVPR'25] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆29 · Updated Jun 12, 2025
- An implementation of the penalty-based bilevel gradient descent (PBGD) algorithm and the iterative differentiation (ITD/RHG) methods. ☆19 · Updated Feb 13, 2023
- Code for the CVPR 2020 article "Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization" ☆13 · Updated Jul 13, 2020
- ☆20 · Updated Jan 15, 2024
- Data-Efficient Backdoor Attacks ☆20 · Updated Jun 15, 2022
- PyTorch Dataloader - GTSRB (German Traffic Sign Recognition) ☆20 · Updated Apr 16, 2019
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" ☆266 · Updated May 13, 2024
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models ☆66 · Updated Aug 7, 2025
- ☆23 · Updated Apr 7, 2025
- Implementation of "An Invisible Black-box Backdoor Attack through Frequency Domain" ☆20 · Updated Sep 29, 2022
- ☆20 · Updated May 6, 2022
- ☆58 · Updated Aug 11, 2024
- The official implementation of the USENIX Security '23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆20 · Updated Apr 27, 2023
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated Feb 10, 2023
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆109 · Updated Sep 27, 2024
- Implementation of our paper "Open-sourced Dataset Protection via Backdoor Watermarking", accepted by the NeurIPS Workshop on … ☆23 · Updated Oct 13, 2021
- [ICCVW 2025 (Oral)] Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models ☆28 · Updated Oct 20, 2025