[ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP
☆43 · Feb 3, 2026 · Updated 2 months ago
Alternatives and similar repositories for XTransferBench
Users that are interested in XTransferBench are comparing it to the libraries listed below.
- [CVPR'25] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆30 · Jun 12, 2025 · Updated 9 months ago
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) ☆55 · Nov 5, 2025 · Updated 5 months ago
- [ICCV-2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ☆37 · Jul 10, 2025 · Updated 9 months ago
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" ☆17 · Jul 11, 2025 · Updated 9 months ago
- Code for the paper "AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin" ☆36 · Jul 10, 2025 · Updated 9 months ago
- ☆21 · Mar 17, 2025 · Updated last year
- A toolbox for benchmarking Multimodal LLM Agents' trustworthiness across truthfulness, controllability, safety and privacy dimensions thro… ☆64 · Jan 9, 2026 · Updated 3 months ago
- ☆12 · May 6, 2022 · Updated 3 years ago
- ☆25 · May 28, 2025 · Updated 10 months ago
- [NeurIPS25 & ICML25 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… ☆91 · Feb 3, 2026 · Updated 2 months ago
- On the Robustness of GUI Grounding Models Against Image Attacks ☆12 · Apr 8, 2025 · Updated last year
- ☆47 · Apr 7, 2025 · Updated last year
- (CVPR 2024) "Unsegment Anything by Simulating Deformation" ☆29 · May 27, 2024 · Updated last year
- [CVPR2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆27 · Jun 11, 2025 · Updated 10 months ago
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆22 · May 6, 2025 · Updated 11 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆35 · Oct 23, 2024 · Updated last year
- ☆59 · Jun 5, 2024 · Updated last year
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models ☆70 · Aug 7, 2025 · Updated 8 months ago
- [NDSS'25] The official implementation of safety misalignment. ☆18 · Jan 8, 2025 · Updated last year
- ☆33 · Feb 17, 2026 · Updated last month
- The repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆14 · Dec 16, 2024 · Updated last year
- [NeurIPS-2023] Annual Conference on Neural Information Processing Systems ☆226 · Dec 22, 2024 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model and Agent Safety ☆255 · Mar 18, 2026 · Updated 3 weeks ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" ☆69 · Oct 23, 2024 · Updated last year
- The implementation of our NeurIPS 2024 paper "DarkSAM: Fooling Segment Anything Model to Segment Nothing" ☆13 · Nov 4, 2024 · Updated last year
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆287 · Mar 13, 2026 · Updated 3 weeks ago
- The implementation of our ACM MM 2023 paper "AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning" ☆99 · Aug 25, 2023 · Updated 2 years ago
- ☆60 · Aug 11, 2024 · Updated last year
- This repository contains the official code for the paper "Prompt Injection: Parameterization of Fixed Inputs" ☆32 · Sep 13, 2024 · Updated last year
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs" ☆24 · May 20, 2025 · Updated 10 months ago
- [CVPR 2025] R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning ☆22 · Aug 28, 2025 · Updated 7 months ago
- ☆14 · Jul 17, 2024 · Updated last year
- ☆11 · May 17, 2021 · Updated 4 years ago
- ComfyUI version of WithAnyone ☆24 · Dec 18, 2025 · Updated 3 months ago
- Official code for FPR (accepted by CVPR 2025) ☆14 · Mar 19, 2025 · Updated last year
- Code for the paper "Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation" (CVPR 2023) ☆34 · May 26, 2023 · Updated 2 years ago
- ☆81 · Jul 23, 2024 · Updated last year
- ☆21 · Mar 18, 2026 · Updated 3 weeks ago
- ⚔️ Blades: A Unified Benchmark Suite for Attacks and Defenses in Federated Learning ☆156 · Feb 16, 2025 · Updated last year