Alternatives and similar repositories for AdvAgent
AdvAgent: ☆23 · Updated May 28, 2025
Users interested in AdvAgent are comparing it to the libraries listed below.
- On the Robustness of GUI Grounding Models Against Image Attacks ☆12 · Updated Apr 8, 2025
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" ☆17 · Updated Jul 11, 2025
- ☆34 · Updated Mar 6, 2025
- ☆29 · Updated Aug 31, 2025
- Implementation of the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur…" ☆11 · Updated Aug 24, 2022
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP ☆40 · Updated Feb 3, 2026
- ☆32 · Updated Sep 11, 2025
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models ☆27 · Updated Mar 15, 2025
- ☆13 · Updated Oct 21, 2021
- Code repo for the paper "Attacking Vision-Language Computer Agents via Pop-ups" ☆51 · Updated Dec 23, 2024
- Code for the paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" ☆28 · Updated Sep 11, 2024
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024. ☆29 · Updated Nov 19, 2023
- ☆60 · Updated Aug 11, 2024
- Implementation for Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) ☆15 · Updated Oct 8, 2020
- Code & Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆111 · Updated Sep 27, 2024
- ☆18 · Updated Aug 15, 2022
- Fine-tuning Quantized Neural Networks with Zeroth-order Optimization ☆16 · Updated Sep 17, 2025
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆45 · Updated May 17, 2023
- ToolFuzz is a fuzzing framework designed to test your LLM Agent tools. ☆37 · Updated Jul 20, 2025
- ☆48 · Updated Sep 29, 2024
- ☆20 · Updated Feb 11, 2024
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆13 · Updated Aug 17, 2023
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… ☆21 · Updated Oct 1, 2022
- Official eval code for ROVER: Benchmarking Reciprocal Cross-Modal Reasoning for Omnimodal Generation ☆27 · Updated Dec 12, 2025
- This dataset contains results from all rounds of Adversarial Nibbler. This data includes adversarial prompts fed into public generative t… ☆25 · Updated Feb 3, 2025
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated May 7, 2024
- ☆18 · Updated Jan 3, 2025
- ☆11 · Updated Nov 12, 2024
- Code for Findings of ACL 2021 "Differential Privacy for Text Analytics via Natural Text Sanitization" ☆32 · Updated Mar 15, 2022
- ☆18 · Updated Nov 13, 2021
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆84 · Updated Sep 1, 2025
- ☆23 · Updated Sep 20, 2023
- SaTML 2023; 1st place in the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet. ☆27 · Updated Dec 29, 2022
- Documentation of the TensorFlow/Keras implementation of Latent Backdoor Attacks. Please see the paper for details: Latent Back… ☆22 · Updated Sep 8, 2021
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated Feb 27, 2020
- A simple template for theoretical computer science assignments ☆11 · Updated Sep 6, 2023
- Code for the paper "RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models" (EMNLP 2021) ☆25 · Updated Oct 21, 2021
- Official implementation of the paper "PromptSmooth: Certifying Robustness of Medical Vision-Language Models via Prompt Learning" ☆24 · Updated Apr 17, 2025
- ☆44 · Updated Feb 26, 2025