Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups
☆51 · Dec 23, 2024 · Updated last year
Alternatives and similar repositories for PopupAttack
Users interested in PopupAttack are comparing it to the repositories listed below.
- ☆14 · Mar 9, 2025 · Updated last year
- The repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" · ☆14 · Dec 16, 2024 · Updated last year
- ☆12 · May 6, 2022 · Updated 3 years ago
- ☆14 · Feb 26, 2025 · Updated last year
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs" · ☆24 · May 20, 2025 · Updated 10 months ago
- ☆16 · Sep 4, 2024 · Updated last year
- ☆23 · Oct 11, 2024 · Updated last year
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents · ☆134 · Feb 19, 2025 · Updated last year
- OSWorld-Human: Benchmarking the Efficiency of Computer-Use Agents · ☆23 · Jan 6, 2026 · Updated 2 months ago
- ☆52 · Feb 8, 2025 · Updated last year
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … · ☆82 · Updated this week
- This is the official repository of the paper "Atomic-to-Compositional Generalization for Mobile Agents with A New Benchmark and Schedulin… · ☆13 · Jul 27, 2025 · Updated 7 months ago
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" · ☆15 · Aug 7, 2025 · Updated 7 months ago
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… · ☆21 · Oct 1, 2022 · Updated 3 years ago
- ☆23 · May 28, 2025 · Updated 9 months ago
- A data construction and evaluation framework to quantify privacy norm awareness of language models (LMs) and emerging privacy risk of LM … · ☆43 · Mar 4, 2025 · Updated last year
- This is the implementation for the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur… · ☆11 · Aug 24, 2022 · Updated 3 years ago
- ☆14 · Aug 18, 2022 · Updated 3 years ago
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" · ☆22 · May 6, 2025 · Updated 10 months ago
- Dataset and evaluation benchmark for Privacy Leakage Evaluation of Autonomous Web Agents · ☆36 · Updated this week
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models · ☆19 · Feb 18, 2025 · Updated last year
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking · ☆12 · Aug 22, 2025 · Updated 7 months ago
- ☆27 · Jun 5, 2024 · Updated last year
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" · ☆60 · Jan 15, 2025 · Updated last year
- Agent Security Bench (ASB) · ☆201 · Oct 27, 2025 · Updated 4 months ago
- ☆13 · Oct 21, 2021 · Updated 4 years ago
- Accepted by CVPR 2025 (highlight) · ☆24 · Jun 8, 2025 · Updated 9 months ago
- ☆19 · Jul 24, 2025 · Updated 7 months ago
- ☆15 · May 28, 2024 · Updated last year
- ☆11 · Dec 18, 2024 · Updated last year
- NeurIPS'24 - LLM Safety Landscape · ☆39 · Oct 21, 2025 · Updated 5 months ago
- ☆13 · Dec 28, 2024 · Updated last year
- ☆29 · Aug 31, 2025 · Updated 6 months ago
- ☆27 · Nov 9, 2022 · Updated 3 years ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning · ☆20 · Jan 27, 2024 · Updated 2 years ago
- Triton version of GQA flash attention, based on the tutorial · ☆12 · Aug 4, 2024 · Updated last year
- This is the source code for MEA-Defender. Our paper is accepted by the IEEE Symposium on Security and Privacy (S&P) 2024. · ☆29 · Nov 19, 2023 · Updated 2 years ago
- Code implementation of MuScleLoRA (accepted at ACL 2024) · ☆10 · Dec 1, 2024 · Updated last year
- This is for the ACM MM paper "Backdoor Attack on Crowd Counting" · ☆17 · Jul 10, 2022 · Updated 3 years ago