☆37 · Updated Oct 2, 2024
Alternatives and similar repositories for EIA_against_webagent
Users interested in EIA_against_webagent are comparing it to the repositories listed below.
- ☆18 · Updated Jan 3, 2025
- ☆22 · Updated May 28, 2025
- ☆23 · Updated Oct 25, 2024
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents · ☆130 · Updated Feb 19, 2025
- ☆20 · Updated Feb 3, 2025
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… · ☆88 · Updated May 9, 2025
- ☆34 · Updated Mar 6, 2025
- PyTorch implementation of NPAttack · ☆12 · Updated Jul 7, 2020
- ☆35 · Updated May 21, 2025
- Code for the paper "Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation" (CVPR 2023) · ☆34 · Updated May 26, 2023
- Code and data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" (NeurIPS 2024) · ☆109 · Updated Sep 27, 2024
- ☆118 · Updated Jul 2, 2024
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images · ☆43 · Updated Jan 25, 2024
- DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails · ☆31 · Updated Feb 26, 2025
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents · ☆31 · Updated Jun 24, 2025
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them (NeurIPS 2020) · ☆36 · Updated Jul 3, 2021
- AIR-Bench 2024: a safety benchmark that aligns with emerging government regulations and company policies · ☆28 · Updated Aug 14, 2024
- Implementation of "Robust Weight Perturbation for Adversarial Training" (IJCAI 2022) · ☆16 · Updated Jul 1, 2022
- ☆28 · Updated Mar 20, 2024
- ☆20 · Updated Jan 15, 2024
- ☆178 · Updated Oct 31, 2025
- [ACL 2025] Official code for "AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection" · ☆33 · Updated Aug 4, 2025
- ☆32 · Updated Mar 12, 2025
- [ICLR'26 Oral] RedTeamCUA: Realistic Adversarial Testing of Computer-Use Agents in Hybrid Web-OS Environments · ☆38 · Updated Feb 9, 2026
- ☆45 · Updated Jun 11, 2023
- A new algorithm that formulates jailbreaking as a reasoning problem · ☆26 · Updated Jul 2, 2025
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset · ☆49 · Updated Apr 8, 2024
- Code for the paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" · ☆27 · Updated Sep 11, 2024
- Code for the NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" · ☆42 · Updated Feb 10, 2023
- [ICCVW 2025 Oral] Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models · ☆28 · Updated Oct 20, 2025
- Repo for the paper "Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks" · ☆52 · Updated Feb 25, 2026
- ☆21 · Updated Mar 17, 2025
- [ICLR 2024] Official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" · ☆430 · Updated Jan 22, 2025
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" · ☆37 · Updated Jun 1, 2025
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models · ☆33 · Updated May 21, 2025
- Implementation of the paper "A Hybrid Approach to Privacy Preserving Federated Learning" (https://arxiv.org/pdf/1812.03224.pdf) · ☆24 · Updated Jun 9, 2020
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" · ☆61 · Updated Aug 8, 2024
- ☆71 · Updated Feb 16, 2025
- ☆109 · Updated Feb 16, 2024