shenyizg / NewAdversarialAttackPaperLinks
A list of recent adversarial attack and defense papers (including those on large language models)
☆42 · Updated last week
Alternatives and similar repositories for NewAdversarialAttackPaper
Users interested in NewAdversarialAttackPaper are comparing it to the repositories listed below.
- ☆33 · Updated 10 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆184 · Updated 6 months ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆269 · Updated 7 months ago
- A list of recent papers about adversarial learning ☆199 · Updated this week
- [ICLR24] Official Repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆37 · Updated last year
- A curated list of trustworthy Generative AI papers (updated daily) ☆73 · Updated 11 months ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆39 · Updated 2 months ago
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆43 · Updated 7 months ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning ☆216 · Updated last year
- Awesome jailbreak and red-teaming arXiv papers (automatically updated every 12 hours) ☆53 · Updated this week
- ☆50 · Updated last year
- ☆35 · Updated 11 months ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆16 · Updated last year
- An open-source toolkit for textual backdoor attacks and defenses (NeurIPS 2022 D&B, Spotlight) ☆189 · Updated 2 years ago
- ☆102 · Updated last year
- A toolbox for backdoor attacks ☆22 · Updated 2 years ago
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆89 · Updated 11 months ago
- ☆59 · Updated 3 months ago
- Code repository for the paper "Towards a Proactive ML Approach for Detecting Backdoor Poison Samples" [USENIX Security 2023] ☆27 · Updated 2 years ago
- Code repository for the paper "Understanding the Dark Side of LLMs’ Intrinsic Self-Correction" ☆61 · Updated 8 months ago
- Official repository for CVPR'23 paper: Detecting Backdoors in Pre-trained Encoders ☆35 · Updated last year
- Official Code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users (NeurIPS 2024) ☆16 · Updated 10 months ago
- ☆61 · Updated 8 months ago
- ☆25 · Updated last year
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" ☆234 · Updated last year
- ☆47 · Updated last year
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 2 months ago
- ☆223 · Updated 2 weeks ago
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution" ☆40 · Updated 9 months ago
- ☆24 · Updated 2 years ago