Repo for arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers"
☆111 · Dec 28, 2022 · Updated 3 years ago
Alternatives and similar repositories for text-adversarial-attack
Users interested in text-adversarial-attack are comparing it to the libraries listed below.
- ☆52 · May 24, 2023 · Updated 2 years ago
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks ☆18 · Apr 24, 2024 · Updated 2 years ago
- Code for the EMNLP 2020 long paper "BERT-Attack: Adversarial Attack Against BERT Using BERT" ☆207 · Sep 22, 2020 · Updated 5 years ago
- An open-source package for textual adversarial attacks. ☆774 · Jul 20, 2023 · Updated 2 years ago
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses (NeurIPS 2020 Spotlight) ☆26 · Dec 23, 2020 · Updated 5 years ago
- ACL 2021 - Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble ☆17 · Jun 12, 2023 · Updated 2 years ago
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" ☆273 · May 13, 2024 · Updated last year
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆27 · Aug 27, 2024 · Updated last year
- PyTorch implementation for the pilot study on the robustness of latent diffusion models. ☆12 · Jun 20, 2023 · Updated 2 years ago
- [Findings of ACL 2023] Bridge the Gap Between CV and NLP! An Optimization-based Textual Adversarial Attack Framework. ☆14 · Aug 27, 2023 · Updated 2 years ago
- Official code for the ACL 2023 paper "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid…" ☆23 · May 8, 2023 · Updated 2 years ago
- Code for the ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability. ☆16 · Dec 8, 2022 · Updated 3 years ago
- TAP: An automated jailbreaking method for black-box LLMs ☆228 · Dec 10, 2024 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆42 · Jan 25, 2024 · Updated 2 years ago
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆91 · May 19, 2024 · Updated last year
- A collection of papers on models' trustworthy applications, covering topics like model evaluation & analysis, security, c… ☆21 · May 30, 2023 · Updated 2 years ago
- Must-read Papers on Textual Adversarial Attack and Defense ☆1,571 · Jun 4, 2025 · Updated 11 months ago
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2…) ☆67 · Mar 22, 2025 · Updated last year
- [ICLR 2024] Official implementation of the paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" ☆440 · Jan 22, 2025 · Updated last year
- ☆77 · Jan 21, 2026 · Updated 3 months ago
- TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP. https://textattack.readthedocs… ☆3,409 · Apr 17, 2026 · Updated 2 weeks ago
- Sample implementation of "Robust Graph Convolutional Networks Against Adversarial Attacks" (KDD 2019). ☆10 · Dec 8, 2020 · Updated 5 years ago
- Papers and resources related to the security and privacy of LLMs 🤖 ☆576 · Jun 8, 2025 · Updated 10 months ago
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆108 · May 20, 2025 · Updated 11 months ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,638 · Aug 2, 2024 · Updated last year
- ☆31 · Jul 14, 2023 · Updated 2 years ago
- ☆14 · Jul 13, 2022 · Updated 3 years ago
- A re-implementation of the "Red Teaming Language Models with Language Models" paper by Perez et al., 2022 ☆34 · Oct 9, 2023 · Updated 2 years ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆119 · Mar 26, 2024 · Updated 2 years ago
- Official implementation of the paper "Stable Diffusion is Unstable" ☆22 · May 21, 2024 · Updated last year
- ☆33 · Jun 24, 2024 · Updated last year
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆56 · Aug 17, 2024 · Updated last year
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆33 · Nov 4, 2020 · Updated 5 years ago
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆35 · Jul 3, 2021 · Updated 4 years ago
- Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks ☆24 · Dec 11, 2020 · Updated 5 years ago
- Code for "Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution" ☆31 · Oct 27, 2023 · Updated 2 years ago
- ☆109 · Feb 16, 2024 · Updated 2 years ago
- Natural Language Attacks in a Hard Label Black Box Setting. ☆49 · May 26, 2021 · Updated 4 years ago
- Official codebase for "Image Hijacks: Adversarial Images can Control Generative Models at Runtime" ☆54 · Sep 19, 2023 · Updated 2 years ago