Repo for arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers"
☆109 · Dec 28, 2022 · Updated 3 years ago
Alternatives and similar repositories for text-adversarial-attack
Users that are interested in text-adversarial-attack are comparing it to the libraries listed below.
- ☆52 · May 24, 2023 · Updated 2 years ago
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks ☆18 · Apr 24, 2024 · Updated last year
- Code for the EMNLP 2020 long paper "BERT-Attack: Adversarial Attack Against BERT Using BERT" ☆206 · Sep 22, 2020 · Updated 5 years ago
- An Open-Source Package for Textual Adversarial Attack ☆772 · Jul 20, 2023 · Updated 2 years ago
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses (NeurIPS 2020 Spotlight) ☆27 · Dec 23, 2020 · Updated 5 years ago
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" ☆269 · May 13, 2024 · Updated last year
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆27 · Aug 27, 2024 · Updated last year
- PyTorch implementation for the pilot study on the robustness of latent diffusion models ☆12 · Jun 20, 2023 · Updated 2 years ago
- The official implementation of the pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" ☆70 · Oct 23, 2024 · Updated last year
- [Findings of ACL 2023] Bridge the Gap Between CV and NLP! An Optimization-based Textual Adversarial Attack Framework ☆14 · Aug 27, 2023 · Updated 2 years ago
- TAP: An automated jailbreaking method for black-box LLMs ☆224 · Dec 10, 2024 · Updated last year
- Code for the ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability ☆16 · Dec 8, 2022 · Updated 3 years ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆35 · Oct 23, 2024 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆43 · Jan 25, 2024 · Updated 2 years ago
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition ☆90 · May 19, 2024 · Updated last year
- Mostly recording papers about models' trustworthy applications, intending to include topics like model evaluation & analysis, security, c… ☆21 · May 30, 2023 · Updated 2 years ago
- Must-read Papers on Textual Adversarial Attack and Defense ☆1,571 · Jun 4, 2025 · Updated 9 months ago
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆66 · Mar 22, 2025 · Updated last year
- [ICLR 2024] The official implementation of the paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" ☆434 · Jan 22, 2025 · Updated last year
- ☆76 · Jan 21, 2026 · Updated 2 months ago
- TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs… ☆3,390 · Jul 10, 2025 · Updated 8 months ago
- [AAAI 2024] DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models ☆12 · Dec 5, 2024 · Updated last year
- A sample implementation of "Robust Graph Convolutional Networks Against Adversarial Attacks" (KDD 2019) ☆10 · Dec 8, 2020 · Updated 5 years ago
- Papers and resources related to the security and privacy of LLMs 🤖 ☆567 · Jun 8, 2025 · Updated 9 months ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,568 · Aug 2, 2024 · Updated last year
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆107 · May 20, 2025 · Updated 10 months ago
- ☆31 · Jul 14, 2023 · Updated 2 years ago
- ☆14 · Jul 13, 2022 · Updated 3 years ago
- A re-implementation of the "Red Teaming Language Models with Language Models" paper by Perez et al., 2022 ☆34 · Oct 9, 2023 · Updated 2 years ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆118 · Mar 26, 2024 · Updated last year
- Official implementation of the paper "Stable Diffusion is Unstable" ☆22 · May 21, 2024 · Updated last year
- ☆33 · Jun 24, 2024 · Updated last year
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆56 · Aug 17, 2024 · Updated last year
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆33 · Nov 4, 2020 · Updated 5 years ago
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆35 · Jul 3, 2021 · Updated 4 years ago
- Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks ☆24 · Dec 11, 2020 · Updated 5 years ago
- Code for "Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution" ☆31 · Oct 27, 2023 · Updated 2 years ago
- Repo for the paper "Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks" ☆55 · Updated this week
- ☆109 · Feb 16, 2024 · Updated 2 years ago