Repo for arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers"
☆111 · Dec 28, 2022 · Updated 3 years ago
Alternatives and similar repositories for text-adversarial-attack
Users interested in text-adversarial-attack are comparing it to the repositories listed below.
- ☆52 · May 24, 2023 · Updated 2 years ago
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks ☆18 · Apr 24, 2024 · Updated last year
- An Open-Source Package for Textual Adversarial Attack. ☆772 · Jul 20, 2023 · Updated 2 years ago
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses (NeurIPS 2020 Spotlight) ☆26 · Dec 23, 2020 · Updated 5 years ago
- ACL 2021 - Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble ☆17 · Jun 12, 2023 · Updated 2 years ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆27 · Aug 27, 2024 · Updated last year
- PyTorch implementation for the pilot study on the robustness of latent diffusion models. ☆12 · Jun 20, 2023 · Updated 2 years ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆69 · Oct 23, 2024 · Updated last year
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid… ☆23 · May 8, 2023 · Updated 2 years ago
- Code for the ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability. ☆16 · Dec 8, 2022 · Updated 3 years ago
- TAP: An automated jailbreaking method for black-box LLMs ☆225 · Dec 10, 2024 · Updated last year
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆35 · Oct 23, 2024 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆42 · Jan 25, 2024 · Updated 2 years ago
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆91 · May 19, 2024 · Updated last year
- Mostly recording papers about models' trustworthy applications. Intending to include topics like model evaluation & analysis, security, c… ☆21 · May 30, 2023 · Updated 2 years ago
- Must-read Papers on Textual Adversarial Attack and Defense ☆1,570 · Jun 4, 2025 · Updated 10 months ago
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆66 · Mar 22, 2025 · Updated last year
- [ICLR 2024] The official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆436 · Jan 22, 2025 · Updated last year
- TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs… ☆3,400 · Jul 10, 2025 · Updated 9 months ago
- A sample implementation of "Robust Graph Convolutional Networks Against Adversarial Attacks" (KDD 2019). ☆10 · Dec 8, 2020 · Updated 5 years ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,601 · Aug 2, 2024 · Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖 ☆571 · Jun 8, 2025 · Updated 10 months ago
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆108 · May 20, 2025 · Updated 10 months ago
- ☆31 · Jul 14, 2023 · Updated 2 years ago
- ☆14 · Jul 13, 2022 · Updated 3 years ago
- A re-implementation of the "Red Teaming Language Models with Language Models" paper by Perez et al., 2022 ☆34 · Oct 9, 2023 · Updated 2 years ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆117 · Mar 26, 2024 · Updated 2 years ago
- Official implementation of the paper "Stable Diffusion is Unstable" ☆22 · May 21, 2024 · Updated last year
- ☆33 · Jun 24, 2024 · Updated last year
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆33 · Nov 4, 2020 · Updated 5 years ago
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆35 · Jul 3, 2021 · Updated 4 years ago
- Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks ☆24 · Dec 11, 2020 · Updated 5 years ago
- Code for "Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution" ☆31 · Oct 27, 2023 · Updated 2 years ago
- ☆109 · Feb 16, 2024 · Updated 2 years ago
- Natural Language Attacks in a Hard Label Black Box Setting. ☆50 · May 26, 2021 · Updated 4 years ago
- Official codebase for "Image Hijacks: Adversarial Images can Control Generative Models at Runtime" ☆54 · Sep 19, 2023 · Updated 2 years ago
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" ☆36 · Jun 1, 2025 · Updated 10 months ago
- ☆20 · Oct 9, 2024 · Updated last year
- ☆15 · Nov 23, 2023 · Updated 2 years ago