machinelearning4health / TextHoaxer
Implementation Code of TextHoaxer
☆15 · Updated 3 years ago
Alternatives and similar repositories for TextHoaxer
Users interested in TextHoaxer are comparing it to the libraries listed below:
- Natural Language Attacks in a Hard Label Black Box Setting. ☆48 · Updated 4 years ago
- An Open-Source Package for Textual Adversarial Attack. ☆757 · Updated 2 years ago
- An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight) ☆196 · Updated 2 years ago
- Must-read Papers on Textual Adversarial Attack and Defense ☆1,575 · Updated 6 months ago
- TrojanZoo provides a universal pytorch platform to conduct security research (especially backdoor attacks/defenses) of image classifica… ☆302 · Updated 3 months ago
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" ☆43 · Updated 3 years ago
- Hidden backdoor attack on NLP systems ☆47 · Updated 4 years ago
- [Findings of ACL 2023] Bridge the Gap Between CV and NLP! An Optimization-based Textual Adversarial Attack Framework. ☆14 · Updated 2 years ago
- ☆150 · Updated last year
- A Model for Natural Language Attack on Text Classification and Inference ☆525 · Updated 3 years ago
- ☆11 · Updated 5 years ago
- Code and data of the EMNLP 2021 paper "Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer" ☆46 · Updated 3 years ago
- Repo for arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers" ☆109 · Updated 2 years ago
- ☆26 · Updated last year
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆229 · Updated last year
- TrojanLM: Trojaning Language Models for Fun and Profit ☆16 · Updated 4 years ago
- ☆19 · Updated last year
- Simple PyTorch implementations of Badnets on MNIST and CIFAR10. ☆192 · Updated 3 years ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆282 · Updated 11 months ago
- ☆15 · Updated last year
- Bad Characters: Imperceptible NLP Attacks ☆35 · Updated last year
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… ☆310 · Updated 5 years ago
- ☆11 · Updated 3 years ago
- Composite Backdoor Attacks Against Large Language Models ☆21 · Updated last year
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" ☆23 · Updated 3 months ago
- ☆15 · Updated 2 years ago
- A list of recent adversarial attack and defense papers (including those on large language models) ☆44 · Updated this week
- A reproduction of the Neural Cleanse paper (published at IEEE S&P "Oakland"); simple but effective. ☆32 · Updated 4 years ago
- Paper list of Adversarial Examples ☆52 · Updated 2 years ago
- ☆36 · Updated last year