tssai-lab / CrowdwiseKit
A Tool Kit for Crowdsourcing Learning
☆29 · Updated 2 weeks ago
Alternatives and similar repositories for CrowdwiseKit:
Users interested in CrowdwiseKit are comparing it to the libraries listed below.
- A PyTorch implementation of some backdoor attack algorithms, including BadNets, SIG, FIBA, FTrojan ... ☆15 · Updated 3 months ago
- Official PyTorch implementation of Towards Adversarial Attack on Vision-Language Pre-training Models ☆59 · Updated 2 years ago
- This GitHub repository summarizes research papers on AI security from the four top academic conferences. ☆108 · Updated last year
- ☆11 · Updated 9 months ago
- [ECCV2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajector… ☆24 · Updated 4 months ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆94 · Updated 2 years ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆121 · Updated 4 months ago
- Simple PyTorch implementations of BadNets on MNIST and CIFAR10. ☆170 · Updated 2 years ago
- Spectrum simulation attack (ECCV'2022 Oral) towards boosting the transferability of adversarial examples ☆101 · Updated 2 years ago
- ☆13 · Updated last year
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆36 · Updated 8 months ago
- A list of recent adversarial attack and defense papers (including those on large language models) ☆37 · Updated this week
- This is the source code for Data-free Backdoor. The paper was accepted at the 32nd USENIX Security Symposium (USENIX Security 2023). ☆31 · Updated last year
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 5 years ago
- TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification. ☆338 · Updated 3 months ago
- An up-to-date collection of papers on LLM watermarking ☆13 · Updated last year
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆21 · Updated last year
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023 Oral] ☆58 · Updated last year
- A reproduction of the Neural Cleanse paper; it is genuinely simple and effective. Posted on okaland. ☆30 · Updated 3 years ago
- ☆21 · Updated 3 weeks ago
- Anti-Backdoor learning (NeurIPS 2021) ☆82 · Updated last year
- Universal Adversarial Perturbations for Vision-Language Pre-trained Models ☆13 · Updated this week
- Official Repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆125 · Updated last year
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆31 · Updated 4 months ago
- ☆51 · Updated 3 years ago
- This is an official repository of ``VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models'' (NeurIPS 2… ☆50 · Updated last week
- ☆13 · Updated last year
- Code for the paper "Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity" ☆55 · Updated last year
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability ☆24 · Updated 2 years ago
- Composite Backdoor Attacks Against Large Language Models ☆13 · Updated 11 months ago