tssai-lab / CrowdwiseKit
A Tool Kit for Crowdsourcing Learning
☆29 Updated last month
Alternatives and similar repositories for CrowdwiseKit:
Users interested in CrowdwiseKit are comparing it to the libraries listed below.
- ☆14 Updated last year
- Source code for Data-free Backdoor; the paper was accepted at the 32nd USENIX Security Symposium (USENIX Security 2023). ☆30 Updated last year
- A PyTorch implementation of some backdoor attack algorithms, including BadNets, SIG, FIBA, FTrojan, and others. ☆19 Updated 4 months ago
- An up-to-date collection of papers on LLM watermarking. ☆13 Updated last year
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models". ☆58 Updated 2 years ago
- This GitHub repository summarizes research papers on AI security from the top four academic conferences. ☆112 Updated this week
- [ECCV 2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajector… ☆24 Updated 5 months ago
- A list of recent adversarial attack and defense papers (including those on large language models) ☆37 Updated this week
- ☆12 Updated 10 months ago
- TransferAttack is a PyTorch framework for boosting the adversarial transferability of image classification attacks. ☆349 Updated 4 months ago
- Code for "Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks" (TIFS 2024) ☆12 Updated last year
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆124 Updated 5 months ago
- ☆51 Updated 4 months ago
- ☆81 Updated 3 years ago
- Composite Backdoor Attacks Against Large Language Models ☆13 Updated last year
- ☆51 Updated 3 years ago
- FGSM implemented in PyTorch (a minimal sketch of the method appears after this list) ☆29 Updated 3 years ago
- ☆20 Updated 8 months ago
- Spectrum simulation attack (ECCV 2022 Oral) for boosting the transferability of adversarial examples ☆102 Updated 2 years ago
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks… ☆20 Updated 7 months ago
- This is the official implementation of our paper "Untargeted Backdoor Attack against Object Detection". ☆25 Updated 2 years ago
- ☆220 Updated 11 months ago
- [AAAI 2023] Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network ☆28 Updated 6 months ago
- Revisiting Transferable Adversarial Images (arXiv) ☆123 Updated last month
- Invisible Backdoor Attack with Sample-Specific Triggers ☆94 Updated 2 years ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆81 Updated last year
- Official Repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆127 Updated last year
- Universal Adversarial Perturbations for Vision-Language Pre-trained Models ☆13 Updated 3 weeks ago
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability ☆24 Updated 2 years ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆201 Updated last year
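
One of the entries above is a PyTorch implementation of FGSM (the Fast Gradient Sign Method). As a rough illustration of what such a repository provides, here is a minimal sketch of a one-step FGSM attack; the `model`, `images`, and `labels` names and the `epsilon` value are illustrative assumptions, not code taken from any of the listed repositories, and inputs are assumed to lie in the [0, 1] range.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """One-step FGSM: move each input along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)        # classification loss on clean inputs
    loss.backward()                                       # gradient of the loss w.r.t. the inputs
    adv_images = images + epsilon * images.grad.sign()    # x_adv = x + eps * sign(grad_x L)
    return adv_images.clamp(0.0, 1.0).detach()            # keep pixels in the valid [0, 1] range
```

The attack is single-step by design; iterative variants such as PGD repeat the same gradient-sign update with a smaller step size and project back into the epsilon-ball after each step.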