T0hsakar1n / RAPID
Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks"
☆18 · Updated 7 months ago
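The paper studies difficulty calibration for membership inference attacks. As a rough illustration of the underlying idea only (not the repository's code or the paper's exact method), the sketch below computes a difficulty-calibrated membership score by comparing the target model's loss on a sample against a reference model's loss; the models and data here are toy placeholders.

```python
import torch
import torch.nn.functional as F

def calibrated_membership_score(target_model, reference_model, x, y):
    """Illustrative difficulty-calibrated membership score (sketch only).

    Hard samples get high loss under both models, so the gap between the
    reference loss and the target loss separates members from non-members
    better than the raw target loss alone.
    """
    with torch.no_grad():
        target_loss = F.cross_entropy(target_model(x), y)
        reference_loss = F.cross_entropy(reference_model(x), y)
    # Higher score -> sample more likely in the target's training set.
    return (reference_loss - target_loss).item()

# Hypothetical usage with toy linear classifiers and a single sample.
target = torch.nn.Linear(8, 3)
reference = torch.nn.Linear(8, 3)
x = torch.randn(1, 8)
y = torch.tensor([1])
print(calibrated_membership_score(target, reference, x, y))
```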
Alternatives and similar repositories for RAPID
Users interested in RAPID are comparing it to the repositories listed below.
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆39 · Updated 8 months ago
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆23 · Updated last year
- A toolbox for backdoor attacks. ☆22 · Updated 2 years ago
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. ☆56 · Updated 6 months ago
- A list of recent papers about adversarial learning. ☆180 · Updated this week
- Code for the ACM MM 2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models. ☆29 · Updated 6 months ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks, and defenses against them (no longer maintained). ☆265 · Updated 6 months ago
- Official implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆20 · Updated 3 months ago
- This is the source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024. ☆24 · Updated last year
- Composite Backdoor Attacks Against Large Language Models. ☆16 · Updated last year
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo//124.220.228.133:11107 ☆17 · Updated 11 months ago
- Official code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users (NeurIPS 2024). ☆16 · Updated 8 months ago
- Code for paper: "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024. ☆32 · Updated 11 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024). ☆34 · Updated 2 weeks ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning. ☆16 · Updated last year
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆213 · Updated last year
- Code for Backdoor Attacks Against Dataset Distillation. ☆35 · Updated 2 years ago
- [CVPR 2023] The official implementation of our paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆23 · Updated 2 years ago
- This is an official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆55 · Updated 3 months ago