ai-data-model-safety / ai-data-model-safety.github.io
☆49 · Updated last year
Alternatives and similar repositories for ai-data-model-safety.github.io
Users interested in ai-data-model-safety.github.io are comparing it to the repositories listed below.
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆45 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆225 · Updated last week
- A list of recent papers about adversarial learning ☆304 · Updated last week
- Official implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆27 · Updated 10 months ago
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models ☆66 · Updated 6 months ago
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆66 · Updated 10 months ago
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆302 · Updated last month
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆235 · Updated last year
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆286 · Updated last year
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" ☆65 · Updated 2 years ago
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆20 · Updated last year
- 😎 Up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources. ☆485 · Updated 2 weeks ago
- This GitHub repository summarizes research papers on AI security from the four top academic conferences. ☆176 · Updated 8 months ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆105 · Updated 3 years ago
- Code for the ACM MM 2024 paper "White-box Multimodal Jailbreaks Against Large Vision-Language Models" ☆31 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆42 · Updated 2 years ago
- TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification (the underlying idea is sketched after this list). ☆437 · Updated 3 weeks ago
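For readers unfamiliar with the transferability setting that TransferAttack targets, here is a minimal sketch of the idea, not the framework's actual API: craft adversarial examples on a white-box surrogate model (one-step FGSM on a torchvision ResNet-18 in this example) and measure how often they also fool a different, unseen target model (a VGG-16). The model pairing, the `fgsm` helper, and the random batch are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Surrogate (white-box) and target (held-out) models -- an illustrative
# pairing; transfer-attack frameworks support many model combinations.
surrogate = models.resnet18(weights="IMAGENET1K_V1").eval()
target = models.vgg16(weights="IMAGENET1K_V1").eval()

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Hypothetical batch; a real evaluation would use labeled ImageNet images
# (plus the usual normalization transform, omitted here for brevity).
x = torch.rand(4, 3, 224, 224)
y = torch.randint(0, 1000, (4,))

x_adv = fgsm(surrogate, x, y)

# Transferability: the fraction of surrogate-crafted adversarial examples
# that also push the *target* model away from the label y.
with torch.no_grad():
    transfer_rate = (target(x_adv).argmax(dim=1) != y).float().mean()
print(f"transfer rate on target model: {transfer_rate.item():.2%}")
```

Stronger transfer methods (momentum, input diversity, ensembles of surrogates) follow the same evaluation loop; only the crafting step changes.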