TooTouch / SID
PyTorch reimplementation of "Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain"
☆11 · Updated 3 years ago
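For context, SID flags adversarial examples by checking whether a classifier and a dual classifier trained on a spatial-transform (wavelet) view of the input disagree more than they would on clean data. The sketch below is a minimal illustration of that inconsistency check, not code from this repository: the model handles, the symmetric-KL inconsistency measure, and the `threshold` value are all assumptions (the actual method trains a learned detector on features from both branches).

```python
import torch
import torch.nn.functional as F

def sensitivity_inconsistency_score(x, primary_model, dual_model):
    """Score how differently two classifiers react to the same input.

    `primary_model` sees the raw image; `dual_model` is assumed to be
    trained on a spatial-transform (e.g. wavelet low-frequency) view.
    Adversarial examples tend to behave inconsistently across the two
    domains, so a large divergence suggests an attack.
    """
    with torch.no_grad():
        p_primary = F.softmax(primary_model(x), dim=1)
        p_dual = F.softmax(dual_model(x), dim=1)
    # Symmetric KL divergence as a simple per-sample inconsistency measure
    kl_fwd = F.kl_div(p_dual.log(), p_primary, reduction="none").sum(dim=1)
    kl_rev = F.kl_div(p_primary.log(), p_dual, reduction="none").sum(dim=1)
    return kl_fwd + kl_rev

# Hypothetical usage: flag inputs whose score exceeds a threshold
# tuned on clean validation data.
# scores = sensitivity_inconsistency_score(batch, model, wavelet_model)
# is_adversarial = scores > threshold
```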
Alternatives and similar repositories for SID
Users who are interested in SID are comparing it to the repositories listed below.
- Towards Machine Unlearning Benchmarks: Forgetting the Personal Identities in Facial Recognition Systems ☆65 · Updated 7 months ago
- [NeurIPS 2021] Official PyTorch Implementation for "Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bott… ☆49 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- Robust natural language watermarking using invariant features ☆28 · Updated 2 years ago
- Code for "Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks" (NeurIPS 2022) ☆10 · Updated 2 years ago
- ☆52 · Updated last year
- ☆53 · Updated 2 years ago
- ☆24 · Updated 9 months ago
- [ICCV 2021] We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆46 · Updated 3 years ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆37 · Updated 8 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆27 · Updated last year
- Consistency Regularization for Adversarial Robustness (AAAI 2022) ☆53 · Updated 4 years ago
- ☆13 · Updated 4 years ago
- Official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆24 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- CVPR 2022 ☆27 · Updated last year
- Code repository for the CVPR 2024 paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" ☆25 · Updated last year
- ☆18 · Updated 3 years ago
- [CVPR 2023] Official PyTorch Implementation for "Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust N… ☆45 · Updated 2 years ago
- Robustify Black-Box Models (ICLR'22 Spotlight) ☆24 · Updated 2 years ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆60 · Updated last year
- ☆43 · Updated 2 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated last year
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated last year
- ☆10 · Updated last year
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 3 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated 10 months ago
- ☆20 · Updated last month
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models ☆81 · Updated 11 months ago
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆10 · Updated last year