yuanchun-li / ModelDiff
☆19 · Updated 4 years ago
Alternatives and similar repositories for ModelDiff
Users interested in ModelDiff are comparing it to the repositories listed below.
- Code release for DeepJudge (S&P'22) ☆52 · Updated 2 years ago
- ☆68 · Updated 5 years ago
- Code release for RobOT (ICSE'21) ☆15 · Updated 2 years ago
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning." ☆24 · Updated 3 years ago
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated last year
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models ☆19 · Updated last month
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆15 · Updated 10 months ago
- ☆26 · Updated 2 years ago
- ☆13 · Updated 4 years ago
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆32 · Updated last year
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆51 · Updated 3 years ago
- [NDSS 2025] CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling ☆15 · Updated 10 months ago
- ☆84 · Updated 4 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆48 · Updated 7 years ago
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated 3 years ago
- [Oakland 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks ☆26 · Updated 7 months ago
- Implementation of the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur… ☆11 · Updated 3 years ago
- ☆18 · Updated 3 years ago
- Source code for HufuNet; the paper is accepted by IEEE TDSC ☆25 · Updated 2 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated 2 years ago
- Code for ML Doctor ☆91 · Updated last year
- ☆27 · Updated 3 years ago
- Source code for MEA-Defender; the paper is accepted by the IEEE Symposium on Security and Privacy (S&P) 2024 ☆29 · Updated 2 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- Defending against Model Stealing via Verifying Embedded External Features ☆38 · Updated 3 years ago
- Implementation of TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems (https://arxiv.org/pdf/190… ☆18 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 4 months ago
- ☆18 · Updated 4 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆29 · Updated 4 years ago