pralab / secml-torch
SecML-Torch: A Library for Robustness Evaluation of Deep Learning Models
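Robustness evaluation, the library's stated purpose, typically means measuring how a model's accuracy degrades under adversarial perturbations. As a minimal illustration of the idea (a sketch in plain PyTorch, not SecML-Torch's own API; the model and data here are a hypothetical toy setup), a single-step FGSM attack:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps):
    """Single-step FGSM: perturb x by eps in the direction that
    increases the classification loss. Illustrative sketch only;
    plain PyTorch, not SecML-Torch's API."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input by eps along the sign of the loss gradient,
    # then clamp back to the valid [0, 1] input range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy robustness check: compare clean vs. adversarial accuracy of an
# untrained linear classifier on random data (hypothetical setup).
torch.manual_seed(0)
model = nn.Linear(4, 2)
x, y = torch.rand(16, 4), torch.randint(0, 2, (16,))
x_adv = fgsm_attack(model, x, y, eps=0.1)
clean_acc = (model(x).argmax(1) == y).float().mean()
adv_acc = (model(x_adv).argmax(1) == y).float().mean()
```

A real evaluation would run stronger, multi-step attacks (e.g. PGD or the minimum-norm attacks listed below) and report accuracy as a function of perturbation budget; that orchestration is what libraries like SecML-Torch automate.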
Related projects
Alternatives and complementary repositories for secml-torch
- A Python library for Secure and Explainable Machine Learning
- A curated list of academic events on AI Security & Privacy
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models
- Code for "On Adaptive Attacks to Adversarial Example Defenses"
- Code for ML-Doctor
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018)
- A toolbox for backdoor attacks.
- Implementations of data poisoning attacks against neural networks and related defenses.
- A curated list of papers & resources on backdoor attacks and defenses in deep learning.
- Foolbox implementation of the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints".
- A paper list for localized adversarial patch research
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- Code for the paper "Label-Only Membership Inference Attacks"
- A repository to quickly generate synthetic data and associated trojaned deep learning models
- Simple PyTorch implementations of adversarial training methods on CIFAR-10.
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)
- Code for the paper "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers"
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
- A unified benchmark problem for data poisoning attacks
- The official implementation of the USENIX Security '23 paper "Meta-Sift": ten minutes or less to find a 1000-size or larger clean subset on …
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
- Official implementation of the CVPR 2022 Oral paper "Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks".
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods).