Ighina / CERTIFAI
A Python implementation of the CERTIFAI framework for machine-learning model explainability, as described in https://www.aies-conference.com/2020/wp-content/papers/099.pdf
☆11 · Updated 3 years ago
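The AIES 2020 paper behind this repository generates counterfactual explanations for a black-box model with a custom genetic algorithm: evolve candidate points near the input and keep the closest one whose predicted class flips. A minimal sketch of that idea follows; `toy_model` and `counterfactual` are hypothetical stand-ins for illustration, not this repository's API.

```python
import random

random.seed(0)  # deterministic run for the sketch

def toy_model(x):
    # Hypothetical black-box classifier: class 1 if the feature sum exceeds 1.
    return 1 if sum(x) > 1.0 else 0

def counterfactual(model, x, pop_size=50, generations=100, sigma=0.3):
    """Genetic search for a point close to x with a different predicted class."""
    target = model(x)

    def fitness(c):
        # Only class-flipping candidates are valid; among those, closer is better.
        if model(c) == target:
            return float("-inf")
        dist = sum((a - b) ** 2 for a, b in zip(c, x)) ** 0.5
        return -dist

    # Initial population: Gaussian perturbations of the input.
    pop = [[xi + random.gauss(0, sigma) for xi in x] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Refill the population by mutating the fitter half.
        pop = parents + [
            [pi + random.gauss(0, sigma) for pi in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    best = max(pop, key=fitness)
    return best if model(best) != target else None

cf = counterfactual(toy_model, [0.2, 0.3])
```

On success, `cf` is a nearby point that the model classifies differently from the original input; the distance between the two hints at how robust the prediction is, which is the paper's link between explainability and robustness.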
Alternatives and similar repositories for CERTIFAI
Users interested in CERTIFAI are comparing it to the libraries listed below.
- Adversarial detection and defense for deep learning systems using robust feature alignment ☆18 · Updated 5 years ago
- Codes for reproducing the experimental results in "Proper Network Interpretability Helps Adversarial Robustness in Classification", publi… ☆13 · Updated 5 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆85 · Updated 3 years ago
- Foolbox implementation for NeurIPS 2021 Paper: "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints" ☆24 · Updated 3 years ago
- KNN Defense Against Clean Label Poisoning Attacks ☆13 · Updated 4 years ago
- Craft poisoned data using MetaPoison ☆54 · Updated 4 years ago
- Creating and defending against adversarial examples ☆41 · Updated 7 years ago
- ☆11 · Updated 2 years ago
- Reference implementation for "Explanations Can Be Manipulated and Geometry Is to Blame" ☆37 · Updated 3 years ago
- Adversarial attacks including DeepFool and C&W ☆13 · Updated 6 years ago
- Code for model-targeted poisoning ☆12 · Updated 2 years ago
- Adversarial Black box Explainer generating Latent Exemplars ☆11 · Updated 3 years ago
- PrivGAN: Protecting GANs from membership inference attacks at low cost ☆36 · Updated last year
- Implementation for "Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks" ☆14 · Updated 5 years ago
- Detecting Adversarial Examples in Deep Neural Networks ☆68 · Updated 7 years ago
- This repository contains the official PyTorch implementation of the GeoDA algorithm. GeoDA is a black-box attack to generate adversarial exam… ☆34 · Updated 4 years ago
- Visualization of Adversarial Examples ☆34 · Updated 7 years ago
- ☆26 · Updated 6 years ago
- Codes for ICCV 2021 paper "AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Met… ☆12 · Updated 3 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆332 · Updated last year
- ☆23 · Updated 2 years ago
- ☆31 · Updated 4 years ago
- ☆19 · Updated 2 years ago
- ☆16 · Updated 4 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- Attack benchmark repository ☆21 · Updated last month
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks ☆63 · Updated 4 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- ConvexPolytopePosioning ☆37 · Updated 6 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆111 · Updated last year