Ighina / CERTIFAI
A Python implementation of the CERTIFAI framework for machine learning model explainability, as described in https://www.aies-conference.com/2020/wp-content/papers/099.pdf
☆11 · Updated 3 years ago
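CERTIFAI generates counterfactual explanations: minimally changed inputs that flip a model's prediction. The paper uses a custom genetic algorithm; the sketch below is a deliberately simplified random-search illustration of the same idea, not code from the linked repository, and all function and parameter names are hypothetical.

```python
import numpy as np

def counterfactual_search(model_predict, x, n_candidates=500, scale=1.0, seed=0):
    """Simplified counterfactual search (hypothetical; the CERTIFAI paper
    uses a genetic algorithm): sample perturbations of x and return the
    closest candidate whose predicted class differs from x's."""
    rng = np.random.default_rng(seed)
    original_class = model_predict(x)
    candidates = x + rng.normal(0.0, scale, size=(n_candidates, x.shape[0]))
    flipped = [c for c in candidates if model_predict(c) != original_class]
    if not flipped:
        return None  # no counterfactual found within this sample budget
    # smallest distance to the original point = "closest" explanation
    return min(flipped, key=lambda c: np.linalg.norm(c - x))

# Toy classifier: class 1 if the features sum to a positive value
predict = lambda v: int(v.sum() > 0)
x = np.array([-1.0, -0.5])          # classified as 0
cf = counterfactual_search(predict, x)
```

The distance that is minimized doubles as CERTIFAI's robustness signal: the further the nearest counterfactual, the harder the point is to flip.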
Alternatives and similar repositories for CERTIFAI
Users interested in CERTIFAI are comparing it to the libraries listed below.
- Attack benchmark repository ☆21 · Updated 2 months ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated 3 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆37 · Updated 3 years ago
- Adversarial detection and defense for deep learning systems using robust feature alignment ☆18 · Updated 5 years ago
- Creating and defending against adversarial examples ☆41 · Updated 7 years ago
- Detection of adversarial examples using influence functions and nearest neighbors ☆37 · Updated 3 years ago
- Foolbox implementation for the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints" ☆24 · Updated 3 years ago
- Codes for reproducing the experimental results in "Proper Network Interpretability Helps Adversarial Robustness in Classification", publi… ☆13 · Updated 5 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆334 · Updated last year
- Craft poisoned data using MetaPoison ☆54 · Updated 4 years ago
- KNN Defense Against Clean Label Poisoning Attacks ☆13 · Updated 4 years ago
- Implementation for "Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks" ☆14 · Updated 5 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- ☆16 · Updated 4 years ago
- Implementation of Adversarial Debiasing in PyTorch to address Gender Bias ☆31 · Updated 5 years ago
- Detect adversarial images from intermediate features in distance space ☆12 · Updated 7 years ago
- Code for model-targeted poisoning ☆12 · Updated 2 years ago
- Source of the ECCV 2022 paper "LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity" ☆18 · Updated 11 months ago
- Code for the Adversarial Image Detectors and a Saliency Map ☆12 · Updated 8 years ago
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples ☆19 · Updated 3 years ago
- Codes for the ICCV 2021 paper "AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Met…" ☆12 · Updated 3 years ago
- ☆67 · Updated 6 years ago
- ☆26 · Updated 7 years ago
- Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data", presented at the NeurIPS 2019 Workshop on Robust AI in Financial … ☆16 · Updated 4 years ago
- Implementation of the Model Inversion Attack introduced with Model Inversion Attacks that Exploit Confidence Information and Basic Counte… ☆85 · Updated 2 years ago
- This repository provides a PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… ☆23 · Updated 5 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆46 · Updated 6 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- This repository contains the implementation of three adversarial example attack methods FGSM, IFGSM, MI-FGSM and one Distillation as defe… ☆138 · Updated 5 years ago
- Datasets derived from US census data ☆276 · Updated last year
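Several of the repositories above implement gradient-sign attacks such as FGSM, which perturbs an input by a small step in the direction of the loss gradient's sign. As a minimal illustration of the idea only (a toy logistic model with an analytic gradient, not code from any listed repository):

```python
import numpy as np

# Toy logistic model: p(y=1|x) = sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
predict = lambda x: int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method for the toy model above.
    For logistic regression with cross-entropy loss, the input
    gradient is (p - y) * w, so no autodiff is needed here."""
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)   # L-infinity step of size eps

x = np.array([1.0, 0.0])             # classified as 1 (w.x = 2)
x_adv = fgsm(x, y=1, eps=1.5)        # pushed across the decision boundary
```

IFGSM and MI-FGSM, mentioned in one of the entries above, iterate this step (the latter with gradient momentum) instead of taking it once.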