Ighina / CERTIFAI
A Python implementation of the CERTIFAI framework for machine learning model explainability, as discussed in https://www.aies-conference.com/2020/wp-content/papers/099.pdf
☆11 · Updated 3 years ago
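At its core, the linked paper searches for counterfactual points near an input that a black-box model classifies differently, using a genetic algorithm; the distance to the counterfactual then feeds its robustness and fairness analysis. Below is a rough sketch of that idea only, not this repository's API: `predict`, the population size, and the mutation scale are all illustrative assumptions.

```python
import numpy as np

def counterfactual_search(predict, x, n_pop=200, n_gen=50, sigma=0.5, seed=0):
    """Toy genetic search for a counterfactual of x under a black-box
    classifier, in the spirit of CERTIFAI (illustrative, not the repo's API).

    predict: maps an (n, d) array to an (n,) array of class labels.
    Returns the closest point found whose predicted label differs from x's.
    """
    rng = np.random.default_rng(seed)
    original = predict(x[None, :])[0]
    best = None      # closest label-changing candidate seen so far
    centre = x
    for _ in range(n_gen):
        # mutate around the current centre, keep label-changing candidates
        pop = centre + sigma * rng.standard_normal((n_pop, x.shape[0]))
        changed = pop[predict(pop) != original]
        if len(changed) > 0:
            cand = changed[np.argmin(np.linalg.norm(changed - x, axis=1))]
            if best is None or np.linalg.norm(cand - x) < np.linalg.norm(best - x):
                best = cand
                centre = cand          # exploit: search near the best candidate
        sigma *= 0.95                  # anneal the mutation scale
    return best  # None if no counterfactual was found
```

With a scikit-learn model, `predict` could simply be `clf.predict`; the distance from `x` to the returned counterfactual is, roughly, the quantity the paper builds its robustness score on.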
Alternatives and similar repositories for CERTIFAI
Users interested in CERTIFAI are comparing it to the libraries listed below.
- Adversarial detection and defense for deep learning systems using robust feature alignment ☆17 · Updated 4 years ago
- KNN Defense Against Clean Label Poisoning Attacks ☆12 · Updated 3 years ago
- ☆30 · Updated 3 years ago
- Implementation of Adversarial Debiasing in PyTorch to address Gender Bias ☆31 · Updated 4 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Code for reproducing the experimental results in "Proper Network Interpretability Helps Adversarial Robustness in Classification", publi… ☆13 · Updated 5 years ago
- Implementation of https://github.com/PurduePAML/TrojanNN ☆9 · Updated 6 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated 5 years ago
- ☆11 · Updated 4 years ago
- Code for "Neuron Shapley: Discovering the Responsible Neurons" ☆26 · Updated last year
- ☆9 · Updated 4 years ago
- ☆16 · Updated 3 years ago
- ☆11 · Updated 3 years ago
- ConvexPolytopePosioning ☆35 · Updated 5 years ago
- ☆11 · Updated 2 years ago
- Detection of adversarial examples using influence functions and nearest neighbors ☆36 · Updated 2 years ago
- Craft poisoned data using MetaPoison ☆52 · Updated 4 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆36 · Updated 2 years ago
- Foolbox implementation for NeurIPS 2021 Paper: "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints" ☆25 · Updated 3 years ago
- ☆25 · Updated 6 years ago
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆12 · Updated 2 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated 4 years ago
- Creating and defending against adversarial examples ☆42 · Updated 6 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆319 · Updated 7 months ago
- Code for model-targeted poisoning ☆12 · Updated last year
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆31 · Updated 4 years ago
- ☆32 · Updated 10 months ago
- This repository contains implementations of 4 adversarial attacks: FGSM, Basic Iterative Method, Projected Gradient Descent (Madry's Attac… ☆31 · Updated 6 years ago (a minimal FGSM sketch follows this list)
- Certified Removal from Machine Learning Models ☆67 · Updated 3 years ago
- Implementation for "Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks" ☆14 · Updated 4 years ago
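For orientation, the first attack named in the FGSM/BIM/PGD entry above is essentially a one-liner. Here is a minimal, self-contained PyTorch sketch of FGSM; it is a generic illustration of the method, not code from that repository.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: take a single step of size eps along
    the sign of the loss gradient w.r.t. the input (Goodfellow et al.)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # perturb each pixel by +/- eps, then clip back to the valid image range
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```

The Basic Iterative Method and PGD named in the same entry repeat this step several times with a smaller step size, re-projecting onto the eps-ball after each step.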