Ighina / CERTIFAI
A Python implementation of the CERTIFAI framework for machine-learning model explainability, as described in https://www.aies-conference.com/2020/wp-content/papers/099.pdf
☆11 · Updated 3 years ago
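CERTIFAI itself is model-agnostic: it runs a custom genetic algorithm that searches for the counterfactual closest to a given input, i.e. the nearest point the black-box model classifies differently, and derives its robustness score (CERScore) from that distance. The repository's actual API is not reproduced here; the sketch below only illustrates the underlying genetic search, and the function name `counterfactual_ga`, the `model_predict` callable, and all hyperparameters are assumptions made for this example.

```python
import numpy as np

def counterfactual_ga(model_predict, x, low, high, pop_size=200,
                      generations=50, mutation_scale=0.1, seed=None):
    """Genetic-algorithm counterfactual search in the spirit of CERTIFAI.

    model_predict: callable mapping an (n, d) array to n class labels.
    x: 1-D input instance; low/high: per-feature bounds, shape (d,).
    Returns the closest point found whose predicted class differs
    from the prediction on x, or None if no counterfactual was found.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    original = model_predict(x[None, :])[0]

    # Start from candidates sampled uniformly inside the feature bounds.
    pop = rng.uniform(low, high, size=(pop_size, d))

    for _ in range(generations):
        labels = np.asarray(model_predict(pop))
        dist = np.linalg.norm(pop - x, axis=1)
        # Fitness rewards only true counterfactuals (different class),
        # and among them prefers the ones closest to x.
        fitness = np.where(labels != original, 1.0 / (1e-8 + dist), 0.0)

        # Fitness-proportional selection; fall back to uniform sampling
        # while the population contains no counterfactual yet.
        total = fitness.sum()
        probs = fitness / total if total > 0 else None
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]

        # Single-point crossover between consecutive parent pairs.
        children = parents.copy()
        cuts = rng.integers(1, max(d, 2), size=pop_size // 2)
        for i, c in enumerate(cuts):
            a, b = 2 * i, 2 * i + 1
            children[a, c:] = parents[b, c:]
            children[b, c:] = parents[a, c:]

        # Gaussian mutation, clipped back into the valid feature range.
        children += rng.normal(0.0, mutation_scale, size=children.shape)
        pop = np.clip(children, low, high)

    labels = np.asarray(model_predict(pop))
    dist = np.linalg.norm(pop - x, axis=1)
    mask = labels != original
    return pop[mask][np.argmin(dist[mask])] if mask.any() else None
```

Because the search only queries `model_predict`, it works with any classifier; the distance from the returned counterfactual to `x` is the quantity CERTIFAI aggregates into its robustness and fairness measures.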
Alternatives and similar repositories for CERTIFAI
Users interested in CERTIFAI are comparing it to the repositories listed below
- code for model-targeted poisoning ☆12 · Updated 2 years ago
- ☆30 · Updated 4 years ago
- KNN Defense Against Clean Label Poisoning Attacks ☆12 · Updated 4 years ago
- Codes for reproducing the experimental results in "Proper Network Interpretability Helps Adversarial Robustness in Classification", publi… ☆13 · Updated 5 years ago
- Adversarial detection and defense for deep learning systems using robust feature alignment ☆18 · Updated 5 years ago
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" NeurIPS'19 ☆33 · Updated 4 years ago
- General fair regression subject to demographic parity constraint. Paper appeared in ICML 2019. ☆16 · Updated 5 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated 2 years ago
- Python package to create adversarial agents for membership inference attacks against machine learning models ☆46 · Updated 6 years ago
- Code for "Neuron Shapley: Discovering the Responsible Neurons"☆27Updated last year
- A Python framework for the quantitative evaluation of eXplainable AI methods☆17Updated 2 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping☆10Updated 5 years ago
- ☆18Updated 3 years ago
- ☆16Updated 4 years ago
- ☆19Updated 2 years ago
- Craft poisoned data using MetaPoison☆53Updated 4 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning☆28Updated 3 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" (its baseline attack is sketched after this list) ☆39 · Updated 6 years ago
- Implementation of Adversarial Debiasing in PyTorch to address Gender Bias ☆31 · Updated 5 years ago
- A library for running membership inference attacks against ML models ☆151 · Updated 2 years ago
- TabularBench: Adversarial robustness benchmark for tabular data ☆19 · Updated 3 weeks ago
- reference implementation for "explanations can be manipulated and geometry is to blame" ☆37 · Updated 3 years ago
- verifying machine unlearning by backdooring ☆20 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- ☆13 · Updated 2 years ago
- Certified Removal from Machine Learning Models ☆69 · Updated 4 years ago
- code for TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models" ☆31 · Updated 3 years ago
- ☆32 · Updated last year
- Codes for ICCV 2021 paper "AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Met… ☆12 · Updated 3 years ago
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆14 · Updated 4 years ago
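Several of the repositories above center on membership inference (the CSF 2018 overfitting paper, the pruning study, and the attack libraries). As a quick orientation, and not any listed library's API, here is a minimal sketch of the confidence-thresholding baseline from Yeom et al. (CSF 2018); the function name, the `model_proba` callable, and the threshold value are illustrative assumptions.

```python
import numpy as np

def confidence_threshold_attack(model_proba, X, y, threshold=0.9):
    """Membership inference via confidence thresholding, in the spirit of
    Yeom et al. (CSF 2018): overfit models tend to be more confident on
    their training points, so high confidence in the true label is
    treated as evidence of membership.

    model_proba: callable returning an (n, k) array of class probabilities.
    X, y: samples to test and their integer class labels.
    Returns a boolean array, True = predicted training-set member.
    """
    probs = np.asarray(model_proba(X))
    true_conf = probs[np.arange(len(y)), y]  # confidence in the true class
    return true_conf >= threshold

# Illustrative usage with a scikit-learn style model (hypothetical names):
# guesses = confidence_threshold_attack(clf.predict_proba, X_mix, y_mix, 0.9)
# attack_accuracy = np.mean(guesses == membership_labels)
```

In practice the threshold is calibrated on data whose membership is known; the stronger attacks in the libraries above replace this single threshold with shadow models or per-example calibration.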