xiaozhanguva / Cost-Sensitive-Robustness
A general method for training cost-sensitive robust classifiers
☆22 · Updated 6 years ago
Alternatives and similar repositories for Cost-Sensitive-Robustness
Users interested in Cost-Sensitive-Robustness are comparing it to the repositories listed below.
- Code for the AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ☆55 · Updated 3 years ago
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆62 · Updated 6 years ago
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Updated 7 years ago
- ☆22 · Updated 5 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆45 · Updated 5 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 6 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples ☆68 · Updated 5 months ago
- Code for the paper: Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization (https://arxiv.org/abs/2… ☆23 · Updated 5 years ago
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated 6 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- ☆19 · Updated 4 years ago
- A Closer Look at Accuracy vs. Robustness ☆88 · Updated 4 years ago
- Learning perturbation sets for robust machine learning ☆65 · Updated 4 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" (NeurIPS 2019) ☆47 · Updated 3 years ago
- Implementation of "What it Thinks is Important is Important: Robustness Transfers through Input Gradients" (CVPR 2020 Oral) ☆16 · Updated 2 years ago
- Code for the Adversarial Image Detectors and a Saliency Map ☆12 · Updated 8 years ago
- Analysis of Adversarial Logit Pairing ☆60 · Updated 7 years ago
- An Algorithm to Quantify Robustness of Recurrent Neural Networks ☆49 · Updated 5 years ago
- Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training" ☆34 · Updated 5 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated 5 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 5 years ago
- Research prototype of deletion-efficient k-means algorithms ☆24 · Updated 5 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 4 years ago
- Adversarial Robustness on In- and Out-Distribution Improves Explainability ☆12 · Updated 3 years ago
- Reverse Cross Entropy for Adversarial Detection (NeurIPS 2018) ☆47 · Updated 4 years ago
- Interval attacks (adversarial ML) ☆21 · Updated 6 years ago
- ICML'20: SIGUA: Forgetting May Make Learning with Noisy Labels More Robust ☆17 · Updated 5 years ago
- Official PyTorch implementation for the ICCV 2019 paper "Fooling Network Interpretation in Image Classification" ☆24 · Updated 6 years ago
- Code reproducing the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 4 years ago