facebookresearch / fisher_information_loss
This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information".
☆50 · Updated 3 years ago
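For context, the paper measures how much information a model's released parameters carry about individual training examples via Fisher information. The sketch below is not this repository's API; it is a minimal, hypothetical illustration of computing a per-example empirical Fisher matrix for a toy classifier, with all names (`fisher_information`, the linear model, the data) invented for the example.

```python
# Hypothetical sketch (not the fisher_information_loss API): empirical Fisher
# information of a model's per-example log-likelihood, a quantity related to
# the data-leakage measure studied in the paper. All names are illustrative.
import torch

def fisher_information(model, x, y):
    """Empirical Fisher for one (x, y) pair: g g^T, where g is the gradient of
    the per-example log-likelihood with respect to the model parameters."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))                      # shape (1, num_classes)
    log_lik = torch.distributions.Categorical(logits=logits).log_prob(y).sum()
    grads = torch.autograd.grad(log_lik, list(model.parameters()))
    g = torch.cat([gr.reshape(-1) for gr in grads])     # flatten to a single vector
    return torch.outer(g, g)                            # d x d empirical Fisher estimate

# Toy usage with a linear classifier (20 features, 2 classes).
model = torch.nn.Linear(20, 2)
x, y = torch.randn(20), torch.tensor(1)
F = fisher_information(model, x, y)
print(F.shape)  # torch.Size([42, 42]): 20*2 weights + 2 biases
```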
Alternatives and similar repositories for fisher_information_loss
Users interested in fisher_information_loss are comparing it to the repositories listed below.
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆22 · Updated 3 years ago
- ☆25 · Updated 5 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability". ☆30 · Updated 6 years ago
- ☆38 · Updated 4 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆36 · Updated 3 years ago
- ☆35 · Updated last year
- ☆55 · Updated 4 years ago
- A Closer Look at Accuracy vs. Robustness ☆89 · Updated 4 years ago
- Code for the paper "Understanding Generalization through Visualizations" ☆61 · Updated 4 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆45 · Updated 5 years ago
- Code for the paper "SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness" (NeurIPS 2021) ☆21 · Updated 2 years ago
- PRIME: A Few Primitives Can Boost Robustness to Common Corruptions ☆42 · Updated 2 years ago
- Code for the paper 'On the Connection Between Adversarial Robustness and Saliency Map Interpretability' by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated 6 years ago
- Smooth Adversarial Training ☆67 · Updated 4 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 3 years ago
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated 2 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- ☆87 · Updated 11 months ago
- A fast and efficient way to compute a differentiable bound on the singular values of convolution layers ☆13 · Updated 5 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆32 · Updated 4 years ago
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb… ☆12 · Updated 4 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020). ☆52 · Updated 4 years ago
- CIFAR-5m dataset ☆39 · Updated 4 years ago
- Learning perturbation sets for robust machine learning ☆65 · Updated 3 years ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs ☆97 · Updated 4 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆47 · Updated 2 years ago
- ☆29 · Updated 6 years ago
- Understanding and Improving Fast Adversarial Training [NeurIPS 2020] ☆95 · Updated 3 years ago