StephanZheng / neural-fingerprinting
☆29 · Updated 6 years ago
Alternatives and similar repositories for neural-fingerprinting
Users interested in neural-fingerprinting are comparing it to the libraries listed below.
- ☆21 · Updated 10 months ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- Codebase for "Exploring the Landscape of Spatial Robustness" (ICML'19, https://arxiv.org/abs/1712.02779). ☆26 · Updated 5 years ago
- ☆87 · Updated 10 months ago
- Analysis of Adversarial Logit Pairing ☆60 · Updated 6 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆32 · Updated 4 years ago
- ☆25 · Updated 5 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆47 · Updated 2 years ago
- ☆18 · Updated 5 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020). ☆52 · Updated 4 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆81 · Updated 10 months ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability". ☆30 · Updated 6 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆45 · Updated 5 years ago
- Code for the paper https://arxiv.org/abs/1902.00407 ☆13 · Updated 6 years ago
- Implementation of our NeurIPS 2019 paper: Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks ☆10 · Updated 5 years ago
- Implementation of Methods Proposed in Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks (NeurIPS 2019) ☆35 · Updated 4 years ago
- This GitHub repository contains the official code for the paper "Evolving Robust Neural Architectures to Defend from Adversarial Attacks…" ☆18 · Updated last year
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations. ☆49 · Updated 5 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 3 years ago
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Updated 6 years ago
- Implementation for What it Thinks is Important is Important: Robustness Transfers through Input Gradients (CVPR 2020 Oral) ☆16 · Updated 2 years ago
- ☆34 · Updated 6 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 6 years ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs ☆97 · Updated 4 years ago
- ☆25 · Updated 5 years ago
- Repository for our ICCV 2019 paper: Adversarial Defense via Learning to Generate Diverse Attacks ☆22 · Updated 3 years ago
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- Notebooks for reproducing the paper "Computer Vision with a Single (Robust) Classifier" ☆128 · Updated 5 years ago
- Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the…" ☆55 · Updated 2 years ago