mldbai / tensorflow-models
Models built with TensorFlow
☆27 · Updated 8 years ago
Alternatives and similar repositories for tensorflow-models:
Users interested in tensorflow-models are comparing it to the repositories listed below.
- Deflecting Adversarial Attacks with Pixel Deflection ☆70 · Updated 6 years ago
- ☆29 · Updated 6 years ago
- A PyTorch baseline attack example for the NIPS 2017 adversarial competition ☆85 · Updated 7 years ago
- Supervised NN for pre-training, with group normalization and weight standardization ☆30 · Updated 5 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆30 · Updated 5 years ago
- PyTorch implementation of Real Time Image Saliency for Black Box Classifiers (https://arxiv.org/abs/1705.07857) ☆59 · Updated 5 years ago
- Robustness vs Accuracy Survey on ImageNet ☆98 · Updated 3 years ago
- This repository contains binaries for the multiple-teacher approach to learning differentially private ML models: https://arxiv.org/abs/161… ☆10 · Updated 8 years ago
- A collection of methods to fool deep neural networks ☆76 · Updated 7 months ago
- Code and manuscript for "Efficient Per-Example Gradient Computations in Convolutional Neural Networks" ☆29 · Updated 5 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 3 years ago
- Simple classifier for SVHN images, based on Keras with the TensorFlow backend ☆17 · Updated 7 years ago
- See https://github.com/ilyakava/gan for results on ImageNet 128. Code for a Multi-Hinge Loss with K+1 Conditional GANs ☆24 · Updated 4 years ago
- ☆66 · Updated 6 years ago
- ☆61 · Updated 2 years ago
- Automatic and Simultaneous Adjustment of Learning Rate and Momentum for Stochastic Gradient Descent ☆45 · Updated 4 years ago
- Implementation of the Deep Frank-Wolfe Algorithm in PyTorch ☆62 · Updated 4 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- Reproduction of "Thermometer Encoding: One Hot Way To Resist Adversarial Examples" in PyTorch ☆16 · Updated 7 years ago
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- Official PyTorch implementation for our ICCV 2019 paper "Fooling Network Interpretation in Image Classification" ☆24 · Updated 5 years ago
- IBD: Interpretable Basis Decomposition for Visual Explanation ☆52 · Updated 6 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 5 years ago
- Implementation of Data-free Knowledge Distillation for Deep Neural Networks (on arXiv!) ☆81 · Updated 7 years ago
- This repository is for the NeurIPS 2018 spotlight paper "Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples" ☆31 · Updated 3 years ago
- PyTorch implementation of the CVPR'19 paper "On the Intrinsic Dimensionality of Image Representation" ☆23 · Updated 5 years ago
- Blog: https://medium.com/neuralmachine/knowledge-distillation-dc241d7c2322 ☆60 · Updated 6 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆32 · Updated 4 years ago
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" [NeurIPS'19] ☆34 · Updated 3 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆81 · Updated 9 months ago