emanuele / kernel_two_sample_test
A Python implementation of the kernel two-sample test of Gretton et al. (2012, JMLR).
☆34 · Updated 9 years ago
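For orientation, here is a minimal sketch of the kernel two-sample (MMD) test that the repository implements: an RBF-kernel estimate of squared MMD combined with a permutation test for the p-value. The function names (`rbf_kernel`, `mmd2_biased`, `mmd_permutation_test`) and the fixed bandwidth `sigma` are illustrative assumptions, not the repository's actual API; Gretton et al. (2012) also derive asymptotic null distributions that avoid permutation entirely.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples X and Y."""
    Kxx = rbf_kernel(X, X, sigma)
    Kyy = rbf_kernel(Y, Y, sigma)
    Kxy = rbf_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def mmd_permutation_test(X, Y, sigma=1.0, n_permutations=1000, rng=None):
    """Two-sample test: p-value of the observed MMD^2 under random relabelings.

    sigma is a fixed illustrative bandwidth; in practice a heuristic such as
    the median pairwise distance is commonly used to set it.
    """
    rng = np.random.default_rng(rng)
    m = len(X)
    Z = np.vstack([X, Y])          # pooled sample, shuffled under the null
    observed = mmd2_biased(X, Y, sigma)
    count = 0
    for _ in range(n_permutations):
        idx = rng.permutation(len(Z))
        stat = mmd2_biased(Z[idx[:m]], Z[idx[m:]], sigma)
        if stat >= observed:
            count += 1
    # Add-one smoothing keeps the p-value strictly positive.
    return observed, (count + 1) / (n_permutations + 1)

# Example: two Gaussian samples with shifted means should be distinguished.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))
stat, p = mmd_permutation_test(X, Y, sigma=1.0, n_permutations=500)
print(f"MMD^2 = {stat:.4f}, permutation p-value = {p:.3f}")
```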
Alternatives and similar repositories for kernel_two_sample_test
Users interested in kernel_two_sample_test are comparing it to the repositories listed below.
- Public code for the paper "Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks." ☆35 · Updated 7 years ago
- Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" ☆228 · Updated 6 years ago
- Related materials for robust and explainable machine learning ☆48 · Updated 8 years ago
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ☆55 · Updated 3 years ago
- Keras implementation of DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆62 · Updated 6 years ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated last year
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆61 · Updated 6 years ago
- Code for the paper "Understanding Measures of Uncertainty for Adversarial Example Detection" ☆62 · Updated 7 years ago
- Learning kernels to maximize the power of MMD tests ☆211 · Updated 8 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 5 years ago
- Code for "Robustness May Be at Odds with Accuracy" ☆91 · Updated 2 years ago
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017) ☆111 · Updated 7 years ago
- Example code for the paper "Understanding deep learning requires rethinking generalization" ☆178 · Updated 5 years ago
- Adversarially Robust Neural Network on MNIST. ☆63 · Updated 4 years ago
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Updated 7 years ago
- ☆88 · Updated last year
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated 5 years ago
- Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness" ☆164 · Updated 5 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆44 · Updated 5 years ago
- Certifying Some Distributional Robustness with Principled Adversarial Training (https://arxiv.org/abs/1710.10571) ☆45 · Updated 7 years ago
- Gold Loss Correction ☆88 · Updated 7 years ago
- Semi-supervised learning for adversarial robustness (https://arxiv.org/pdf/1905.13736.pdf) ☆141 · Updated 5 years ago
- OD-test: A Less Biased Evaluation of Out-of-Distribution (Outlier) Detectors (PyTorch) ☆62 · Updated 2 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- Datasets for the paper "Adversarial Examples Are Not Bugs, They Are Features" ☆187 · Updated 5 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations. ☆49 · Updated 6 years ago
- ☆70 · Updated 6 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆80 · Updated last year
- The Ultimate Reference for Out-of-Distribution Detection with Deep Neural Networks ☆118 · Updated 6 years ago
- Release of CIFAR-10.1, a new test set for CIFAR-10. ☆225 · Updated 5 years ago