emanuele / kernel_two_sample_test
A Python implementation of the kernel two-sample test as in Gretton et al. 2012 (JMLR).
☆33 · Updated 9 years ago
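For reference, the statistic the repository implements can be sketched as the biased MMD² estimator with an RBF kernel, with significance assessed by a permutation test. This is a minimal NumPy sketch under stated assumptions (a fixed bandwidth `sigma` and these function names are illustrative, not the repo's API):

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased estimate of MMD^2: mean(Kxx) + mean(Kyy) - 2*mean(Kxy)."""
    Kxx = rbf_kernel(X, X, sigma)
    Kyy = rbf_kernel(Y, Y, sigma)
    Kxy = rbf_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def permutation_test(X, Y, sigma=1.0, n_perm=200, rng=None):
    """p-value for the two-sample test by permuting the pooled sample."""
    rng = np.random.default_rng(rng)
    Z = np.vstack([X, Y])
    n = len(X)
    observed = mmd2_biased(X, Y, sigma)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        count += mmd2_biased(Z[idx[:n]], Z[idx[n:]], sigma) >= observed
    # add-one smoothing keeps the p-value strictly positive
    return (count + 1) / (n_perm + 1)
```

For two clearly separated samples, e.g. `X ~ N(0, I)` and `Y ~ N(1, I)` with 50 points each, `permutation_test(X, Y)` returns a small p-value, while `mmd2_biased(X, X)` is exactly zero.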
Alternatives and similar repositories for kernel_two_sample_test:
Users interested in kernel_two_sample_test are also comparing it to the repositories listed below.
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆62 · Updated 5 years ago
- Certifying Some Distributional Robustness with Principled Adversarial Training (https://arxiv.org/abs/1710.10571) ☆45 · Updated 7 years ago
- Public code for a paper "Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks." ☆34 · Updated 6 years ago
- Code for the paper "Understanding Measures of Uncertainty for Adversarial Example Detection" ☆61 · Updated 6 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆32 · Updated 4 years ago
- Interpretation of Neural Network is Fragile ☆36 · Updated last year
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 5 years ago
- Source code for Learning Deep Kernels for Non-Parametric Two-Sample Tests (ICML 2020) ☆49 · Updated 3 years ago
- Overcoming Catastrophic Forgetting by Incremental Moment Matching (IMM) ☆35 · Updated 7 years ago
- [ICML 2019] ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation ☆54 · Updated 2 weeks ago
- Code for "Robustness May Be at Odds with Accuracy" ☆91 · Updated 2 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples ☆44 · Updated 5 years ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆61 · Updated 5 years ago
- ☆87 · Updated 9 months ago
- Semi-supervised learning for adversarial robustness (https://arxiv.org/pdf/1905.13736.pdf) ☆141 · Updated 5 years ago
- Code for the paper at https://arxiv.org/abs/1806.06317 ☆24 · Updated 5 years ago
- Code for Invariant Rep. Without Adversaries (NIPS 2018) ☆35 · Updated 5 years ago
- Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the…" ☆55 · Updated 2 years ago
- Related materials for robust and explainable machine learning ☆48 · Updated 7 years ago
- Tiny tutorial on https://arxiv.org/abs/1703.04730 ☆13 · Updated 5 years ago
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Updated 6 years ago
- Improving the Generalization of Adversarial Training with Domain Adaptation ☆33 · Updated 6 years ago
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017) ☆109 · Updated 7 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated 5 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations ☆49 · Updated 5 years ago
- ☆13 · Updated 6 years ago
- Implementation of Invariant Risk Minimization (https://arxiv.org/abs/1907.02893) ☆86 · Updated 5 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- Code for the paper "Dimensionality-Driven Learning with Noisy Labels" (ICML 2018) ☆58 · Updated 10 months ago
- Code for the paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality" ☆123 · Updated 4 years ago