jhjacobsen / fully-invertible-revnet
☆31 · Updated 4 years ago
Alternatives and similar repositories for fully-invertible-revnet
Users interested in fully-invertible-revnet are comparing it to the repositories listed below.
- ☆88 · Updated 11 months ago
- A Closer Look at Accuracy vs. Robustness · ☆89 · Updated 4 years ago
- Codebase for "Exploring the Landscape of Spatial Robustness" (ICML'19, https://arxiv.org/abs/1712.02779) · ☆26 · Updated 5 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) · ☆100 · Updated 3 years ago
- Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness" · ☆160 · Updated 5 years ago
- Fair Mixup: Fairness via Interpolation (ICLR 2021) · ☆56 · Updated 3 years ago
- Code for "Robustness May Be at Odds with Accuracy" · ☆91 · Updated 2 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] · ☆32 · Updated 5 years ago
- Project page for the paper "Interpreting Adversarially Trained Convolutional Neural Networks" · ☆66 · Updated 5 years ago
- ☆21 · Updated 11 months ago
- Code for the paper "Understanding Generalization through Visualizations" · ☆61 · Updated 4 years ago
- Provably defending pretrained classifiers, including the Azure, Google, AWS, and Clarifai APIs · ☆97 · Updated 4 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples · ☆45 · Updated 5 years ago
- [ICML'20] Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models · ☆26 · Updated 11 months ago
- PyTorch implementation of regularization methods for deep networks obtained via kernel methods · ☆22 · Updated 5 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] · ☆18 · Updated 7 years ago
- Notebooks for reproducing the paper "Computer Vision with a Single (Robust) Classifier" · ☆128 · Updated 5 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries" · ☆81 · Updated 11 months ago
- Implementation of methods proposed in "Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks" (NeurIPS 2019) · ☆35 · Updated 5 years ago
- Scaleable input gradient regularization · ☆22 · Updated 6 years ago
- A way to achieve uniform confidence far away from the training data · ☆38 · Updated 4 years ago
- Code implementing the experiments described in the NeurIPS 2018 paper "With Friends Like These, Who Needs Adversaries?" · ☆13 · Updated 4 years ago
- Adversarially Robust Generalization Just Requires More Unlabeled Data · ☆11 · Updated 5 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations · ☆49 · Updated 5 years ago
- Public code for the paper "Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks" · ☆34 · Updated 6 years ago
- Code for NeurIPS 2019 paper · ☆47 · Updated 5 years ago
- ☆55 · Updated 4 years ago
- Geometric Certifications of Neural Nets · ☆42 · Updated 2 years ago
- [JMLR] TRADES + random smoothing for certifiable robustness · ☆14 · Updated 4 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) · ☆90 · Updated 4 years ago