LTS4 / hold-me-tight
Source code of "Hold me tight! Influence of discriminative features on deep network boundaries"
☆22 · Updated 3 years ago
Alternatives and similar repositories for hold-me-tight
Users interested in hold-me-tight are comparing it to the repositories listed below.
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated 2 years ago
- PRIME: A Few Primitives Can Boost Robustness to Common Corruptions ☆42 · Updated 2 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆36 · Updated 3 years ago
- Code for "Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources". (IC… ☆38 · Updated 4 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 3 years ago
- ☆38 · Updated 4 years ago
- A Closer Look at Accuracy vs. Robustness ☆89 · Updated 4 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆45 · Updated 5 years ago
- Learning perturbation sets for robust machine learning ☆65 · Updated 3 years ago
- Implementation of Confidence-Calibrated Adversarial Training (CCAT). ☆45 · Updated 4 years ago
- [ICLR 2022] "Sparsity Winning Twice: Better Robust Generalization from More Efficient Training" by Tianlong Chen*, Zhenyu Zhang*, Pengjun… ☆39 · Updated 3 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 3 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 5 years ago
- ☆16 · Updated 3 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Official implementation of the paper "ClusTR: Clustering Training for Robustness" ☆20 · Updated 3 years ago
- ICML 2020, Estimating Generalization under Distribution Shifts via Domain-Invariant Representations ☆23 · Updated 5 years ago
- Code for the paper "SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness" (NeurIPS 2021) ☆21 · Updated 2 years ago
- ☆25 · Updated 5 years ago
- Certified Patch Robustness via Smoothed Vision Transformers ☆42 · Updated 3 years ago
- Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training" ☆34 · Updated 5 years ago
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆36 · Updated 4 years ago
- [NeurIPS 2020] The official repository of "AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows". ☆47 · Updated last year
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- Self-Distillation with weighted ground-truth targets; ResNet and Kernel Ridge Regression ☆18 · Updated 3 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- Code for the paper "Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization" (https://arxiv.org/abs/2… ☆23 · Updated 4 years ago
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" ☆42 · Updated 4 years ago
- ICLR 2021, Fair Mixup: Fairness via Interpolation ☆56 · Updated 3 years ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs ☆97 · Updated 4 years ago