hongyanz / TRADES-smoothing
[JMLR] TRADES + random smoothing for certifiable robustness
☆14 · Updated 4 years ago
Alternatives and similar repositories for TRADES-smoothing:
Users interested in TRADES-smoothing are comparing it to the repositories listed below.
- ☆19 · Updated 5 years ago
- Code for the NeurIPS 2019 paper "Asymmetric Valleys: Beyond Sharp and Flat Local Minima" ☆14 · Updated 5 years ago
- Code base for SRSGD ☆28 · Updated 5 years ago
- SGD and Ordered SGD code for deep learning, SVM, and logistic regression ☆35 · Updated 4 years ago
- ☆13 · Updated 6 years ago
- SmoothOut: Smoothing Out Sharp Minima to Improve Generalization in Deep Learning ☆23 · Updated 6 years ago
- Encodings for neural architecture search ☆29 · Updated 4 years ago
- Local search for NAS ☆18 · Updated 4 years ago
- ☆12 · Updated 5 years ago
- Towards Robust ResNet: A Small Step but a Giant Leap (IJCAI 2019) ☆9 · Updated 4 years ago
- [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference" ☆24 · Updated 3 years ago
- Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training" ☆34 · Updated 5 years ago
- Implementation of Information Dropout ☆39 · Updated 7 years ago
- Distributional and Outlier Robust Optimization (ICML 2021) ☆27 · Updated 3 years ago
- Official code for "In Search of Robust Measures of Generalization" (NeurIPS 2020) ☆28 · Updated 4 years ago
- ☆38 · Updated 3 years ago
- Geometric Certifications of Neural Nets ☆41 · Updated 2 years ago
- Code for the paper "Understanding Generalization through Visualizations" ☆60 · Updated 4 years ago
- Interpolation between Residual and Non-Residual Networks, ICML 2020. https://arxiv.org/abs/2006.05749 ☆26 · Updated 4 years ago
- ☆35 · Updated last year
- Official adversarial mixup resynthesis repository ☆35 · Updated 5 years ago
- ☆34 · Updated 4 years ago
- ☆55 · Updated 4 years ago
- ☆19 · Updated 4 years ago
- ☆21 · Updated 5 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- Implementation of methods proposed in "Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks" (NeurIPS 2019) ☆34 · Updated 4 years ago
- ☆22 · Updated 2 years ago
- Codebase for the paper "A Gradient Flow Framework for Analyzing Network Pruning" ☆21 · Updated 4 years ago
- Low-variance, efficient, and unbiased gradient estimation for optimizing models with binary latent variables (ICLR 2019) ☆28 · Updated 6 years ago