wronnyhuang / gen-viz
Code for the paper "Understanding Generalization through Visualizations"
☆64 · Updated 4 years ago
Alternatives and similar repositories for gen-viz
Users interested in gen-viz are comparing it to the repositories listed below.
- ☆55 · Updated 5 years ago
- A Closer Look at Accuracy vs. Robustness ☆88 · Updated 4 years ago
- ☆38 · Updated 4 years ago
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆30 · Updated 5 years ago
- Training vision models with full-batch gradient descent and regularization ☆39 · Updated 2 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- ☆88 · Updated last year
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) ☆42 · Updated last year
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks ☆50 · Updated 4 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 4 years ago
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 5 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- ☆62 · Updated 4 years ago
- ☆59 · Updated 2 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks. ☆43 · Updated 2 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆36 · Updated 3 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆45 · Updated 5 years ago
- Winning Solution of the NeurIPS 2020 Competition on Predicting Generalization in Deep Learning ☆41 · Updated 4 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 3 years ago
- Computing various measures and generalization bounds on convolutional and fully connected networks ☆35 · Updated 6 years ago
- ICLR 2021, Fair Mixup: Fairness via Interpolation ☆59 · Updated 4 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆72 · Updated last year
- [ICML'20] Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models. ☆26 · Updated last year
- Coresets via Bilevel Optimization ☆67 · Updated 5 years ago
- A way to achieve uniform confidence far away from the training data. ☆38 · Updated 4 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆47 · Updated 2 years ago
- ☆33 · Updated 4 years ago
- Official code for "In Search of Robust Measures of Generalization" (NeurIPS 2020) ☆28 · Updated 4 years ago
- Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians (ICML 2019) ☆16 · Updated 6 years ago