MLI-lab / early_stopping_double_descent
Code for reproducing the figures and results in the paper "Early stopping in deep networks: Double descent and how to eliminate it"
☆15 · Updated 3 years ago
Alternatives and similar repositories for early_stopping_double_descent
Users interested in early_stopping_double_descent are comparing it to the repositories listed below.
- ☆37 · Updated 2 years ago
- MDL Complexity computations and experiments from the paper "Revisiting complexity and the bias-variance tradeoff" (☆18 · Updated 2 years ago)
- Code for the intrinsic dimensionality estimate of data representations (☆86 · Updated 5 years ago)
- Code for "Supermasks in Superposition" (☆124 · Updated 2 years ago)
- Gradient Starvation: A Learning Proclivity in Neural Networks (☆61 · Updated 4 years ago)
- A simple algorithm to identify and correct for label shift (☆22 · Updated 7 years ago)
- Winning solution of the NeurIPS 2020 Competition on Predicting Generalization in Deep Learning (☆41 · Updated 4 years ago)
- Code for the paper "Understanding Generalization through Visualizations" (☆65 · Updated 4 years ago)
- An Empirical Study of Invariant Risk Minimization (☆27 · Updated 5 years ago)
- Toy datasets to evaluate algorithms for domain generalization and invariance learning (☆43 · Updated 4 years ago)
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" (☆25 · Updated 4 years ago)
- Computing various norms/measures on over-parametrized neural networks (☆50 · Updated 7 years ago)
- ☆63 · Updated 5 years ago
- Python package for evaluating model calibration in classification (☆20 · Updated 6 years ago)
- Simple data balancing baselines for worst-group-accuracy benchmarks (☆43 · Updated 2 years ago)
- ☆21 · Updated 5 years ago
- Resources and implementations (PyTorch) for information-theoretic concepts in deep learning (☆45 · Updated 6 years ago)
- ☆26 · Updated 5 years ago
- Implementation of methods proposed in "Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks" (NeurIPS 2019) (☆36 · Updated 5 years ago)
- ☆62 · Updated 4 years ago
- Implementation of Information Dropout (☆39 · Updated 8 years ago)
- The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We… (☆46 · Updated 2 years ago)
- CIFAR-5m dataset (☆40 · Updated 5 years ago)
- ☆12 · Updated 6 years ago
- Code for our ICML '19 paper "Neural Network Attributions: A Causal Perspective" (☆51 · Updated 4 years ago)
- ☆43 · Updated 7 years ago
- Guarantees on the behavior of neural networks don't always have to come at the cost of performance (☆29 · Updated 3 years ago)
- Non-Parametric Calibration for Classification (AISTATS 2020) (☆19 · Updated 3 years ago)
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation (☆42 · Updated 5 years ago)
- ModelDiff: A Framework for Comparing Learning Algorithms (☆58 · Updated 2 years ago)