ayulockin / LossLandscape
Explores the ideas presented in Deep Ensembles: A Loss Landscape Perspective (https://arxiv.org/abs/1912.02757) by Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan.
☆62 · Updated 4 years ago
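The two ideas the repository explores can be sketched compactly: a deep ensemble averages the predictive distributions of independently trained members, and a loss-landscape slice evaluates the loss along a linear interpolation between two weight vectors to expose the barrier between modes. A minimal NumPy sketch, with toy logits, toy weights, and a toy quadratic loss standing in for real trained networks (none of these values come from the repository):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy logits from three independently trained ensemble members
# (shape: members x examples x classes).
member_logits = np.array([
    [[2.0, 0.5, 0.1]],
    [[1.5, 1.0, 0.2]],
    [[0.3, 2.2, 0.4]],
])

# Deep ensemble: average the members' predictive distributions.
ensemble_probs = softmax(member_logits).mean(axis=0)

# Loss-landscape slice: linearly interpolate between two weight
# vectors w0, w1 and evaluate a (toy) loss at each point.
w0, w1 = np.array([0.0, 0.0]), np.array([3.0, -2.0])
loss = lambda w: ((w - np.array([1.0, 1.0])) ** 2).sum()  # toy quadratic loss
alphas = np.linspace(0.0, 1.0, 5)
barrier = [loss((1 - a) * w0 + a * w1) for a in alphas]
```

In the paper's experiments the interpolation is done between the weights of real trained networks and the loss is the training or test loss; the qualitative finding is that independently trained solutions sit in distinct modes separated by high-loss barriers, which is why ensembling them helps.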
Alternatives and similar repositories for LossLandscape:
Users interested in LossLandscape are comparing it to the repositories listed below:
- Contains code for the NeurIPS 2020 paper by Pan et al., "Continual Deep Learning by Functional Regularisation of Memorable Past" ☆44 · Updated 4 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Collection of snippets for PyTorch users ☆26 · Updated 2 years ago
- Bayesianize: A Bayesian neural network wrapper in PyTorch ☆88 · Updated 8 months ago
- Supporting code for the paper "Dangers of Bayesian Model Averaging under Covariate Shift" ☆33 · Updated 2 years ago
- CIFAR-5m dataset ☆38 · Updated 4 years ago
- ☆19 · Updated 2 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 3 years ago
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks ☆49 · Updated 3 years ago
- Last-layer Laplace approximation code examples ☆82 · Updated 3 years ago
- Code implementing the AND-mask and geometric mean for gradient-based optimization, from the paper "Learning explanations that are hard …" ☆39 · Updated 4 years ago
- Code for "Supermasks in Superposition" ☆121 · Updated last year
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆41 · Updated last year
- ContinualAI Wiki: a collaborative wiki on Continual/Lifelong Machine Learning ☆49 · Updated 2 years ago
- ☆34 · Updated 3 years ago
- Codebase for Learning Invariances in Neural Networks ☆93 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- ☆44 · Updated 4 years ago
- Code for "Depth Uncertainty in Neural Networks" (https://arxiv.org/abs/2006.08437) ☆72 · Updated last year
- ☆35 · Updated last year
- Official PyTorch implementation of "Meta-Calibration: Learning of Model Calibration Using Differentiable Expected Calibration Error" ☆32 · Updated last year
- Code to reproduce experiments from "Does Knowledge Distillation Really Work", a paper which appeared in the 2021 NeurIPS proceedings ☆33 · Updated last year
- ☆15 · Updated 4 years ago
- ☆134 · Updated 3 years ago
- ☆67 · Updated 5 years ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ☆25 · Updated 3 years ago
- ☆34 · Updated 3 years ago
- Official code repository for "La-MAML: Look-Ahead Meta-Learning for Continual Learning" ☆74 · Updated 4 years ago
- Code for the CVPR 2021 paper "Understanding Failures of Deep Networks via Robust Feature Extraction" ☆35 · Updated 2 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago