ayulockin / LossLandscape
Explores the ideas presented in Deep Ensembles: A Loss Landscape Perspective (https://arxiv.org/abs/1912.02757) by Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan.
☆65 · Updated 4 years ago
Alternatives and similar repositories for LossLandscape
Users interested in LossLandscape are comparing it to the libraries listed below.
- CIFAR-5m dataset ☆39 · Updated 4 years ago
- Code for "Supermasks in Superposition" ☆124 · Updated last year
- Bayesianize: a Bayesian neural network wrapper in PyTorch ☆88 · Updated last year
- Codebase for Learning Invariances in Neural Networks ☆95 · Updated 2 years ago
- Last-layer Laplace approximation code examples ☆82 · Updated 3 years ago
- ☆133 · Updated 4 years ago
- ☆36 · Updated last year
- ☆19 · Updated 3 years ago
- ☆37 · Updated 3 years ago
- ☆15 · Updated 5 years ago
- Contains notebooks for the PAR tutorial at CVPR 2021 ☆36 · Updated 4 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- ☆45 · Updated 4 years ago
- Collection of snippets for PyTorch users ☆25 · Updated 3 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated last month
- Code to implement the AND-mask and geometric mean for gradient-based optimization, from the paper "Learning explanations that are hard …" ☆39 · Updated 4 years ago
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks ☆49 · Updated 4 years ago
- A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification f…" ☆46 · Updated 5 years ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ☆160 · Updated last year
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ☆25 · Updated 3 years ago
- ☆67 · Updated 6 years ago
- Ἀνατομή, a PyTorch library for analyzing representations of neural networks ☆64 · Updated 3 weeks ago
- Official PyTorch implementation of "Meta-Calibration: Learning of Model Calibration Using Differentiable Expected Calibration Error" ☆36 · Updated last year
- Create animations of the optimization trajectories of neural nets ☆156 · Updated last year
- Contains code for the NeurIPS 2020 paper by Pan et al., "Continual Deep Learning by Functional Regularisation of Memorable Past" ☆44 · Updated 4 years ago
- ☆100 · Updated 3 years ago
- Implements sharpness-aware minimization (https://arxiv.org/abs/2010.01412) in TensorFlow 2 ☆60 · Updated 3 years ago
- Reusable BatchBALD implementation ☆79 · Updated last year
- Visualizing the loss landscape of fully connected neural networks ☆45 · Updated 2 years ago