RobertCsordas / modules
The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We develop a method for analyzing emerging functional modularity in neural networks based on differentiable weight masks and use it to point out important issues in current-day neural networks.
☆46 · Updated last year
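The masking idea described above can be sketched in a few lines. The following is a minimal illustrative NumPy version, not the repository's actual implementation (which is a PyTorch codebase): the trained weights are frozen, and per-weight mask logits are optimized by gradient descent so the sigmoid-gated network reproduces the full network's outputs, while a small sparsity penalty (the `lam` value here is an assumed choice) pushes gates on unneeded weights toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen weights of a tiny trained linear "network" and some probe data.
W = rng.normal(size=(4, 3))          # weights stay fixed throughout
x = rng.normal(size=(8, 4))          # probe inputs
y = x @ W                            # outputs of the unmasked network

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Trainable per-weight mask logits; sigmoid(s) in (0, 1) gates each weight.
s = np.zeros_like(W)
lr, lam = 1.0, 1e-3                  # step size and sparsity penalty (assumed values)

for _ in range(500):
    m = sigmoid(s)
    err = x @ (W * m) - y            # deviation of the masked network
    # d(mse)/dm via the chain rule; the constant lam penalizes open gates.
    grad_m = (x.T @ err) * W / len(x) + lam
    s -= lr * grad_m * m * (1.0 - m) # chain rule through the sigmoid

mask = sigmoid(s) > 0.5              # binarize: which weights does the task need?
final_mse = float(np.mean((x @ (W * mask) - y) ** 2))
```

After training, `mask` marks the subset of weights the probe task relies on; in the paper this analysis is what exposes (or fails to expose) functionally modular weight groups. The real method trains stochastic binary gates (Gumbel-sigmoid) rather than this deterministic relaxation.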
Alternatives and similar repositories for modules
Users interested in modules are comparing it to the repositories listed below.
- ☆62 · Updated 4 years ago
- ☆34 · Updated 4 years ago
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici… ☆108 · Updated last year
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆31 · Updated 5 years ago
- ☆46 · Updated 2 years ago
- Official code for "In Search of Robust Measures of Generalization" (NeurIPS 2020) ☆28 · Updated 4 years ago
- ☆55 · Updated 5 years ago
- ☆34 · Updated 4 years ago
- ☆65 · Updated last year
- "Predict, then Interpolate: A Simple Algorithm to Learn Stable Classifiers" (ICML 2021) ☆18 · Updated 4 years ago
- Code to implement the AND-mask and geometric mean for gradient-based optimization, from the paper "Learning explanations that are hard … ☆40 · Updated 4 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆72 · Updated last year
- [ICLR'22] Self-supervised learning of optimally robust representations for domain shift. ☆24 · Updated 3 years ago
- ☆37 · Updated last year
- ☆31 · Updated last year
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Group-conditional DRO to alleviate spurious correlations ☆15 · Updated 4 years ago
- Energy-Based Models for Continual Learning, official repository (PyTorch) ☆42 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- Code to reproduce the results for Compositional Attention ☆60 · Updated 2 years ago
- ☆20 · Updated 5 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) ☆59 · Updated 3 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- ☆84 · Updated last year
- Compositional Explanations of Neurons, NeurIPS 2020, https://arxiv.org/abs/2006.14032 ☆25 · Updated 4 years ago
- The Pitfalls of Simplicity Bias in Neural Networks [NeurIPS 2020] (http://arxiv.org/abs/2006.07710v2) ☆41 · Updated last year
- [ICML'21] Improved Contrastive Divergence Training of Energy-Based Models ☆63 · Updated 3 years ago
- ☆108 · Updated last year