sarthmit / Mod_Arch
☆34 · Updated last year
Alternatives and similar repositories for Mod_Arch
Users who are interested in Mod_Arch are comparing it to the libraries listed below.
- ☆96 · Updated 3 years ago
- ☆85 · Updated last year
- ☆81 · Updated last year
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models" Tianlong Chen, Jon… ☆68 · Updated 3 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆74 · Updated 4 years ago
- Code for "Supermasks in Superposition" ☆124 · Updated 2 years ago
- The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We… ☆46 · Updated 2 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ☆109 · Updated 3 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated 2 years ago
- Code to reproduce the results for Compositional Attention ☆59 · Updated 3 years ago
- [ICLR 2024] (EvALign-ICL Benchmark) Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context … ☆22 · Updated last year
- Code for the paper "Can contrastive learning avoid shortcut solutions?" (NeurIPS 2021) ☆47 · Updated 3 years ago
- Supplementary code for Editable Neural Networks, an ICLR 2020 submission. ☆46 · Updated 5 years ago
- Benchmark for lifelong learning research ☆118 · Updated 4 years ago
- An implementation of the 2021 paper by Geoffrey Hinton, "How to represent part-whole hierarchies in a neural network", in PyTorch. ☆57 · Updated 4 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 4 years ago
- ☆109 · Updated 2 years ago
- ☆36 · Updated 5 years ago
- Patching open-vocabulary models by interpolating weights ☆91 · Updated 2 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 3 years ago
- This repository holds code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models. ☆72 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- Implementation of Memformer, a memory-augmented Transformer, in PyTorch ☆126 · Updated 5 years ago
- Repository for the paper "Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning" ☆36 · Updated 2 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆50 · Updated 6 months ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers". ☆111 · Updated 4 years ago
- ☆37 · Updated 2 years ago
- Official repo for Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks. Accepted at NeurIPS 2020. ☆34 · Updated 5 years ago