GhadaSokar / WAST
[NeurIPS2022] Where to Pay Attention in Sparse Training for Feature Selection?
☆12 · Updated 2 years ago
Alternatives and similar repositories for WAST:
Users interested in WAST are comparing it to the repositories listed below.
- Official repository of "Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models" [ICML 2023] ☆16 · Updated 3 months ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Official PyTorch implementation of Federated Learning with Positive and Unlabeled Data ☆10 · Updated 2 years ago
- A NumPy and PyTorch implementation of CKA-similarity with CUDA support ☆90 · Updated 3 years ago
- ☆20 · Updated 3 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated last year
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Code for testing DCT plus Sparse (DCTpS) networks ☆14 · Updated 3 years ago
- Code for "Surgical Fine-Tuning Improves Adaptation to Distribution Shifts", published at ICLR 2023 ☆29 · Updated last year
- ☆35 · Updated 2 years ago
- ☆26 · Updated last year
- [ICLR 2022] Official code repository for "TRGP: Trust Region Gradient Projection for Continual Learning" ☆21 · Updated 2 years ago
- Reimplementation of Visualizing the Loss Landscape of Neural Nets with PyTorch 1.8 ☆27 · Updated 2 years ago
- Reproducing RigL (ICML 2020) as part of the ML Reproducibility Challenge 2020 ☆28 · Updated 3 years ago
- [NeurIPS 2023] Understanding and Improving Feature Learning for Out-of-Distribution Generalization ☆29 · Updated 10 months ago
- Implementation for the NeurIPS 2022 paper "ZIN: When and How to Learn Invariance Without Environment Partition?" ☆22 · Updated 2 years ago
- [NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach — official implementation ☆44 · Updated last year
- Repository for the NeurIPS 2023 paper "Beyond Confidence: Reliable Models Should Also Consider Atypicality" ☆12 · Updated last year
- ☆20 · Updated 3 years ago
- Prospect Pruning: Finding Trainable Weights at Initialization Using Meta-Gradients ☆31 · Updated 3 years ago