dydjw9 / Efficient_SAM
☆58 · Updated 2 years ago
Alternatives and similar repositories for Efficient_SAM
Users interested in Efficient_SAM are comparing it to the repositories listed below.
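For background: Efficient_SAM and several of the repositories below (GSAM, Weight-Averaged Sharpness-Aware Minimization, Towards Understanding Sharpness-Aware Minimization) are variants of Sharpness-Aware Minimization (SAM), which first takes an ascent step to the approximate worst-case weights within an ℓ2 ball of radius ρ, then updates the original weights with the gradient computed at that perturbed point. Below is a minimal PyTorch sketch of that two-step update; it is an illustration only, not Efficient_SAM's actual API, and the function name `sam_step` and the default `rho=0.05` are assumptions.

```python
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One SAM update (illustrative sketch, not the API of any repo listed here)."""
    # 1) Gradient of the loss at the current weights w.
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]

    # 2) Ascent step e = rho * g / ||g||: move w toward the local worst case.
    grad_norm = torch.norm(torch.stack([p.grad.norm(p=2) for p in params]))
    perturbations = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append(e)
    model.zero_grad()

    # 3) Gradient at the perturbed weights w + e (second forward/backward pass).
    loss_fn(model(x), y).backward()

    # 4) Restore w, then update it with the sharpness-aware gradient in p.grad.
    with torch.no_grad():
        for p, e in zip(params, perturbations):
            p.sub_(e)
    base_opt.step()
    base_opt.zero_grad()
```

Note the two forward/backward passes per update: plain SAM roughly doubles training cost, and reducing that overhead is the point of the "efficient" SAM variants this page clusters around.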
- PyTorch repository for the ICLR 2022 paper GSAM, which improves generalization (e.g. +3.8% top-1 accuracy on ImageNet with ViT-B/32) ☆144 · Updated 3 years ago
- ☆34 · Updated 4 months ago
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆88 · Updated 3 years ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆57 · Updated 2 years ago
- The official PyTorch implementation of Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t… ☆83 · Updated 3 years ago
- Official implementation of the paper Gradient Matching for Domain Generalization ☆122 · Updated 3 years ago
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆59 · Updated 3 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- [NeurIPS'21] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Z… ☆125 · Updated 3 years ago
- ☆57 · Updated 3 years ago
- PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures (CVPR 2022) ☆109 · Updated 3 years ago
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆50 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆82 · Updated 3 years ago
- GitHub code for the paper Maximum Class Separation as Inductive Bias in One Matrix. arXiv link: https://arxiv.org/abs/2206.08704 ☆29 · Updated 2 years ago
- Official Implementation for PlugIn Inversion ☆16 · Updated 3 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work?", a paper which appeared in the NeurIPS 2021 proceedings ☆33 · Updated 2 years ago
- ☆110 · Updated 2 years ago
- PRIME: A Few Primitives Can Boost Robustness to Common Corruptions ☆42 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- Robust Contrastive Learning Using Negative Samples with Diminished Semantics (NeurIPS 2021) ☆39 · Updated 3 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆56 · Updated 3 years ago
- Official PyTorch implementation of “Flexible Dataset Distillation: Learn Labels Instead of Images” ☆42 · Updated 4 years ago
- ImageNetV2 PyTorch Dataset ☆41 · Updated 2 years ago
- Whitening for Self-Supervised Representation Learning | Official repository ☆131 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- ☆34 · Updated last year
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- Repo for the paper: "Agree to Disagree: Diversity through Disagreement for Better Transferability" ☆36 · Updated 2 years ago