google-research / sam
☆588 · Updated 4 months ago
Alternatives and similar repositories for sam:
Users who are interested in sam are comparing it to the libraries listed below. (Short illustrative sketches of the SAM update step and of linear CKA follow the list.)
- SAM: Sharpness-Aware Minimization (PyTorch) ☆1,846 · Updated last year
- PyTorch implementation of Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning ☆490 · Updated 2 years ago
- A PyTorch implementation of Sharpness-Aware Minimization for Efficiently Improving Generalization ☆136 · Updated 4 years ago
- A LARS implementation in PyTorch ☆343 · Updated 5 years ago
- Compare neural networks by their feature similarity ☆354 · Updated last year
- An All-MLP solution for Vision, from Google AI ☆1,015 · Updated 6 months ago
- A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization ☆619 · Updated 2 years ago
- Implementation of ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks, ICML 2021 ☆141 · Updated 3 years ago
- Reproduce CKA: Similarity of Neural Network Representations Revisited (see the linear-CKA sketch after this list) ☆298 · Updated 4 years ago
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Find explanation at tourdeml.github.io/blog/ ☆345 · Updated last year
- PyTorch implementation of SimCLR: supports multi-GPU training and closely reproduces results ☆202 · Updated 10 months ago
- Approximating neural network loss landscapes in low-dimensional parameter subspaces for PyTorch ☆323 · Updated last year
- Usable implementation of "Bootstrap Your Own Latent" self-supervised learning, from DeepMind, in PyTorch ☆1,799 · Updated 8 months ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,070 · Updated 2 years ago
- PyTorch implementation of SimSiam (https://arxiv.org/abs/2011.10566) ☆1,183 · Updated 2 years ago
- Code for Neural Architecture Search without Training (ICML 2021) ☆465 · Updated 3 years ago
- A PyTorch implementation of the paper "Exploring Simple Siamese Representation Learning" ☆827 · Updated 2 years ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆811 · Updated 2 years ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ☆520 · Updated 4 months ago
- Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral) ☆343 · Updated 7 months ago
- AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty ☆984 · Updated this week
- Official Implementation of Early-Learning Regularization Prevents Memorization of Noisy Labels ☆295 · Updated last year
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆751 · Updated 10 months ago
- ☆320 · Updated this week
- Unofficial PyTorch Reimplementation of RandAugment ☆633 · Updated 2 years ago
- higher is a PyTorch library allowing users to obtain higher-order gradients over losses spanning training loops rather than individual training steps ☆1,610 · Updated 2 years ago
- Open-source code for paper "Dataset Distillation" ☆792 · Updated 2 years ago
- PyHessian is a PyTorch library for second-order-based analysis and training of neural networks ☆726 · Updated 11 months ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆483 · Updated 3 years ago
- Code for the Convolutional Vision Transformer (ConViT) ☆466 · Updated 3 years ago
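
Since this page is about google-research/sam and several of the entries above implement Sharpness-Aware Minimization, here is a minimal PyTorch sketch of the SAM two-step update: climb to the adversarially perturbed weights w + ε, compute the gradient there, then step the base optimizer from the original weights. The function name `sam_step` and its arguments are illustrative assumptions, not the API of any listed repository.

```python
import torch

def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    """One sharpness-aware update (illustrative sketch): perturb the weights
    along the gradient direction, recompute the gradient at the perturbed
    point, then step the base optimizer from the restored weights."""
    base_optimizer.zero_grad()

    # 1) Gradient of the loss at the current weights w.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # 2) Move to w + e, with e = rho * grad / ||grad|| over all parameters.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    scale = rho / (grad_norm + 1e-12)
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = p.grad * scale
            p.add_(e)
            perturbations.append((p, e))

    # 3) Gradient of the loss at the perturbed weights w + e.
    base_optimizer.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # 4) Restore w and let the base optimizer step with the SAM gradient.
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)
    base_optimizer.step()
    return loss.item()
```

In a training loop this would replace the usual `loss.backward(); optimizer.step()` pair, at the cost of roughly two forward/backward passes per update.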
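
Two of the entries above reproduce CKA for comparing network representations; a minimal linear-CKA sketch follows. The name `linear_cka` and its arguments are illustrative assumptions, not the API of those repositories.

```python
import torch

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n x d1) and Y (n x d2)
    collected on the same n examples; features are mean-centered first."""
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = torch.linalg.matrix_norm(Y.T @ X, ord='fro') ** 2
    norm_x = torch.linalg.matrix_norm(X.T @ X, ord='fro')
    norm_y = torch.linalg.matrix_norm(Y.T @ Y, ord='fro')
    return (cross / (norm_x * norm_y)).item()
```

For example, comparing the penultimate-layer activations of two trained networks on the same batch yields a score in [0, 1], where 1 means the representations agree up to rotation and isotropic scaling.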