ehuynh1106 / TinyImageNet-Transformers
Transformers trained on Tiny ImageNet
☆49 · Updated 2 years ago
Alternatives and similar repositories for TinyImageNet-Transformers:
Users interested in TinyImageNet-Transformers are comparing it to the libraries listed below.
- ☆57 · Updated last year
- Implementation of HAT (https://arxiv.org/pdf/2204.00993) ☆48 · Updated 9 months ago
- ☆33 · Updated 2 years ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- This repo shows an easy way to download ImageNet on a remote server. ☆45 · Updated 2 years ago
- This repository provides code for "On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness". ☆45 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆25 · Updated last year
- Official implementation for the paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆79 · Updated 11 months ago
- Official code for the NeurIPS 2022 paper "How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders" ☆57 · Updated last year
- (PyTorch) Training ResNets on ImageNet-100 data ☆54 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- PyTorch implementation of the NeurIPS 2022 paper "Dataset Distillation via Factorization". ☆63 · Updated 2 years ago
- Code for ViTAS: Vision Transformer Architecture Search ☆52 · Updated 3 years ago
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" ☆37 · Updated 6 months ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆55 · Updated last year
- ☆60 · Updated last year
- [CVPR 2022] Official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition". ☆50 · Updated 2 years ago
- Official PyTorch implementation of PS-KD ☆83 · Updated 2 years ago
- Official implementation for Plug-In Inversion ☆15 · Updated 3 years ago
- ☆11 · Updated 2 years ago
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces ☆40 · Updated 2 years ago
- Recent Advances on Efficient Vision Transformers ☆49 · Updated 2 years ago
- [CVPR 2023] Official implementation of the paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆103 · Updated last year
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated last year
- Denoising Masked Autoencoders Help Robust Classification. ☆60 · Updated last year
- [NeurIPS 2021] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Z… ☆125 · Updated 3 years ago
- [ICLR 2022] Fast AdvProp ☆34 · Updated 2 years ago
- Official code for "Dataset Distillation using Neural Feature Regression" (NeurIPS 2022) ☆46 · Updated 2 years ago
- [ICLR 2022] "Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice" by Peihao Wang, Wen… ☆77 · Updated last year