ehuynh1106 / TinyImageNet-Transformers
Transformers trained on Tiny ImageNet
☆54 · Updated 2 years ago
Alternatives and similar repositories for TinyImageNet-Transformers
Users interested in TinyImageNet-Transformers are comparing it to the repositories listed below.
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆112 · Updated last year
- (PyTorch) Training ResNets on ImageNet-100 data ☆57 · Updated 3 years ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- ☆30 · Updated 3 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆69 · Updated 2 years ago
- ☆58 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- Code for ViTAS: Vision Transformer Architecture Search ☆51 · Updated 3 years ago
- Python code for the ICLR 2022 spotlight paper "EViT: Expediting Vision Transformers via Token Reorganizations" ☆184 · Updated last year
- A NumPy and PyTorch implementation of CKA similarity with CUDA support ☆90 · Updated 4 years ago
- This repository provides code for "On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness". ☆45 · Updated 2 years ago
- [NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach -- Official Implementation ☆44 · Updated last year
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs Can Be Trained in Tiny Subspaces ☆40 · Updated 2 years ago
- PyTorch implementation of the paper "Dataset Distillation via Factorization" (NeurIPS 2022) ☆65 · Updated 2 years ago
- ☆63 · Updated last year
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆21 · Updated 2 years ago
- ☆23 · Updated last year
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆106 · Updated last year
- Official implementation of the AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers" ☆105 · Updated last year
- Implementation of HAT (https://arxiv.org/pdf/2204.00993) ☆50 · Updated last year
- ☆114 · Updated last year
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- [BMVC 2022] Official repository for "How to Train Vision Transformer on Small-scale Datasets?" ☆152 · Updated last year
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Official Code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆47 · Updated 2 years ago
- Official PyTorch implementation of PS-KD ☆87 · Updated 2 years ago
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" ☆36 · Updated 10 months ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated last year
- Denoising Masked Autoencoders Help Robust Classification. ☆62 · Updated last year