bwconrad / vit-finetune
Fine-tuning Vision Transformers on various classification datasets
☆106 · Updated 6 months ago
Alternatives and similar repositories for vit-finetune:
Users interested in vit-finetune are comparing it to the repositories listed below.
- Awesome-Low-Rank-Adaptation☆83 · Updated 5 months ago
- [CVPR 2024] Friendly Sharpness-Aware Minimization☆27 · Updated 4 months ago
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks, by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua…☆51 · Updated last year
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters☆30 · Updated last week
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models☆72 · Updated last year
- Official implementation of the AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers"☆104 · Updated last year
- A curated list of Model Merging methods.☆90 · Updated 5 months ago
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass☆179 · Updated last year
- ☆47 · Updated 2 months ago
- Low-rank adaptation for Vision Transformer☆392 · Updated 11 months ago
- PyTorch implementation of Soft MoE by Google Brain from "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf)☆71 · Updated last year
- [BMVC 2022] Official repository for "How to Train Vision Transformer on Small-scale Datasets?"☆148 · Updated last year
- Collection of tools and papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning☆186 · Updated 10 months ago
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers☆101 · Updated 2 months ago
- ☆35 · Updated 2 years ago
- PyTorch implementation of LIMoE☆53 · Updated 11 months ago
- [NeurIPS'22] Official implementation of "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning"☆176 · Updated last year
- ☆177 · Updated 5 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024)☆67 · Updated 4 months ago
- Open-source implementation of "Vision Transformers Need Registers"☆168 · Updated last month
- Transformers trained on Tiny ImageNet☆53 · Updated 2 years ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts"☆52 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models"☆148 · Updated last year
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023]☆28 · Updated last year
- A NumPy and PyTorch implementation of CKA similarity with CUDA support☆91 · Updated 3 years ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains.☆399 · Updated 5 months ago
- Official implementation of the CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning"☆110 · Updated last year
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation).☆75 · Updated 3 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging☆49 · Updated 2 weeks ago
- Code for "Finetune Like You Pretrain: Improved Finetuning of Zero-Shot Vision Models"☆98 · Updated last year
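Several of the repositories above (LoRA for Vision Transformer, LoRA-XS, sparse low-rank adaptation) build on the same low-rank adaptation idea: freeze the pretrained weight and learn a small low-rank update. A minimal NumPy sketch of that core update, with illustrative shapes and a hypothetical `lora_forward` helper (real implementations typically wrap PyTorch linear layers instead):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Linear layer with a LoRA update.

    W: frozen pretrained weight, shape (d_out, d_in)
    A: trainable down-projection, shape (r, d_in)
    B: trainable up-projection, shape (d_out, r), zero-initialized
    Effective weight: W + (alpha / r) * B @ A
    """
    return x @ (W + (alpha / r) * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 2
W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01    # small random init
B = np.zeros((d_out, r))                     # zero init: adapter starts as a no-op
x = rng.standard_normal((3, d_in))

# With B = 0 the adapted layer reproduces the frozen layer exactly,
# so fine-tuning starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Only `A` and `B` receive gradients during fine-tuning, which is why these methods need a tiny fraction of the parameters of full fine-tuning (here 2×8 + 6×2 = 28 trainable values versus 48 frozen ones).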