bwconrad / vit-finetune
Fine-tuning Vision Transformers on various classification datasets
☆108 · Updated 9 months ago
Alternatives and similar repositories for vit-finetune
Users interested in vit-finetune are comparing it to the repositories listed below.
- Awesome-Low-Rank-Adaptation ☆104 · Updated 8 months ago
- Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers" ☆104 · Updated last year
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆58 · Updated last year
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆192 · Updated last year
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆35 · Updated 3 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆78 · Updated last year
- ☆55 · Updated 6 months ago
- Code for "Finetune like you pretrain: Improved finetuning of zero-shot vision models" ☆100 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆210 · Updated 6 months ago
- Open source implementation of "Vision Transformers Need Registers" ☆182 · Updated 2 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) ☆83 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · Updated 3 months ago
- [BMVC 2022] Official repository for "How to Train Vision Transformer on Small-scale Datasets?" ☆155 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆163 · Updated last year
- Official implementation for CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning" ☆110 · Updated last year
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass ☆189 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆298 · Updated 2 months ago
- 1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆221 · Updated 10 months ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆470 · Updated 11 months ago
- A curated list of Model Merging methods ☆92 · Updated 9 months ago
- ☆181 · Updated 8 months ago
- Code for the paper "Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution" ☆52 · Updated last year
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆120 · Updated 2 months ago
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers ☆102 · Updated 5 months ago
- [ICLR 2024] Improving Convergence and Generalization Using Parameter Symmetries ☆29 · Updated last year
- Parameter-Efficient Fine-tuning of Self-supervised ViTs without Catastrophic Forgetting ☆28 · Updated last year
- [CVPR 2024] Friendly Sharpness-Aware Minimization ☆33 · Updated 7 months ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains ☆401 · Updated 9 months ago
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆132 · Updated 3 weeks ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆59 · Updated 8 months ago