bwconrad / vit-finetune
Fine-tuning Vision Transformers on various classification datasets
☆112 · Updated last year
Alternatives and similar repositories for vit-finetune
Users interested in vit-finetune are comparing it to the repositories listed below.
- Awesome-Low-Rank-Adaptation ☆122 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆84 · Updated last year
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆45 · Updated 3 months ago
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆308 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆336 · Updated 7 months ago
- ☆186 · Updated last year
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆200 · Updated last year
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆66 · Updated 2 years ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆497 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆186 · Updated last year
- A curated list of Model Merging methods. ☆92 · Updated last year
- ☆61 · Updated 11 months ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆250 · Updated 2 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆361 · Updated 2 years ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆72 · Updated 8 months ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆410 · Updated last year
- Editing Models with Task Arithmetic ☆511 · Updated last year
- Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers" ☆104 · Updated 2 years ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆96 · Updated last year
- Compare neural networks by their feature similarity ☆375 · Updated 2 years ago
- ☆198 · Updated last year
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers ☆105 · Updated 10 months ago
- Transformers trained on Tiny ImageNet ☆58 · Updated 3 months ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆186 · Updated 2 years ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆231 · Updated 11 months ago
- A NumPy and PyTorch implementation of CKA-similarity with CUDA support ☆94 · Updated 4 years ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆65 · Updated last year
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆135 · Updated 7 months ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆81 · Updated 8 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆107 · Updated 2 years ago