bwconrad / vit-finetune
Fine-tuning Vision Transformers on various classification datasets
☆110 · Updated 11 months ago
Alternatives and similar repositories for vit-finetune
Users interested in vit-finetune are comparing it to the repositories listed below.
- Awesome-Low-Rank-Adaptation ☆115 · Updated 9 months ago
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆60 · Updated last year
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆477 · Updated last year
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆197 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆402 · Updated 10 months ago
- Official implementation of the AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers" ☆105 · Updated 2 years ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆309 · Updated 4 months ago
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆304 · Updated last year
- [BMVC 2022] Official repository for "How to Train Vision Transformer on Small-scale Datasets?" ☆160 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆81 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆88 · Updated 9 months ago
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆127 · Updated 4 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆220 · Updated 8 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆174 · Updated last year
- ☆57 · Updated 7 months ago
- A curated list of Model Merging methods. ☆92 · Updated 10 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆35 · Updated last week
- Compare neural networks by their feature similarity ☆369 · Updated 2 years ago
- Transformers trained on Tiny ImageNet ☆55 · Updated 3 years ago
- ☆182 · Updated 10 months ago
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers ☆105 · Updated 7 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆226 · Updated last year
- Implementation for the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch" ☆227 · Updated last year
- Low-rank adaptation for Vision Transformer ☆418 · Updated last year
- Editing Models with Task Arithmetic ☆490 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆103 · Updated 2 years ago
- SparCL: Sparse Continual Learning on the Edge @ NeurIPS 22 ☆28 · Updated 2 years ago
- Official implementation for the CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning" ☆111 · Updated last year
- Code for "Finetune like you pretrain: Improved finetuning of zero-shot vision models" ☆101 · Updated last year
- PyTorch Reimplementation of LoRA (with support for nn.MultiheadAttention in OpenCLIP) ☆67 · Updated 2 months ago
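Several of the repositories above (LoRA-XS, HydraLoRA, LoRA-Pro, the ViT LoRA reimplementations) build on the same low-rank adaptation idea: the pretrained weight matrix W is frozen, and only a small update ΔW = (α/r)·B·A is learned, where B is d×r, A is r×k, and the rank r is much smaller than d or k. A minimal, dependency-free sketch of merging such an update into a weight matrix (the function names and the α/r scaling convention here are illustrative, not taken from any of the listed repos):

```python
# Illustrative LoRA-style weight merge: W' = W + (alpha / r) * B @ A.
# Pure Python on nested lists, so the rank-r structure stays visible.

def matmul(X, Y):
    """Plain matrix multiply for small matrices stored as lists of rows."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(inner)) for j in range(cols)]
            for i in range(rows)]

def merge_lora(W, B, A, alpha=16.0):
    """Return W + (alpha / r) * B @ A without modifying W in place."""
    r = len(A)  # rank = number of rows of A (= number of columns of B)
    scale = alpha / r
    delta = matmul(B, A)  # full d x k update reconstructed from the factors
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Example: a rank-1 update to a 2x2 identity weight.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]   # d x r, with r = 1
A = [[0.0, 2.0]]     # r x k
merged = merge_lora(W, B, A, alpha=1.0)  # -> [[1.0, 2.0], [0.0, 1.0]]
```

The practical appeal, and the reason the merged form matters for the merging/soup repositories in the list, is that once B·A is folded into W the adapted model has exactly the original architecture and inference cost.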