bwconrad / vit-finetune
Fine-tuning Vision Transformers on various classification datasets
☆109 · Updated last year
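For context, here is a minimal sketch of what fine-tuning a pretrained ViT for classification looks like in plain torchvision. This is an illustration, not the repo's own training code (which may be structured differently); the batch below is a random-tensor placeholder, and `num_classes` is a stand-in for your dataset's class count.

```python
# Minimal ViT fine-tuning sketch using torchvision (an illustration only).
# Swap the fake batch for a real DataLoader over an actual dataset.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

num_classes = 10  # placeholder: set to your dataset's class count

# Load an ImageNet-pretrained ViT-B/16 and replace its classification head.
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, num_classes)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
criterion = nn.CrossEntropyLoss()

model.train()
for step in range(3):  # stand-in for real epochs over a DataLoader
    images = torch.randn(8, 3, 224, 224)          # fake batch of 224x224 RGB
    labels = torch.randint(0, num_classes, (8,))  # fake labels
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```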
Alternatives and similar repositories for vit-finetune
Users interested in vit-finetune are comparing it to the repositories listed below
- Awesome-Low-Rank-Adaptation ☆116 · Updated 11 months ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time (see the weight-averaging sketch after this list) ☆484 · Updated last year
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆62 · Updated 2 years ago
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆198 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆83 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆406 · Updated 11 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆320 · Updated 5 months ago
- Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers" ☆104 · Updated 2 years ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆89 · Updated 10 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆350 · Updated 2 years ago
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional training ☆306 · Updated last year
- A curated list of Model Merging methods. ☆92 · Updated last year
- ☆58 · Updated 9 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆34 · Updated last month
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆222 · Updated 9 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆69 · Updated 6 months ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆251 · Updated 3 weeks ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆131 · Updated 5 months ago
- ☆183 · Updated 11 months ago
- [BMVC 2022] Official repository for "How to Train Vision Transformer on Small-scale Datasets?" ☆162 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆179 · Updated last year
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers. ☆105 · Updated 8 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆226 · Updated last year
- ☆190 · Updated last year
- Editing Models with Task Arithmetic (see the task-vector sketch after this list) ☆500 · Updated last year
- Transformers trained on Tiny ImageNet ☆57 · Updated last month
- PyTorch implementation of LIMoE ☆52 · Updated last year
- Compare neural networks by their feature similarity ☆374 · Updated 2 years ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Official PyTorch implementation of DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs (ICML 2025 Oral) ☆39 · Updated 2 months ago
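The model soups entry above describes uniform weight averaging across fine-tuned checkpoints. A minimal sketch of that idea follows; it is not the paper's official code, the checkpoint paths are hypothetical, and it assumes checkpoints of the same architecture with floating-point parameters.

```python
# Uniform "model soup" sketch: average the weights of several fine-tuned
# checkpoints of the same architecture, key by key.
import torch

def uniform_soup(state_dicts):
    """Element-wise mean over state dicts with identical keys and shapes."""
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
            for k in state_dicts[0]}

# Hypothetical usage: load checkpoints, average them, load into one model.
# soups = [torch.load(p, map_location="cpu") for p in ["ft1.pt", "ft2.pt"]]
# model.load_state_dict(uniform_soup(soups))
```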
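Similarly, the task arithmetic entry edits a pretrained model by adding or subtracting "task vectors", i.e. the element-wise difference between fine-tuned and pretrained weights. The sketch below illustrates the idea; the function names and the `alpha` scaling factor are assumptions, not the paper's official implementation.

```python
# Task arithmetic sketch: compute task vectors (fine-tuned minus pretrained),
# then add scaled vectors to the pretrained weights to edit the model.
import torch

def task_vector(pretrained_sd, finetuned_sd):
    """Fine-tuned minus pretrained weights, key by key."""
    return {k: finetuned_sd[k] - pretrained_sd[k] for k in pretrained_sd}

def apply_task_vectors(pretrained_sd, vectors, alpha=0.5):
    """Add alpha-scaled task vectors to a copy of the pretrained weights."""
    edited = {k: v.clone().float() for k, v in pretrained_sd.items()}
    for tv in vectors:
        for k in edited:
            edited[k] += alpha * tv[k].float()
    return edited
```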