bwconrad / vit-finetune
Fine-tuning Vision Transformers on various classification datasets
☆109 · Updated last year
Alternatives and similar repositories for vit-finetune
Users interested in vit-finetune are comparing it to the repositories listed below:
- Awesome-Low-Rank-Adaptation ☆115 · Updated 10 months ago
- Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers" ☆105 · Updated 2 years ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆81 · Updated last year
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆60 · Updated 2 years ago
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆197 · Updated last year
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆480 · Updated last year
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆226 · Updated last year
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆35 · Updated 3 weeks ago
- Official PyTorch implementation of DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs (ICML 2025 Oral) ☆36 · Updated 2 months ago
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆306 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆313 · Updated 4 months ago
- Transformers trained on Tiny ImageNet ☆56 · Updated 2 weeks ago
- ☆182 · Updated 11 months ago
- A curated list of Model Merging methods. ☆92 · Updated 11 months ago
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers. ☆105 · Updated 8 months ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆406 · Updated 11 months ago
- ☆57 · Updated 8 months ago
- [BMVC 2022] Official repository for "How to Train Vision Transformer on Small-scale Datasets?" ☆161 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆344 · Updated 2 years ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆176 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆220 · Updated 8 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆105 · Updated last week
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- PyTorch implementation of LIMoE ☆52 · Updated last year
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆250 · Updated last week
- Implementation of the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch." ☆229 · Updated 2 years ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆67 · Updated 5 months ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆188 · Updated 2 years ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆61 · Updated 2 years ago
- ☆148 · Updated 11 months ago