bwconrad / vit-finetune
Fine-tuning Vision Transformers on various classification datasets
☆106 · Updated 6 months ago
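For context, here is a minimal sketch of the kind of workflow this repo covers: fine-tuning a pretrained Vision Transformer on a classification dataset. This is illustrative, not the repo's actual code; it assumes the `timm` and `torchvision` packages, and the model variant, dataset, and hyperparameters are arbitrary choices.

```python
# Minimal ViT fine-tuning sketch (illustrative; not taken from vit-finetune).
# Loads an ImageNet-pretrained ViT from timm, swaps in a 10-class head,
# and fine-tunes end-to-end on CIFAR-10.
import timm
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# num_classes replaces the pretrained classification head with a fresh one.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=10)
model.to(device)

# Upsample 32x32 CIFAR images to the 224x224 input the ViT expects.
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])
train_set = datasets.CIFAR10("data", train=True, download=True, transform=transform)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown; repeat as needed
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```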
Alternatives and similar repositories for vit-finetune:
Users interested in vit-finetune are comparing it to the repositories listed below
- Awesome-Low-Rank-Adaptation ☆83 · Updated 5 months ago
- Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers" ☆104 · Updated last year
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆51 · Updated last year
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆72 · Updated last year
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆31 · Updated 2 weeks ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆52 · Updated 3 weeks ago
- Official implementation for CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning" ☆110 · Updated last year
- ☆48 · Updated 3 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆216 · Updated 9 months ago
- Open source implementation of "Vision Transformers Need Registers" ☆168 · Updated last month
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass ☆179 · Updated last year
- [CVPR 2024] Friendly Sharpness-Aware Minimization ☆28 · Updated 4 months ago
- [ICLR 2024] AdaMerging: Adaptive Model Merging for Multi-Task Learning ☆70 · Updated 4 months ago
- Parameter Efficient Fine-tuning of Self-supervised ViTs without Catastrophic Forgetting ☆23 · Updated 9 months ago
- [BMVC 2022] Official repository for "How to Train Vision Transformer on Small-scale Datasets?" ☆149 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆150 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆175 · Updated 3 months ago
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆44 · Updated 2 weeks ago
- Source code of (quasi-)Givens Orthogonal Fine-Tuning, integrated into the peft library ☆14 · Updated last week
- [NeurIPS'22] Official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning" ☆177 · Updated last year
- Source code for NeurIPS'23 paper "Dream the Impossible: Outlier Imagination with Diffusion Models" ☆66 · Updated 2 months ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆53 · Updated last year
- PyTorch implementation of LIMoE ☆53 · Updated 11 months ago
- ☆179 · Updated 5 months ago
- A curated list of Model Merging methods ☆91 · Updated 6 months ago
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers ☆101 · Updated 2 months ago
- ☆100 · Updated 8 months ago
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆125 · Updated 4 months ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆71 · Updated last year
- An efficient PyTorch implementation of selective scan in one file, works with both CPU and GPU, with corresponding mathematical derivatio… ☆80 · Updated last year
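Many of the repositories above (Awesome-Low-Rank-Adaptation, LoRA-XS, HydraLoRA, MTLoRA, and the EMNLP 2023 sparse low-rank adaptation work) center on LoRA-style parameter-efficient fine-tuning. A minimal sketch of that basic idea applied to a ViT, assuming Hugging Face's `peft` package together with `timm`; the rank and other settings are illustrative, not drawn from any listed repo:

```python
# LoRA-style parameter-efficient fine-tuning sketch (illustrative).
# Freezes the pretrained ViT and injects small trainable low-rank adapters
# into the attention projections via Hugging Face's peft library.
import timm
from peft import LoraConfig, get_peft_model

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=10)

# "qkv" matches the fused attention projection in timm's ViT blocks;
# modules_to_save keeps the new classification head fully trainable.
config = LoraConfig(
    r=8,                      # adapter rank (illustrative choice)
    lora_alpha=16,
    target_modules=["qkv"],
    lora_dropout=0.1,
    modules_to_save=["head"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a small fraction of weights train

# Training then proceeds as in the full fine-tuning sketch above, but
# gradients flow only through the LoRA adapters and the classification head.
```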