Arnav0400 / ViT-Slim
Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space”
☆247 · Updated last year
Alternatives and similar repositories for ViT-Slim:
Users interested in ViT-Slim are comparing it to the libraries listed below.
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers. ☆101 · Updated 2 weeks ago
- [NeurIPS'22] This is an official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning". ☆176 · Updated last year
- [ICCV2023] Dataset Quantization ☆256 · Updated last year
- ☆271 · Updated 2 years ago
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks". ☆246 · Updated last year
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆255 · Updated 8 months ago
- PyTorch codes for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning" ☆235 · Updated last year
- ☆171 · Updated 3 months ago
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆152 · Updated 2 years ago
- My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ☆207 · Updated 2 months ago
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆291 · Updated 11 months ago
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning. ☆226 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆256 · Updated 8 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆196 · Updated 7 months ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆233 · Updated 11 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated 8 months ago
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆92 · Updated last year
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites" ☆264 · Updated last year
- When do we not need larger vision models? ☆354 · Updated last month
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆186 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆212 · Updated 2 years ago
- [CVPR-22] This is the official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition". ☆50 · Updated 2 years ago
- Code release for Deep Incubation (https://arxiv.org/abs/2212.04129) ☆91 · Updated last year
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆200 · Updated 10 months ago
- ☆91 · Updated 6 months ago
- ☆45 · Updated last month
- ☆114 · Updated 7 months ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆158 · Updated last year
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆109 · Updated 8 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆59 · Updated 3 months ago