Arnav0400 / ViT-Slim
Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space”
☆249 · Updated last year
Alternatives and similar repositories for ViT-Slim:
Users interested in ViT-Slim are comparing it to the repositories listed below.
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers. ☆101 · Updated 2 months ago
- ☆179 · Updated 5 months ago
- [NeurIPS'22] This is an official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning". ☆177 · Updated last year
- PyTorch codes for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning" ☆234 · Updated 2 years ago
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging techniques. ☆94 · Updated last year
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆153 · Updated 2 years ago
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks". ☆249 · Updated last year
- ☆272 · Updated 2 years ago
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning. ☆227 · Updated last year
- [ICCV 2023] Dataset Quantization ☆258 · Updated last year
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆257 · Updated 10 months ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆189 · Updated last year
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated 10 months ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆52 · Updated last year
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites" ☆271 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning ☆163 · Updated 9 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆309 · Updated last year
- Code release for Deep Incubation (https://arxiv.org/abs/2212.04129) ☆90 · Updated 2 years ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆243 · Updated last year
- My implementation of "Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ☆224 · Updated last month
- Implementation of "Attention Is Off By One" by Evan Miller ☆190 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆213 · Updated 2 years ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆270 · Updated 11 months ago
- ☆100 · Updated 8 months ago
- Python code for ICLR 2022 spotlight paper EViT: Expediting Vision Transformers via Token Reorganizations ☆178 · Updated last year
- ☆189 · Updated last year
- [ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers" ☆232 · Updated last year
- ☆145 · Updated last year
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers. ☆32 · Updated 2 months ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆181 · Updated 2 years ago