ggjy / vision_weak_to_strong
☆38 · Updated last year
Alternatives and similar repositories for vision_weak_to_strong
Users interested in vision_weak_to_strong are comparing it to the libraries listed below.
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" ☆37 · Updated last year
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆33 · Updated last year
- ☆42 · Updated 7 months ago
- Code release for "Understanding Bias in Large-Scale Visual Datasets" ☆20 · Updated 6 months ago
- [WACV2025 Oral] DeepMIM: Deep Supervision for Masked Image Modeling ☆53 · Updated last month
- [ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones". ☆27 · Updated last year
- Compress conventional Vision-Language Pre-training data ☆51 · Updated last year
- [ECCV 2022] This repository includes the official implementation of our paper "In Defense of Image Pre-Training for Spatiotemporal Recogniti…" ☆19 · Updated 2 years ago
- Official code for the paper "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter". ☆16 · Updated 2 years ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆37 · Updated 2 months ago
- Paper List for In-context Learning 🌷 ☆20 · Updated 2 years ago
- Repo for the paper "Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment" (AAAI'24 Oral) ☆25 · Updated last year
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆19 · Updated 2 months ago
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆32 · Updated 2 years ago
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated last year
- This repository is the implementation of the paper "Training Free Pretrained Model Merging" (CVPR 2024). ☆30 · Updated last year
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 3 months ago
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆40 · Updated last year
- [ICCV2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆53 · Updated last year
- Code release for Deep Incubation (https://arxiv.org/abs/2212.04129) ☆90 · Updated 2 years ago
- Official Repository of Personalized Visual Instruct Tuning ☆29 · Updated 3 months ago
- This repository houses the code for the paper "The Neglected Tails of VLMs" ☆28 · Updated last month
- Benchmarking Attention Mechanism in Vision Transformers. ☆18 · Updated 2 years ago
- ☆11 · Updated 7 months ago
- Code for ECCV 2022 paper "Learning with Recoverable Forgetting" ☆21 · Updated 2 years ago
- Repository of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models ☆37 · Updated last year
- [CVPR 2022] "The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy" by Tianlong C… ☆25 · Updated 3 years ago
- Implementation for <Orthogonal Over-Parameterized Training> in CVPR'21. ☆19 · Updated 3 years ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆41 · Updated 6 months ago
- Bag of Instances Aggregation Boosts Self-supervised Distillation (ICLR 2022) ☆33 · Updated 3 years ago