ggjy / vision_weak_to_strong
☆36 · Updated 9 months ago
Related projects
Alternatives and complementary repositories for vision_weak_to_strong
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆32 · Updated 5 months ago
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆31 · Updated last year
- Official Repo of γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆19 · Updated 3 weeks ago
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- Paper List for In-context Learning 🌷 ☆20 · Updated last year
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 2 years ago
- [ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones". ☆23 · Updated 10 months ago
- Code for T-MARS data filtering ☆35 · Updated last year
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆32 · Updated last year
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆14 · Updated last month
- Benchmarking Attention Mechanism in Vision Transformers. ☆16 · Updated 2 years ago
- Code release for Deep Incubation (https://arxiv.org/abs/2212.04129) ☆91 · Updated last year
- ☆52 · Updated last year
- Official Repository of Personalized Visual Instruct Tuning ☆24 · Updated 2 weeks ago
- Repository of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models ☆36 · Updated last year
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆17 · Updated 2 months ago
- [ICCV23] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models. ☆27 · Updated last year
- Dataset pruning for ImageNet and LAION-2B. ☆69 · Updated 4 months ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆22 · Updated 2 weeks ago
- [ICLR2024] Exploring Target Representations for Masked Autoencoders ☆51 · Updated 10 months ago
- [ICLR 23] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆37 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?". ☆56 · Updated last year
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆16 · Updated 7 months ago
- Official code for the paper, "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter". ☆16 · Updated last year
- [ICCV 2023 oral] This is the official repository for our paper: "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning". ☆64 · Updated last year
- BESA is a differentiable weight pruning technique for large language models. ☆14 · Updated 8 months ago
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆19 · Updated 3 weeks ago
- [CVPR 2022] "The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy" by Tianlong C… ☆25 · Updated 2 years ago
- Code for ECCV 2022 paper “Learning with Recoverable Forgetting” ☆20 · Updated 2 years ago
- PyTorch implementation of "UNIT: Unifying Image and Text Recognition in One Vision Encoder", NeurIPS 2024. ☆20 · Updated last month