OpenGVLab / Multitask-Model-Selector
[NeurIPS 2023] Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector"
☆37 · Updated last year
Alternatives and similar repositories for Multitask-Model-Selector
Users interested in Multitask-Model-Selector are comparing it to the repositories listed below.
- [CVPR 2024 Highlight] ImageNet-D ☆44 · Updated last year
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆89 · Updated last year
- Empowering Small VLMs to Think with Dynamic Memorization and Exploration ☆15 · Updated last month
- [NeurIPS 2025] Unsupervised Post-Training for Multi-Modal LLM Reasoning via GRPO ☆60 · Updated 3 weeks ago
- ☆57 · Updated 2 years ago
- ☆44 · Updated last year
- ☆37 · Updated last year
- ☆24 · Updated 5 months ago
- VisualToolAgent (VisTA): A Reinforcement Learning Framework for Visual Tool Selection ☆20 · Updated 5 months ago
- ☆53 · Updated 10 months ago
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆23 · Updated 6 months ago
- [ACL 2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models ☆78 · Updated 5 months ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆33 · Updated 2 years ago
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆29 · Updated last year
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆59 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆45 · Updated 11 months ago
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆38 · Updated 6 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆50 · Updated last year
- [CVPR 2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆46 · Updated 10 months ago
- An efficient tuning method for VLMs ☆80 · Updated last year
- Official repository for LLaVA-Reward (ICCV 2025): Multimodal LLMs as Customized Reward Models for Text-to-Image Generation ☆21 · Updated 3 months ago
- (ICLR 2025 Spotlight) DEEM: Official implementation of "Diffusion models serve as the eyes of large language models for image perception." ☆44 · Updated 4 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆59 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated last year
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆20 · Updated 6 months ago
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆32 · Updated 4 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆48 · Updated last year
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆106 · Updated last year
- The official implementation of "MLLMs-Augmented Visual-Language Representation Learning" ☆31 · Updated last year
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆15 · Updated 11 months ago