OpenGVLab / Multitask-Model-Selector
[NeurIPS 2023] Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector"
☆37 · Updated last year
Alternatives and similar repositories for Multitask-Model-Selector
Users interested in Multitask-Model-Selector are comparing it to the repositories listed below.
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆85 · Updated 10 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆41 · Updated 8 months ago
- ☆38 · Updated last year
- Official repository for LLaVA-Reward (ICCV 2025): Multimodal LLMs as Customized Reward Models for Text-to-Image Generation ☆14 · Updated last week
- [CVPR 2024 Highlight] ImageNet-D ☆43 · Updated 9 months ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆33 · Updated 2 years ago
- ☆56 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆38 · Updated last year
- Official code for the paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆74 · Updated 8 months ago
- ☆43 · Updated 9 months ago
- Official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆57 · Updated 9 months ago
- Official implementation of ADDP (ICLR 2024) ☆12 · Updated last year
- [CVPR 2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆44 · Updated 7 months ago
- An efficient tuning method for VLMs ☆80 · Updated last year
- ☆23 · Updated last month
- [CVPR 2025] PyTorch implementation of the paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" ☆30 · Updated last month
- ☆51 · Updated 6 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆69 · Updated 3 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆49 · Updated 3 weeks ago
- Adapting LLaMA Decoder to Vision Transformer ☆29 · Updated last year
- Training code for CLIP-FlanT5 ☆27 · Updated last year
- [SCIS 2024] Official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆55 · Updated 9 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆46 · Updated 8 months ago
- Rui Qian, Xin Yin, Dejing Dou†: Reasoning to Attend: Try to Understand How <SEG> Token Works (CVPR 2025) ☆38 · Updated 3 months ago
- [CVPR 2025] Code release of F-LMM: Grounding Frozen Large Multimodal Models ☆100 · Updated 2 months ago
- ☆39 · Updated last year
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆37 · Updated 3 months ago
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆27 · Updated 9 months ago
- ☆45 · Updated 7 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated last month