markywg / transagent
[NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration
☆24 Updated 10 months ago
Alternatives and similar repositories for transagent
Users interested in transagent are comparing it to the repositories listed below
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆121 Updated 3 weeks ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆55 Updated last month
- [ICCV 2025] Dynamic-VLM ☆24 Updated 8 months ago
- Official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning" ☆36 Updated 6 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆56 Updated last month
- Official implementation of MIA-DPO ☆64 Updated 7 months ago
- Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better ☆36 Updated 2 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆78 Updated last month
- [EMNLP 2025] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆49 Updated last week
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆47 Updated last month
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" … ☆57 Updated 9 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆62 Updated 6 months ago
- ☆52 Updated 7 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆85 Updated 11 months ago
- ☆119 Updated last year
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆85 Updated 2 months ago
- ☆37 Updated 3 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆165 Updated 11 months ago
- (ICCV 2025) Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆38 Updated last month
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph ☆27 Updated 3 weeks ago
- ☆45 Updated 7 months ago
- ☆87 Updated 2 months ago
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆37 Updated 2 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆45 Updated 2 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆68 Updated last month
- Official Repository of Personalized Visual Instruct Tuning ☆32 Updated 5 months ago
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 Updated 5 months ago
- Official repo for CAT-V - Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal Prompting ☆51 Updated last month
- [CVPR 2025] Code release of F-LMM: Grounding Frozen Large Multimodal Models ☆102 Updated 2 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆30 Updated 10 months ago