jihaonew / MM-Instruct
MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment
☆35 · Updated last year
Alternatives and similar repositories for MM-Instruct
Users interested in MM-Instruct are comparing it to the repositories listed below.
- ☆75 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆49 · Updated last year
- ☆50 · Updated 2 years ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" ☆72 · Updated last year
- [ICCV 2025] Dynamic-VLM ☆26 · Updated 11 months ago
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆84 · Updated 10 months ago
- ☆102 · Updated 10 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆87 · Updated 3 months ago
- Code for our paper "All in an Aggregated Image for In-Image Learning" ☆29 · Updated last year
- ☆66 · Updated last year
- Official repo for StableLLAVA ☆95 · Updated last year
- [AAAI 2025] ChatterBox: Multi-round Multimodal Referring and Grounding ☆58 · Updated 7 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di…" ☆60 · Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆62 · Updated 4 months ago
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆48 · Updated 3 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆117 · Updated last year
- Multimodal RewardBench ☆55 · Updated 9 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆38 · Updated last month
- ☆133 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated 2 years ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆58 · Updated 11 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆62 · Updated 9 months ago
- [EMNLP 2025] Distill Visual Chart Reasoning Ability from LLMs to MLLMs ☆57 · Updated 3 months ago
- ☆94 · Updated 5 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆71 · Updated last year
- OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe ☆111 · Updated this week
- ☆100 · Updated last year
- ☆62 · Updated 3 months ago
- ☆46 · Updated 11 months ago