yuecao0119 / MMInstruct
[SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity". The MMInstruct dataset includes 973K instructions from 24 domains and four instruction types.
☆55 · Updated 8 months ago
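To make the dataset description concrete, below is a minimal sketch of tallying such an instruction set by domain (the paper reports 973K instructions across 24 domains and four instruction types). The file name, JSON-lines layout, and field names here are illustrative assumptions, not MMInstruct's documented format.

```python
import json

# Hypothetical sketch: the path, JSON-lines layout, and field name
# ("domain") are illustrative assumptions, not MMInstruct's actual schema.
def load_instructions(path="mminstruct_sample.jsonl"):
    """Yield one instruction record per JSON line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Count records per domain (the paper reports 24 domains in total).
counts = {}
for record in load_instructions():
    counts[record["domain"]] = counts.get(record["domain"], 0) + 1

for domain, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{domain}: {n}")
```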
Alternatives and similar repositories for MMInstruct
Users interested in MMInstruct are comparing it to the repositories listed below.
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆79 · Updated 5 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆50 · Updated 7 months ago
- ☆83 · Updated 6 months ago
- ☆17 · Updated last year
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆143 · Updated 8 months ago
- ☆50 · Updated last year
- ☆45 · Updated 6 months ago
- ☆115 · Updated 11 months ago
- Preference Learning for LLaVA ☆46 · Updated 8 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆58 · Updated 9 months ago
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆15 · Updated 7 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆46 · Updated 8 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆68 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆34 · Updated 11 months ago
- ☆91 · Updated last year
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆29 · Updated 9 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆64 · Updated last month
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆35 · Updated 3 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" … ☆56 · Updated 8 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆124 · Updated 2 months ago
- (ACL 2025) MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆46 · Updated last month
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 4 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆124 · Updated last month
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆76 · Updated last year
- [ICCV 2025] Dynamic-VLM ☆21 · Updated 6 months ago
- Official repo for "PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning" ☆42 · Updated this week
- Official implementation of MIA-DPO ☆59 · Updated 5 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆81 · Updated 10 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆160 · Updated 7 months ago