QQ-MM / PureMM
☆21 · Updated last year
Alternatives and similar repositories for PureMM
Users interested in PureMM are comparing it to the repositories listed below.
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- ☆133 · Updated last year
- ☆66 · Updated last year
- ☆91 · Updated last year
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models". ☆37 · Updated last year
- Official repository of the MMDU dataset. ☆93 · Updated 10 months ago
- ☆31 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning. ☆163 · Updated last year
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning. ☆69 · Updated 5 months ago
- Lion: Kindling Vision Intelligence within Large Language Models. ☆52 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?". ☆58 · Updated 2 years ago
- ☆152 · Updated 9 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model. ☆267 · Updated last year
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning. ☆33 · Updated 2 months ago
- ☆37 · Updated last year
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge. ☆150 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs. ☆48 · Updated 5 months ago
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins…". ☆19 · Updated last year
- ☆87 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆116 · Updated 8 months ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated last year
- ☆100 · Updated last year
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs. ☆91 · Updated 6 months ago
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI. ☆113 · Updated last year
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model. ☆46 · Updated 8 months ago
- [NeurIPS'24] Official PyTorch implementation of "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment". ☆57 · Updated 10 months ago
- [ICCV 2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control". ☆53 · Updated last year
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding. ☆55 · Updated 7 months ago
- ☆119 · Updated last year
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM. ☆46 · Updated last year