zzxslp / SoM-LLaVA
[COLM-2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs
☆145 · Updated Aug 23, 2024
Alternatives and similar repositories for SoM-LLaVA
Users interested in SoM-LLaVA are comparing it to the repositories listed below.
- [arXiv 2023] Set-of-Mark Prompting for GPT-4V and LMMs · ☆1,515 · Updated Aug 19, 2024
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024 Best Paper] · ☆239 · Updated Jan 3, 2026
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… · ☆123 · Updated Nov 25, 2024
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" · ☆105 · Updated Nov 9, 2023
- [ECCV 2024] Official implementation of the paper "SEGIC: Unleashing the Emergent Correspondence for In-Context Segmentation" · ☆27 · Updated Oct 13, 2024
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want · ☆93 · Updated Dec 1, 2025
- [NeurIPS 2024] Official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… · ☆79 · Updated Jun 17, 2024
- When do we not need larger vision models? · ☆412 · Updated Feb 8, 2025
- ☆18 · Updated Oct 28, 2025
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs · ☆413 · Updated Dec 20, 2025
- Matryoshka Multimodal Models · ☆122 · Updated Jan 22, 2025
- Unifying Specialized Visual Encoders for Video Language Models · ☆25 · Updated Nov 22, 2025
- A Framework for Decoupling and Assessing the Capabilities of VLMs · ☆43 · Updated Jun 28, 2024
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception · ☆159 · Updated Dec 6, 2024
- ☆72 · Updated Jul 14, 2024
- [ECCV 2024] Official repository for "BEAF: Observing Before-AFter Changes to Evaluate Hallucination in Vision-language Models" · ☆21 · Updated Mar 26, 2025
- GPT-4V in Wonderland: LMMs as Smartphone Agents · ☆135 · Updated Jul 17, 2024
- ☆13 · Updated Jul 10, 2024
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era · ☆56 · Updated Dec 28, 2024
- Official repository for the paper PLLaVA · ☆676 · Updated Jul 28, 2024
- Official implementation of "MyVLM: Personalizing VLMs for User-Specific Queries" (ECCV 2024) · ☆186 · Updated Jul 5, 2024
- Official implementation of the paper "VideoLLM Knows When to Speak: Enhancing Time-Sensitive Video Comprehension with Video-Text Duet Interact… · ☆42 · Updated Feb 5, 2025
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) · ☆96 · Updated Oct 19, 2024
- [NeurIPS 2025] Official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tun… · ☆40 · Updated Feb 20, 2025
- [NeurIPS 2024] Dense Connector for MLLMs · ☆180 · Updated Oct 14, 2024
- Official repo for "VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation" [EMNLP 2024] · ☆111 · Updated Dec 4, 2025
- LLaVA-Interactive-Demo · ☆380 · Updated Jul 25, 2024
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model · ☆42 · Updated Aug 4, 2024
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" · ☆129 · Updated Aug 21, 2024
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts · ☆336 · Updated Jul 17, 2024
- An RLHF Infrastructure for Vision-Language Models · ☆196 · Updated Nov 15, 2024
- Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770) · ☆159 · Updated Sep 27, 2024
- [CVPR 2025 Highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness · ☆443 · Updated May 14, 2025
- PyTorch implementation of "HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models" · ☆28 · Updated Mar 22, 2024
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context · ☆173 · Updated Sep 25, 2024
- [COLM 2025] Official implementation of the Law of Vision Representation in MLLMs · ☆176 · Updated Oct 6, 2025
- [NeurIPS 2024 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought… · ☆424 · Updated Dec 22, 2024
- ☆17 · Updated Aug 1, 2024
- A project for tri-modal LLM benchmarking and instruction tuning. · ☆56 · Updated Mar 27, 2025