ShareGPT4Omni / ShareGPT4V
[ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions
☆248 · Jul 1, 2024 · Updated last year
Alternatives and similar repositories for ShareGPT4V
Users interested in ShareGPT4V are comparing it to the repositories listed below.
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆159 · Dec 6, 2024 · Updated last year
- [ICLR 2026] Official repository of 'ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing' ☆58 · Jan 26, 2026 · Updated 3 weeks ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆203 · Sep 26, 2024 · Updated last year
- ☆4,562 · Sep 14, 2025 · Updated 5 months ago
- [ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration R… ☆111 · Jul 9, 2025 · Updated 7 months ago
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆138 · May 8, 2025 · Updated 9 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆412 · May 5, 2025 · Updated 9 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆889 · Aug 13, 2024 · Updated last year
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,919 · May 26, 2025 · Updated 8 months ago
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆137 · Dec 31, 2023 · Updated 2 years ago
- [ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ☆1,876 · Jan 8, 2026 · Updated last month
- A Survey on Benchmarks of Multimodal Large Language Models ☆148 · Jul 1, 2025 · Updated 7 months ago
- [NeurIPS 2024] An official implementation of "ShareGPT4Video: Improving Video Understanding and Generation with Better Captions" ☆1,085 · Oct 9, 2024 · Updated last year
- ☆124 · Jul 29, 2024 · Updated last year
- [ICML 2025] This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆149 · Jun 13, 2024 · Updated last year
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆3,816 · Updated this week
- This project is the official implementation of 'DreamOmni3: Scribble-based Editing and Generation' ☆37 · Dec 30, 2025 · Updated last month
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,932 · Aug 15, 2024 · Updated last year
- [ACM MM25] Official PyTorch implementation of "Decoupled Global-Local Alignment for Improving Compositional Understanding" ☆15 · Jul 15, 2025 · Updated 7 months ago
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆19 · Nov 4, 2025 · Updated 3 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆336 · Jul 17, 2024 · Updated last year
- [NeurIPS 2025 Spotlight] A Unified Tokenizer for Visual Generation and Understanding ☆508 · Nov 14, 2025 · Updated 3 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,986 · Nov 7, 2025 · Updated 3 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆55 · Oct 10, 2024 · Updated last year
- Adapting LLaMA Decoder to Vision Transformer ☆30 · May 20, 2024 · Updated last year
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆88 · Sep 23, 2025 · Updated 4 months ago
- [CVPR 2025] 🔥 Official impl. of "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation" ☆436 · Aug 8, 2025 · Updated 6 months ago
- A fork to add multimodal model training to open-r1 ☆1,474 · Feb 8, 2025 · Updated last year
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆299 · Jan 23, 2025 · Updated last year
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks ☆3,684 · Updated this week
- EVA Series: Visual Representation Fantasies from BAAI ☆2,648 · Aug 1, 2024 · Updated last year
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆673 · Oct 25, 2024 · Updated last year
- 📖 This is a repository for organizing papers, code, and other resources related to unified multimodal models. ☆799 · Oct 10, 2025 · Updated 4 months ago
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆176 · Oct 6, 2025 · Updated 4 months ago
- Code release for Ming-UniVision: Joint Image Understanding and Generation with a Continuous Unified Tokenizer ☆136 · Oct 14, 2025 · Updated 4 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆159 · Sep 27, 2025 · Updated 4 months ago
- When do we not need larger vision models? ☆412 · Feb 8, 2025 · Updated last year
- VisionLLM Series ☆1,137 · Feb 27, 2025 · Updated 11 months ago
- [ICLR'25] Official repository of the paper "Ranking-aware adapter for text-driven image ordering with CLIP" ☆16 · Apr 17, 2025 · Updated 10 months ago