mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023)
☆228 · Jul 21, 2023 · Updated 2 years ago
Alternatives and similar repositories for mPLUG-2
Users that are interested in mPLUG-2 are comparing it to the libraries listed below
- mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections (EMNLP 2022) ☆97 · May 8, 2023 · Updated 2 years ago
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks ☆304 · Jan 8, 2024 · Updated 2 years ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,539 · Apr 2, 2025 · Updated 11 months ago
- [TPAMI 2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆306 · Dec 25, 2024 · Updated last year
- [NeurIPS 2023] Code and Model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset ☆298 · Mar 14, 2024 · Updated last year
- A Chinese Open-Domain Dialogue System ☆326 · Aug 16, 2023 · Updated 2 years ago
- ALIbaba's Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab ☆2,051 · Mar 19, 2024 · Updated last year
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training ☆281 · Mar 25, 2023 · Updated 2 years ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆97 · Jan 29, 2024 · Updated 2 years ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,201 · Dec 15, 2025 · Updated 2 months ago
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ☆3,124 · Jun 4, 2024 · Updated last year
- ☆19 · Dec 22, 2022 · Updated 3 years ago
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆347 · May 27, 2024 · Updated last year
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Nov 29, 2023 · Updated 2 years ago
- Supercharged BLIP-2 that can handle videos ☆123 · Dec 1, 2023 · Updated 2 years ago
- [CVPR 2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS. ☆3,335 · Jan 18, 2025 · Updated last year
- [ICCV 2023] Accurate and Fast Compressed Video Captioning ☆52 · Jul 28, 2025 · Updated 7 months ago
- Official implementation of ISR-DPO: Aligning Large Multimodal Models for Videos by Iterative Self-Retrospective DPO (AAAI'25) ☆23 · Nov 25, 2025 · Updated 3 months ago
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale ☆203 · Nov 13, 2023 · Updated 2 years ago
- Turning to Video for Transcript Sorting ☆49 · Aug 27, 2023 · Updated 2 years ago
- [AAAI 2023 Oral] VLTinT: Visual-Linguistic Transformer-in-Transformer for Coherent Video Paragraph Captioning ☆68 · Feb 16, 2024 · Updated 2 years ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,647 · Aug 1, 2024 · Updated last year
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,450 · Dec 3, 2024 · Updated last year
- All in One: Exploring Unified Vision-Language Tracking with Multi-Modal Alignment ☆19 · Feb 11, 2025 · Updated last year
- DA-AIM: Exploiting Instance-based Mixed Sampling via Auxiliary Source Domain Supervision for Domain-adaptive Action Detection ☆12 · Oct 6, 2022 · Updated 3 years ago
- [ICLR 2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆511 · Nov 18, 2025 · Updated 3 months ago
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024) ☆58 · Aug 19, 2025 · Updated 6 months ago
- Official code for the paper "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter" ☆16 · Jun 20, 2023 · Updated 2 years ago
- ☆26 · Mar 20, 2023 · Updated 2 years ago
- This repository contains the dataset, codebase, and benchmarks for our paper: <CNVid-3.5M: Build, Filter, and Pre-train the Large-scale P… ☆25 · Nov 28, 2023 · Updated 2 years ago
- 【CVPR'2023 Highlight & TPAMI】Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? ☆256 · Nov 29, 2024 · Updated last year
- ☆13 · Jul 20, 2024 · Updated last year
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆213 · Feb 27, 2024 · Updated 2 years ago
- [CVPRW'23 Best Paper Award] Zero-shot Unsupervised Transfer Instance Segmentation ☆24 · Aug 22, 2023 · Updated 2 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,167 · Nov 18, 2024 · Updated last year
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,681 · Aug 5, 2024 · Updated last year
- ☆14 · Jul 15, 2024 · Updated last year
- The implementation of the paper "Action Knowledge for Video Captioning with Graph Neural Networks" (JKSUCIS 2023) ☆14 · Mar 29, 2023 · Updated 2 years ago
- Fully Open Framework for Democratized Multimodal Reinforcement Learning ☆43 · Dec 19, 2025 · Updated 2 months ago