Aman-4-Real / MMTG
[ACM MM 2022]: Multi-Modal Experience Inspired AI Creation
☆21 · Updated 11 months ago
Alternatives and similar repositories for MMTG
Users interested in MMTG are comparing it to the repositories listed below.
- Paper, dataset, and code list for multimodal dialogue. ☆22 · Updated 9 months ago
- The official site of the paper "MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation". ☆201 · Updated 2 years ago
- [ACL 2024] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild. ☆46 · Updated 2 years ago
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners". ☆115 · Updated 3 years ago
- Code for the ACL 2022 main conference paper "Neural Machine Translation with Phrase-Level Universal Visual Representations". ☆21 · Updated 2 years ago
- Attaching human-like eyes to the large language model. Code for the IEEE TMM paper "LMEye: An Interactive Perception Network for Large La…". ☆48 · Updated last year
- Visual and Embodied Concepts evaluation benchmark. ☆21 · Updated 2 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning. ☆134 · Updated 2 years ago
- Narrative movie understanding benchmark. ☆76 · Updated 4 months ago
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 3 years ago
- ☆69 · Updated 5 months ago
- [ICLR 2025] ChartMimic: Evaluating LMM’s Cross-Modal Reasoning Capability via Chart-to-Code Generation. ☆124 · Updated 4 months ago
- [ACM MM 2024] See or Guess: Counterfactually Regularized Image Captioning. ☆14 · Updated 8 months ago
- Danmaku dataset. ☆11 · Updated 2 years ago
- Summary of Video-to-Text datasets. This repository is part of the review paper "Bridging Vision and Language from the Video-to-Text Pe…". ☆130 · Updated 2 years ago
- ☆98 · Updated 3 years ago
- Data for evaluating GPT-4V. ☆11 · Updated 2 years ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering. ☆189 · Updated last year
- [ACL 2023] VSTAR is a multimodal dialogue dataset with scene and topic transition information. ☆15 · Updated last year