Aman-4-Real / MMTG
[ACM MM 2022]: Multi-Modal Experience Inspired AI Creation
☆20 · Updated 6 months ago
Alternatives and similar repositories for MMTG
Users interested in MMTG are comparing it to the repositories listed below.
- Paper, dataset and code list for multimodal dialogue. ☆20 · Updated 5 months ago
- [ACM MM 2024] See or Guess: Counterfactually Regularized Image Captioning ☆14 · Updated 3 months ago
- [ACL 2023] VSTAR is a multimodal dialogue dataset with scene and topic transition information ☆12 · Updated 7 months ago
- [ACL 2024] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆46 · Updated last year
- Data for evaluating GPT-4V ☆11 · Updated last year
- Narrative movie understanding benchmark ☆71 · Updated last year
- ☆69 · Updated last week
- Danmuku dataset ☆11 · Updated last year
- Attaching human-like eyes to the large language model. The codes of the IEEE TMM paper "LMEye: An Interactive Perception Network for Large Language Models" ☆48 · Updated 10 months ago
- PyTorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆115 · Updated 2 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated last year
- The official site of the paper MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation ☆197 · Updated last year
- This repo contains codes and instructions for baselines in the VLUE benchmark. ☆41 · Updated 2 years ago
- ☆49 · Updated 11 months ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆51 · Updated last year
- ☆90 · Updated 2 years ago
- DSTC10 Track1 - MOD: Internet Meme Incorporated Open-domain Dialog ☆50 · Updated 2 years ago
- ☆21 · Updated last year
- ☆14 · Updated last year
- [ICLR 2025] ChartMimic: Evaluating LMM’s Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆108 · Updated this week
- ☆38 · Updated last year
- Code for ACL 2022 main conference paper "Neural Machine Translation with Phrase-Level Universal Visual Representations". ☆21 · Updated last year
- Code, Models and Datasets for OpenViDial Dataset ☆131 · Updated 3 years ago
- A multimodal retrieval dataset ☆22 · Updated last year
- ☆24 · Updated 3 years ago
- ☆49 · Updated last year
- ☆18 · Updated 10 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 2 months ago
- CVPR 2021 Official PyTorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 3 years ago
- TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆42 · Updated 2 weeks ago