Aman-4-Real / MMTG
[ACM MM 2022]: Multi-Modal Experience Inspired AI Creation
☆21 Updated 11 months ago
Alternatives and similar repositories for MMTG
Users that are interested in MMTG are comparing it to the libraries listed below
- The official site of the paper MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation ☆202 Updated 2 years ago
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 Updated 2 years ago
- Paper, dataset and code list for multimodal dialogue. ☆22 Updated 10 months ago
- [ACM MM 2024] See or Guess: Counterfactually Regularized Image Captioning ☆14 Updated 9 months ago
- ☆70 Updated 5 months ago
- Narrative movie understanding benchmark ☆76 Updated 5 months ago
- Data for evaluating GPT-4V ☆11 Updated 2 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆133 Updated 2 years ago
- Danmuku dataset ☆11 Updated 2 years ago
- Attaching human-like eyes to the large language model. The codes of IEEE TMM paper "LMEye: An Interactive Perception Network for Large La… ☆48 Updated last year
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆123 Updated 5 months ago
- [ACL 2023] VSTAR is a multimodal dialogue dataset with scene and topic transition information ☆15 Updated last year
- PyTorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆115 Updated 3 years ago
- [ICLR 2025] ChartMimic: Evaluating LMM’s Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆125 Updated 5 months ago
- EMNLP 2023 - InfoSeek: A New VQA Benchmark focusing on Visual Info-Seeking Questions ☆25 Updated last year
- IRFL: Image Recognition of Figurative Language ☆11 Updated last year
- PyTorch implementation for ACL 2021 paper "Maria: A Visual Experience Powered Conversational Agent". ☆24 Updated 4 years ago
- Code for ACL 2022 main conference paper "Neural Machine Translation with Phrase-Level Universal Visual Representations". ☆21 Updated 2 years ago
- TL;DR: We propose a large-scale cross-domain persuasion dataset covering 13,000 scenarios in 35 domains, with the developed PersuGPT model … ☆16 Updated 9 months ago
- [Paperlist] Awesome paper list of multimodal dialog, including methods, datasets and metrics ☆37 Updated 9 months ago
- ☆155 Updated last year
- ☆59 Updated last year
- This repo contains codes and instructions for baselines in the VLUE benchmark. ☆41 Updated 3 years ago
- my commonly-used tools ☆63 Updated 10 months ago
- ☆67 Updated 2 years ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆125 Updated 7 months ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 Updated last year
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆54 Updated 2 years ago
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆131 Updated 2 years ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆50 Updated 8 months ago