Aman-4-Real / MMTG
[ACM MM 2022]: Multi-Modal Experience Inspired AI Creation
☆20 · Updated 7 months ago
Alternatives and similar repositories for MMTG
Users interested in MMTG are comparing it to the repositories listed below.
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild☆47 · Updated last year
- Paper, dataset and code list for multimodal dialogue.☆21 · Updated 6 months ago
- The official site of paper MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation☆198 · Updated last year
- PyTorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners☆115 · Updated 2 years ago
- Attaching human-like eyes to the large language model. The codes of IEEE TMM paper "LMEye: An Interactive Perception Network for Large La…☆48 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning☆135 · Updated 2 years ago
- [ACM MM 2024] See or Guess: Counterfactually Regularized Image Captioning☆14 · Updated 5 months ago
- Narrative movie understanding benchmark☆73 · Updated last month
- This repo contains codes and instructions for baselines in the VLUE benchmark.☆41 · Updated 3 years ago
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,…☆123 · Updated last month
- Code, Models and Datasets for OpenViDial Dataset☆131 · Updated 3 years ago
- ☆70 · Updated last month
- IRFL: Image Recognition of Figurative Language☆11 · Updated last year
- Multimodal-Procedural-Planning☆92 · Updated 2 years ago
- ☆92 · Updated 2 years ago
- Danmaku dataset☆11 · Updated 2 years ago
- ☆11 · Updated 11 months ago
- ☆68 · Updated 2 years ago
- EMNLP 2023 - InfoSeek: A New VQA Benchmark focused on Visual Info-Seeking Questions☆24 · Updated last year
- [Paperlist] Awesome paper list of multimodal dialog, including methods, datasets and metrics☆39 · Updated 5 months ago
- ☆54 · Updated last year
- Code for ACL 2022 main conference paper "Neural Machine Translation with Phrase-Level Universal Visual Representations".☆21 · Updated last year
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024)☆155 · Updated 9 months ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely…☆52 · Updated last year
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21)☆161 · Updated 11 months ago
- ☆39 · Updated last year
- Data for evaluating GPT-4V☆11 · Updated last year
- An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA, AAAI 2022 (Oral)☆85 · Updated 3 years ago
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, …☆102 · Updated 7 months ago
- CVPR 2021 Official PyTorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training☆34 · Updated 3 years ago