Aman-4-Real / MMTG
[ACM MM 2022]: Multi-Modal Experience Inspired AI Creation
☆20 · Updated 8 months ago
Alternatives and similar repositories for MMTG
Users interested in MMTG are comparing it to the repositories listed below.
- The official site of the paper MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation ☆199 · Updated last year
- Attaching human-like eyes to large language models. The code of the IEEE TMM paper "LMEye: An Interactive Perception Network for Large La… ☆48 · Updated last year
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated 2 years ago
- [ACM MM 2024] See or Guess: Counterfactually Regularized Image Captioning ☆14 · Updated 5 months ago
- Paper, dataset and code list for multimodal dialogue. ☆21 · Updated 7 months ago
- Narrative movie understanding benchmark ☆74 · Updated 2 months ago
- ☆70 · Updated 2 months ago
- Danmaku dataset ☆11 · Updated 2 years ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆53 · Updated last year
- [ACL 2023] VSTAR is a multimodal dialogue dataset with scene and topic transition information ☆14 · Updated 9 months ago
- ☆54 · Updated last year
- EMNLP 2023 - InfoSeek: A New VQA Benchmark focusing on Visual Info-Seeking Questions ☆25 · Updated last year
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆158 · Updated 10 months ago
- Code for the ACL 2022 main conference paper "Neural Machine Translation with Phrase-Level Universal Visual Representations". ☆21 · Updated last year
- PyTorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆115 · Updated 2 years ago
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆123 · Updated 2 months ago
- ☆18 · Updated last year
- Data for evaluating GPT-4V ☆11 · Updated last year
- ☆39 · Updated last year
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated last year
- [ICLR 2025] ChartMimic: Evaluating LMM’s Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆117 · Updated last month
- ☆152 · Updated 9 months ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated last year
- mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections. (EMNLP 2022) ☆96 · Updated 2 years ago
- Self-hosted GPT-4V API ☆30 · Updated last year
- Official repository for the A-OKVQA dataset ☆96 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆48 · Updated 5 months ago
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 3 years ago
- ☆78 · Updated last year