marslanm / Multimodality-Representation-Learning
This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which are cited and discussed in the recently accepted survey: https://dl.acm.org/doi/abs/10.1145/3617833.
☆73 · Updated last year
Alternatives and similar repositories for Multimodality-Representation-Learning:
Users interested in Multimodality-Representation-Learning are comparing it to the repositories listed below
- This repository contains code to evaluate various multimodal large language models using different instructions across multiple multimoda… ☆27 · Updated last month
- A curated list of vision-and-language pre-training (VLP). :-) ☆58 · Updated 2 years ago
- Hate-CLIPper: Multimodal Hateful Meme Classification with Explicit Cross-modal Interaction of CLIP features - Accepted at EMNLP 2022 Work… ☆52 · Updated 3 weeks ago
- [Paper] [AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆137 · Updated 10 months ago
- SimVLM: Simple Visual Language Model Pretraining with Weak Supervision ☆36 · Updated 2 years ago
- [ICLR 2023] This is the code repo for our ICLR '23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆50 · Updated 10 months ago
- ☆22 · Updated 9 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆146 · Updated last year
- A PyTorch implementation of Multimodal Few-Shot Learning with Frozen Language Models with OPT. ☆43 · Updated 2 years ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆37 · Updated last year
- Reading list for Multimodal Large Language Models ☆68 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated last year
- MixGen: A New Multi-Modal Data Augmentation ☆122 · Updated 2 years ago
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 ☆59 · Updated 3 years ago
- The official code for the paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆73 · Updated 5 months ago
- [NAACL 2022] MCSE: Multimodal Contrastive Learning of Sentence Embeddings ☆55 · Updated 10 months ago
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" ☆59 · Updated last year
- Official implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated 4 months ago
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆86 · Updated last year
- Code for the paper "Borrowing Treasures from Neighbors: In-Context Learning for Multimodal Learning with Missing Modalities and Data Scarcity… ☆12 · Updated last year
- Official repository for the A-OKVQA dataset ☆84 · Updated last year
- InstructionGPT-4 ☆39 · Updated last year
- Official repo for the FoodieQA paper (EMNLP 2024) ☆16 · Updated 5 months ago
- ☆75 · Updated last month
- Awesome list of vision-language prompt papers ☆46 · Updated last year
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆68 · Updated 9 months ago
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning ☆74 · Updated last month
- [Paperlist] Awesome paper list of multimodal dialog, including methods, datasets, and metrics ☆39 · Updated 3 months ago
- [EMNLP 2023] InfoSeek: A New VQA Benchmark Focusing on Visual Info-Seeking Questions ☆22 · Updated 11 months ago
- ☆102 · Updated 3 years ago