marslanm / Multimodality-Representation-Learning
This repository provides a comprehensive collection of research papers on multimodal representation learning, all of which are cited and discussed in the recently accepted survey: https://dl.acm.org/doi/abs/10.1145/3617833
☆71 · Updated last year
Alternatives and similar repositories for Multimodality-Representation-Learning:
Users interested in Multimodality-Representation-Learning are comparing it to the repositories listed below
- A curated list of vision-and-language pre-training (VLP). :-) ☆57 · Updated 2 years ago
- Hate-CLIPper: Multimodal Hateful Meme Classification with Explicit Cross-modal Interaction of CLIP features - Accepted at EMNLP 2022 Workshop ☆47 · Updated 2 years ago
- [Paper] [AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆128 · Updated 7 months ago
- SimVLM: Simple Visual Language Model Pretraining with Weak Supervision ☆36 · Updated 2 years ago
- [Paper list] Awesome paper list of multimodal dialog, including methods, datasets, and metrics ☆39 · Updated 3 weeks ago
- Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" (published at ICLR 2023) ☆58 · Updated last year
- A PyTorch implementation of "Multimodal Few-Shot Learning with Frozen Language Models", using OPT ☆44 · Updated 2 years ago
- Official implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated last month
- [ICCVW 2023] Mapping Memes to Words for Multimodal Hateful Meme Classification ☆24 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆141 · Updated 9 months ago
- [ICLR 2023] Code repository for the paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Space for Multi-Modal Retrieval" ☆50 · Updated 7 months ago
- InstructionGPT-4 ☆38 · Updated last year
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022) ☆59 · Updated 2 years ago
- ☆22 · Updated 6 months ago
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆94 · Updated 5 months ago
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆39 · Updated last year
- MixGen: A New Multi-Modal Data Augmentation ☆120 · Updated 2 years ago
- Code for studying OpenAI's CLIP explainability ☆28 · Updated 3 years ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆144 · Updated 2 years ago
- [CVPR'24 Highlight] Implementation of "Causal-CoG: A Causal-Effect Look at Context Generation for Boosting Multi-modal Language Models" ☆13 · Updated 5 months ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆291 · Updated last year
- Official repo for the FoodieQA paper (EMNLP 2024) ☆15 · Updated 2 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆78 · Updated 9 months ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆204 · Updated 2 years ago
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆153 · Updated last year
- PyTorch implementation of LIMoE ☆53 · Updated 10 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated last year
- ☆38 · Updated last year
- This repo contains code and instructions for baselines in the VLUE benchmark ☆41 · Updated 2 years ago
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering ☆92 · Updated last year