FudanDISC / weakly-supervised-mVLP
Implementation of our ACL 2023 paper: "Unifying Cross-Lingual and Cross-Modal Modeling Towards Weakly Supervised Multilingual Vision-Language Pre-training"
☆19 · Updated last year
Alternatives and similar repositories for weakly-supervised-mVLP:
Users interested in weakly-supervised-mVLP are comparing it to the libraries listed below.
- Code for the ACL 2023 paper Scene Graph as Pivoting: Inference-time Image-free Unsupervised Multimodal Machine Translation with Visual Sc… ☆13 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆69 · Updated 4 months ago
- 😎 A curated list of awesome LMM hallucination papers, methods & resources ☆150 · Updated last year
- Paper, dataset, and code list for multimodal dialogue ☆20 · Updated 2 months ago
- EMNLP 2023 Papers: Explore cutting-edge research from EMNLP 2023, the premier conference for advancing empirical methods in natural langu… ☆105 · Updated 10 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆92 · Updated last year
- [Paper list] Awesome paper list of multimodal dialog, including methods, datasets, and metrics ☆39 · Updated 2 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆161 · Updated last year
- MoCLE (first MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆34 · Updated 11 months ago
- [ICLR 2023] Code repo for the ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆50 · Updated 8 months ago
- Official code for the WWW'24 paper "Towards Explainable Harmful Meme Detection through Multimodal Debate between Large Language Models" ☆16 · Updated 10 months ago
- [ACL 2024] FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model ☆14 · Updated 7 months ago
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23), and Cross-modal Retriev… ☆34 · Updated 3 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆44 · Updated 5 months ago
- MMoE: Multimodal Mixture-of-Experts (EMNLP 2024) ☆11 · Updated 4 months ago
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- Code for the ACL 2022 paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" ☆60 · Updated 3 years ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆45 · Updated last year
- Official code for the ECCV 2024 paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" ☆137 · Updated 5 months ago
- PyTorch implementation of MVP: a multi-stage vision-language pre-training framework ☆11 · Updated 2 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆136 · Updated last year
- [EMNLP 2023] InfoSeek: A new VQA benchmark focused on visual info-seeking questions ☆20 · Updated 9 months ago
- Code for the EMNLP 2022 paper "Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA" ☆38 · Updated 2 years ago
- [Findings of ACL 2023] Improving Contrastive Learning of Sentence Embeddings from AI Feedback ☆39 · Updated last year