PhoebusSi / Thinking-while-Observing
Code for our ACL 2023 paper "Combo of Thinking and Observing for Outside-Knowledge VQA"
☆11 · Updated last year
Related projects:
- Code for the EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" (☆29, updated last year)
- Code for our EMNLP 2022 paper "Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning" (☆12, updated last year)
- [IJCKG 2022] LaKo: Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection (☆25, updated 7 months ago)
- A multimodal retrieval dataset (☆21, updated last year)
- Official implementation for the MM '22 paper (☆11, updated 2 years ago)
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" (☆19, updated last month)
- My commonly used tools (☆46, updated last month)
- Code for our EMNLP 2022 paper "Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA" (☆35, updated last year)
- On the Effectiveness of Parameter-Efficient Fine-Tuning (☆38, updated 10 months ago)
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR '22), Multimodal ICT (Lerner et al., ECIR '23), and Cross-modal Retriev… (☆25, updated 8 months ago)
- MoCLE, the first MLLM with MoE for instruction customization and generalization (https://arxiv.org/abs/2312.12379) (☆28, updated 5 months ago)
- A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models (ACL 2022) (☆40, updated 2 years ago)
- PyTorch implementation of "Debiased Visual Question Answering from Feature and Sample Perspectives" (NeurIPS 2021) (☆22, updated last year)
- Released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models" (☆28, updated last year)
- InfoSeek: A New VQA Benchmark Focused on Visual Info-Seeking Questions (EMNLP 2023) (☆15, updated 3 months ago)
- Implementation for the paper "Unified Multimodal Model with Unlikelihood Training for Visual Dialog" (☆13, updated last year)
- An automatic MLLM hallucination detection framework (☆17, updated 11 months ago)
- Code to evaluate various multimodal large language models using different instructions across multiple multimoda… (☆24, updated 4 months ago)
- Visual and Embodied Concepts evaluation benchmark (☆21, updated 11 months ago)
- Official code for the paper "Model Composition for Multimodal Large Language Models" (☆15, updated 4 months ago)
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" (☆55, updated 2 years ago)
- Non-Linguistic Supervision for Contrastive Learning of Sentence Embeddings (NeurIPS 2022) (☆20, updated last year)
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) (☆25, updated last month)