MILVLG/prophet
Implementation of the CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering".
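The core idea of the paper above is to have a vanilla VQA model produce "answer heuristics" (top answer candidates with confidences) that are embedded into a few-shot prompt for a frozen LLM. A minimal sketch of that prompt construction, assuming illustrative field names and a hypothetical `build_prophet_prompt` helper (not the repo's actual API):

```python
# Hedged sketch of Prophet-style prompting for knowledge-based VQA:
# a VQA model's top answer candidates are formatted as heuristics
# inside a few-shot text prompt for an LLM. All names are illustrative.

def build_prophet_prompt(context, question, candidates, examples=()):
    """Format a prompt with answer-candidate heuristics.

    candidates: list of (answer, confidence) pairs from a VQA model.
    examples:   optional few-shot (context, question, candidates, answer)
                tuples prepended as in-context demonstrations.
    """
    def block(ctx, q, cands, ans=None):
        cand_str = ", ".join(f"{a} ({c:.2f})" for a, c in cands)
        return "\n".join([
            f"Context: {ctx}",
            f"Question: {q}",
            f"Candidates: {cand_str}",
            f"Answer: {ans if ans is not None else ''}",
        ])

    parts = ["Answer the question according to the context and candidates."]
    for ex in examples:
        parts.append(block(*ex))
    parts.append(block(context, question, candidates))  # test sample last
    return "\n\n".join(parts)

prompt = build_prophet_prompt(
    context="A man is surfing on a large wave.",
    question="What sport is this?",
    candidates=[("surfing", 0.92), ("skateboarding", 0.04)],
)
print(prompt)
```

The completed prompt would then be sent to an LLM, whose generated continuation after the final "Answer:" is taken as the prediction.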
Related projects:
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU
- Code for VPGTrans: Transferring a Visual Prompt Generator across LLMs (VL-LLaMA, VL-Vicuna)
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
- The official repository for Retrieval-Augmented Visual Question Answering
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models"
- [CVPR 2024] A benchmark for evaluating multimodal LLMs using multiple-choice questions
- Natural-language-guided image captioning
- Recent Advances in Vision and Language Pre-training (VLP)
- The official repository of "Video assistant towards large language model makes everything easy"
- The code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation"
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023)
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022)
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023)
- mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections (EMNLP 2022)
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?"
- SVIT: Scaling up Visual Instruction Tuning
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024)
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments"
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites"
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning (CVPR 2022)
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning
- Official Repository of ChatCaptioner