MILVLG / prophet
Implementation of CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering".
☆274 · Updated 2 years ago
Alternatives and similar repositories for prophet
Users interested in prophet are comparing it to the libraries listed below.
- Code for VPGTrans: Transfer Visual Prompt Generator across LLMs (VL-LLaMA, VL-Vicuna). ☆272 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU. ☆350 · Updated last year
- Natural language guided image captioning. ☆85 · Updated last year
- Update 2020. ☆75 · Updated 3 years ago
- This is the official repository for Retrieval Augmented Visual Question Answering. ☆228 · Updated 5 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models". ☆207 · Updated last year
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning. ☆280 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning. ☆135 · Updated last year
- The official repository of "Video assistant towards large language model makes everything easy". ☆225 · Updated 5 months ago
- SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation. ☆110 · Updated last year
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge. ☆141 · Updated 10 months ago
- [NeurIPS 2023] Text data, code and pre-trained models for the paper "Improving CLIP Training with Language Rewrites". ☆280 · Updated last year
- [NeurIPS 2023] Official implementation of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models". ☆520 · Updated last year
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models. ☆286 · Updated 3 months ago
- Recent Advances in Vision and Language Pre-training (VLP). ☆293 · Updated last year
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023). ☆162 · Updated 9 months ago
- mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections (EMNLP 2022). ☆92 · Updated 2 years ago
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering. ☆95 · Updated 2 years ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering. ☆185 · Updated last year
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks. ☆297 · Updated last year
- An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA, AAAI 2022 (Oral). ☆85 · Updated 3 years ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024). ☆151 · Updated 8 months ago
- SVIT: Scaling up Visual Instruction Tuning. ☆162 · Updated 11 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents. ☆314 · Updated last year
- [AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations. ☆140 · Updated 11 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆342 · Updated 4 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model. ☆261 · Updated 11 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024). ☆302 · Updated 4 months ago
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning (CVPR 2022). ☆262 · Updated 8 months ago