Yushi-Hu / PromptCap
Natural-language-guided image captioning
☆74 · Updated 7 months ago
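PromptCap takes a natural-language prompt (typically the downstream question) together with an image and produces a caption focused on what the prompt asks about, so a text-only QA model can answer from the caption alone. A minimal usage sketch, assuming the pip-installable `promptcap` package and the `tifa-benchmark/promptcap-coco-vqa` checkpoint described in the project README; the exact API may differ across versions:

```python
# Minimal sketch of prompt-guided captioning with PromptCap.
# Assumes `pip install promptcap` and the HuggingFace checkpoint
# "tifa-benchmark/promptcap-coco-vqa"; API names follow the project
# README and may change between releases.
import torch
from promptcap import PromptCap

model = PromptCap("tifa-benchmark/promptcap-coco-vqa")
if torch.cuda.is_available():
    model.cuda()

# The prompt steers the caption toward question-relevant image content.
prompt = ("please describe this image according to the given question: "
          "what piece of clothing is this boy putting on?")
caption = model.caption(prompt, "glove_boy.jpeg")  # local image path
print(caption)
```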
Related projects:
- Official repository for the A-OKVQA dataset ☆59 · Updated 4 months ago
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) ☆120 · Updated last month
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆166 · Updated 5 months ago
- Source code for the EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆46 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆131 · Updated last year
- SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation ☆88 · Updated 7 months ago
- Official PyTorch implementation of the CVPR 2022 paper "Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for …" ☆60 · Updated last year
- (ACL 2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆34 · Updated last month
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆131 · Updated 3 weeks ago
- [ICLR 2023] Code repo for the ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa…" ☆41 · Updated 2 months ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆127 · Updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆63 · Updated 9 months ago
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" ☆112 · Updated 2 years ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆128 · Updated 4 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆159 · Updated 3 months ago
- Implementation for the CVPR 2022 paper "Injecting Semantic Concepts into End-to-End Image Captioning" ☆41 · Updated 2 years ago
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆49 · Updated 2 months ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆202 · Updated last year
- Implementation of LaTr: Layout-aware Transformer for Scene-text VQA, a novel multimodal architecture for Scene Text Visual Question Answer… ☆51 · Updated last year
- MMICL: a state-of-the-art VLM with multi-modal in-context learning ability (PKU) ☆37 · Updated 11 months ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆85 · Updated last year
- [ICCV 2021] Official implementation of the paper "TRAR: Routing the Attention Spans in Transformers for Visual Question Answering" ☆64 · Updated 2 years ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆175 · Updated 8 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆27 · Updated 5 months ago
- [CVPR 2022] A large-scale public benchmark dataset for video question-answering, especially about evidence and commonsense reasoning. The… ☆50 · Updated 2 months ago