LHL3341 / ContextBLIP
☆11 · Updated 9 months ago
Alternatives and similar repositories for ContextBLIP:
Users interested in ContextBLIP are comparing it to the repositories listed below.
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention · ☆22 · Updated 7 months ago
- ☆35 · Updated 2 years ago
- [CVPR 2024 Highlight] Official implementation of Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… · ☆38 · Updated 2 months ago
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) · ☆67 · Updated last month
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval · ☆76 · Updated 10 months ago
- A simple PyTorch implementation of a baseline based on CLIP for image-text matching · ☆13 · Updated last year
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning · ☆43 · Updated 6 months ago
- The official implementation of the paper "Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval", accepted by NeurIPS… · ☆23 · Updated 9 months ago
- Composed Video Retrieval · ☆50 · Updated 10 months ago
- Official PyTorch implementation of "RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language… · ☆10 · Updated 2 months ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models · ☆40 · Updated 7 months ago
- ☆22 · Updated last year
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models · ☆38 · Updated 11 months ago
- ☆14 · Updated last year
- [TPAMI 2024] PyTorch code for our paper "Context Disentangling and Prototype Inheriting for Robust Visual Grounding" · ☆17 · Updated 4 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" · ☆32 · Updated 11 months ago
- 🔎 Official code for our paper "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation" · ☆19 · Updated 2 months ago
- Official implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning", ICCV 2023 · ☆53 · Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension · ☆47 · Updated 10 months ago
- Official repository of the paper "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs" · ☆45 · Updated 6 months ago
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23) · ☆33 · Updated 11 months ago
- A Large Multimodal Model for Pixel-Level Visual Grounding in Videos · ☆41 · Updated 2 months ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding · ☆45 · Updated 7 months ago
- Context-I2W: Mapping Images to Context-dependent Words for Accurate Zero-Shot Composed Image Retrieval [AAAI 2024 Oral] · ☆46 · Updated 3 months ago
- Source code for our AAAI 2024 paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval" · ☆35 · Updated 11 months ago
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" · ☆49 · Updated 6 months ago
- ☆20 · Updated 10 months ago
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models" · ☆52 · Updated last month