jmiemirza / Meta-Prompting
Meta-Prompting for Automating Zero-shot Visual Recognition with LLMs (ECCV 2024)
☆19 · Updated last year
Alternatives and similar repositories for Meta-Prompting
Users interested in Meta-Prompting are comparing it to the repositories listed below.
- ☆53 · Updated 10 months ago
- Visual self-questioning for large vision-language assistants. ☆45 · Updated 3 months ago
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆73 · Updated 2 years ago
- An efficient tuning method for VLMs ☆80 · Updated last year
- This repository houses the code for the paper "The Neglected Tails in Vision-Language Models" ☆29 · Updated 6 months ago
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆43 · Updated last year
- ☆44 · Updated last year
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding"… ☆59 · Updated last year
- [ACL 2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models ☆78 · Updated 5 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆29 · Updated last year
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆77 · Updated 5 months ago
- Code for Label Propagation for Zero-shot Classification with Vision-Language Models (CVPR 2024) ☆43 · Updated last year
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆38 · Updated 6 months ago
- This repository contains the code of our paper 'Skip \n: A simple method to reduce hallucination in Large Vision-Language Models'. ☆14 · Updated last year
- [CVPR 2025] PyTorch implementation of paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" ☆32 · Updated 4 months ago
- Official Implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning", ICCV 2023 ☆54 · Updated 2 years ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆53 · Updated 7 months ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆37 · Updated 7 months ago
- [IJCV 2025] https://arxiv.org/abs/2304.04521 ☆15 · Updated 10 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆49 · Updated last year
- Repository for the paper: Teaching Structured Vision & Language Concepts to Vision & Language Models ☆47 · Updated 2 years ago
- Official implementation of CVPR 2024 paper "Retrieval-Augmented Open-Vocabulary Object Detection". ☆44 · Updated last year
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆23 · Updated 6 months ago
- [ICLR 2023] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆82 · Updated 9 months ago
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated last month
- ☆27 · Updated last year
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆29 · Updated last year
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆50 · Updated last year