jmiemirza / Meta-Prompting
Meta-Prompting for Automating Zero-shot Visual Recognition with LLMs (ECCV 2024)
☆19 · Updated last year
Alternatives and similar repositories for Meta-Prompting
Users interested in Meta-Prompting are comparing it to the repositories listed below.
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆44 · Updated 9 months ago
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆75 · Updated 2 years ago
- ☆54 · Updated last year
- Official implementation of the CVPR 2024 paper "Retrieval-Augmented Open-Vocabulary Object Detection". ☆44 · Updated last year
- An efficient tuning method for VLMs ☆80 · Updated last year
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆44 · Updated 2 years ago
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆42 · Updated 3 months ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆39 · Updated 9 months ago
- [NeurIPS 2023] Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector" ☆37 · Updated last year
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆50 · Updated 7 months ago
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆20 · Updated 3 weeks ago
- [ACM Multimedia 2025] The official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆82 · Updated 11 months ago
- Visual self-questioning for large vision-language assistants. ☆45 · Updated 6 months ago
- ☆27 · Updated 2 years ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆51 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆54 · Updated 9 months ago
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆28 · Updated last year
- ☆46 · Updated last year
- Code for Label Propagation for Zero-shot Classification with Vision-Language Models (CVPR 2024) ☆44 · Updated last year
- [IJCV 2025] https://arxiv.org/abs/2304.04521 ☆15 · Updated last year
- Code release for "Understanding Bias in Large-Scale Visual Datasets" ☆22 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆46 · Updated last year
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆24 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆25 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆55 · Updated last year
- [CVPR 2025] PyTorch implementation of the paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" ☆31 · Updated 6 months ago
- Towards a Unified View on Visual Parameter-Efficient Transfer Learning ☆26 · Updated 3 years ago
- [CVPR 2024 Highlight] Official implementation of Transferable Visual Prompting, from the paper "Exploring the Transferability of Visual Prompt… ☆46 · Updated last year
- [CVPR 2023] Improving Zero-shot Generalization and Robustness of Multi-modal Models ☆35 · Updated 2 years ago
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection ☆21 · Updated last year