inquire-benchmark / INQUIRE
This repo contains the evaluation code for the INQUIRE benchmark
☆53 · Updated 7 months ago
Alternatives and similar repositories for INQUIRE
Users interested in INQUIRE are comparing it to the repositories listed below.
- This is the repository for the BioCLIP model and the TreeOfLife-10M dataset [CVPR'24 Oral, Best Student Paper]. ☆214 · Updated last month
- Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral). ☆120 · Updated last year
- Official repository of the paper "Subobject-level Image Tokenization" (ICML-25). ☆80 · Updated last month
- Official This-Is-My Dataset, published at CVPR 2023. ☆16 · Updated last year
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024). ☆55 · Updated 10 months ago
- Official implementation of the ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long …". ☆91 · Updated last year
- ☆51 · Updated 4 months ago
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024). ☆29 · Updated 11 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆28 · Updated last year
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆33 · Updated 2 years ago
- [CVPR24] Official implementation of GEM (Grounding Everything Module). ☆127 · Updated 4 months ago
- An open-source implementation of CLIP (with TULIP support). ☆162 · Updated 2 months ago
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs. ☆100 · Updated 4 months ago
- [ECCV 2024] Official release of SILC: Improving vision-language pretraining with self-distillation. ☆45 · Updated 10 months ago
- Library implementation of "No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations". ☆38 · Updated 9 months ago
- Official PyTorch implementation of the paper "CoVR: Learning Composed Video Retrieval from Web Video Captions". ☆109 · Updated 4 months ago
- Code implementation of the ICCV 2025 paper "On Large Multimodal Models as Open-World Image Classifiers". ☆22 · Updated last week
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 8 months ago
- [ACL 2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models. ☆77 · Updated 2 months ago
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024). ☆33 · Updated 9 months ago
- [CVPRW-25 MMFM] Official repository of the paper "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite fo…". ☆49 · Updated 11 months ago
- AlignCLIP: Improving Cross-Modal Alignment in CLIP (ICLR 2025). ☆43 · Updated 5 months ago
- Code for "Scaling Language-Free Visual Representation Learning" (WebSSL). ☆246 · Updated 3 months ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality. ☆82 · Updated last year
- Code and datasets for "What's "up" with vision-language models? Investigating their struggle with spatial reasoning". ☆56 · Updated last year
- Sapsucker Woods 60 Audiovisual Dataset. ☆15 · Updated 2 years ago
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP". ☆29 · Updated 5 months ago
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" (https://arxiv.org/abs/2303.13496). ☆91 · Updated 3 months ago
- Official repository for the MMFM challenge. ☆25 · Updated last year
- ☆51 · Updated 6 months ago