facebookresearch / selective-vqa_ood
Implementation for the CVPR 2023 paper "Improving Selective Visual Question Answering by Learning from Your Peers" (https://arxiv.org/abs/2306.08751)
☆25 · Updated 2 years ago
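Since the paper's topic is selective prediction (letting a VQA model abstain when it is unsure), the sketch below illustrates the generic coverage/risk trade-off that setting is evaluated on. It is a minimal, hypothetical example and not code from this repository; the `Prediction` fields, confidence scores, and threshold values are invented for illustration.

```python
# Minimal, generic sketch of selective prediction (not this repository's method):
# the model answers only when its confidence clears a threshold, and we report
# coverage (fraction of questions answered) and risk (error rate on those answered).
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Prediction:
    answer: str        # hypothetical model answer
    confidence: float  # hypothetical confidence score in [0, 1]
    target: str        # ground-truth answer


def coverage_and_risk(preds: List[Prediction], threshold: float) -> Tuple[float, float]:
    """Answer only when confidence >= threshold; return (coverage, risk)."""
    answered = [p for p in preds if p.confidence >= threshold]
    coverage = len(answered) / len(preds) if preds else 0.0
    if not answered:
        return coverage, 0.0
    errors = sum(p.answer != p.target for p in answered)
    return coverage, errors / len(answered)


if __name__ == "__main__":
    # Toy, made-up predictions purely to show how the trade-off shifts with the threshold.
    toy = [
        Prediction("cat", 0.92, "cat"),
        Prediction("red", 0.40, "blue"),
        Prediction("two", 0.75, "two"),
    ]
    for t in (0.0, 0.5, 0.9):
        cov, risk = coverage_and_risk(toy, t)
        print(f"threshold={t:.1f}  coverage={cov:.2f}  risk={risk:.2f}")
```

Raising the threshold lowers coverage but typically lowers risk as well; selective VQA methods aim to improve that trade-off curve.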
Alternatives and similar repositories for selective-vqa_ood
Users interested in selective-vqa_ood are comparing it to the repositories listed below.
- ☆87 · Updated 2 years ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆102 · Updated last year
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated 2 years ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆188 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- ☆65 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges ☆30 · Updated 2 years ago
- Repository for the ICLR 2024 paper "TiC-CLIP: Continual Training of CLIP Models" ☆110 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆103 · Updated 2 years ago
- Research work on multimodal cognitive AI ☆68 · Updated last month
- FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions ☆55 · Updated last year
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆54 · Updated 2 years ago
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆44 · Updated 9 months ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆141 · Updated last month
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆79 · Updated last year
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" ☆93 · Updated last year
- Implementation of PaLI-3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆146 · Updated 2 weeks ago
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters" ☆69 · Updated 9 months ago
- Official implementation of "Active Image Indexing" ☆60 · Updated 2 years ago
- Data-Efficient Multimodal Fusion on a Single GPU ☆68 · Updated last year
- Official repo for StableLLAVA ☆95 · Updated 2 years ago
- Official codebase for the paper "CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos" (ICCV 2023) ☆52 · Updated 2 years ago
- [ACL 2024 Findings & ICLR 2024 WS] An evaluator VLM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specific… ☆79 · Updated last year
- Code for "Pretrained Language Models as Visual Planners for Human Assistance" ☆61 · Updated 2 years ago
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆36 · Updated last year
- [CVPR 2025] Official PyTorch implementation of MaskSub, "Masking meets Supervision: A Strong Learning Alliance" ☆45 · Updated 10 months ago
- [CVPR 2023] RO-ViT: Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers ☆18 · Updated 2 years ago
- Matryoshka Multimodal Models ☆121 · Updated last year
- ☆36 · Updated 2 years ago