facebookresearch / selective-vqa_ood
Implementation for the CVPR 2023 paper "Improving Selective Visual Question Answering by Learning from Your Peers" (https://arxiv.org/abs/2306.08751)
☆25 · Updated 2 years ago
Alternatives and similar repositories for selective-vqa_ood
Users interested in selective-vqa_ood are comparing it to the repositories listed below.
- ☆87 · Updated last year
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch ☆102 · Updated 2 years ago
- ☆65 · Updated 2 years ago
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models" (ICLR 2024) ☆106 · Updated last year
- Official Pytorch Implementation of Self-emerging Token Labeling ☆35 · Updated last year
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated 2 years ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆186 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- Implementation of MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆22 · Updated 3 weeks ago
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆34 · Updated last year
- Data-Efficient Multimodal Fusion on a Single GPU ☆67 · Updated last year
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆35 · Updated 6 months ago
- [ACL 2024 Findings & ICLR 2024 WS] An Evaluator VLM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specific… ☆78 · Updated last year
- [CVPR 2025] Official PyTorch implementation of MaskSub, "Masking meets Supervision: A Strong Learning Alliance" ☆45 · Updated 7 months ago
- An official PyTorch implementation for CLIPPR ☆29 · Updated 2 years ago
- Code and Models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ☆60 · Updated 2 years ago
- REACT (CVPR 2023, Highlight 2.5%) ☆138 · Updated 2 years ago
- [ECCV 2024][ICCV 2023] Official PyTorch implementation of SeiT++ and SeiT ☆55 · Updated last year
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆79 · Updated last year
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆138 · Updated 2 years ago
- [CVPR 2023] RO-ViT: "Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers" ☆18 · Updated 2 years ago
- (ACL 2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆36 · Updated last year
- Research work on multimodal cognitive AI ☆66 · Updated 4 months ago
- FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions ☆55 · Updated last year
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆35 · Updated last year
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" ☆91 · Updated last year
- Code for "Pretrained Language Models as Visual Planners for Human Assistance" ☆61 · Updated 2 years ago
- How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges ☆30 · Updated 2 years ago
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆54 · Updated 2 years ago