furkanbiten / object-bias
Let there be a clock on the beach - WACV 2022
☆15 · Updated 3 years ago
Alternatives and similar repositories for object-bias
Users who are interested in object-bias are comparing it to the repositories listed below.
- CLIP4IDC: CLIP for Image Difference Captioning (AACL 2022) ☆34 · Updated 2 years ago
- Official implementation of the Composed Image Retrieval using Pretrained LANguage Transformers (CIRPLANT) | ICCV 2021 - Image Retrieval o… ☆40 · Updated last year
- The PyTorch implementation for "Video-Text Pre-training with Learned Regions" ☆42 · Updated 3 years ago
- CVPR 2022 (Oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 3 years ago
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆87 · Updated 2 years ago
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago
- PyTorch version of DeCEMBERT: Learning from Noisy Instructional Videos via Dense Captions and Entropy Minimization (NAACL 2021) ☆17 · Updated 2 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated 2 years ago
- Research code for CVPR 2022 paper "EMScore: Evaluating Video Captioning via Coarse-Grained and Fine-Grained Embedding Matching" ☆26 · Updated 2 years ago
- This repository provides data for the VAW dataset as described in the CVPR 2021 paper titled "Learning to Predict Visual Attributes in th… ☆67 · Updated 3 years ago
- Scene Text Aware Cross Modal Retrieval (StacMR) ☆24 · Updated 4 years ago
- Data Release for VALUE Benchmark ☆31 · Updated 3 years ago
- ☆26 · Updated 3 years ago
- Ask&Confirm: Active Detail Enriching for Cross-Modal Retrieval with Partial Query (ICCV 2021) ☆20 · Updated 3 years ago
- A Unified Framework for Video-Language Understanding ☆59 · Updated 2 years ago
- [ECCV'22 Poster] Explicit Image Caption Editing ☆22 · Updated 2 years ago
- ☆31 · Updated 4 years ago
- Code for the paper "Point and Ask: Incorporating Pointing into Visual Question Answering" ☆19 · Updated 2 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆34 · Updated 3 years ago
- Human-like Controllable Image Captioning with Verb-specific Semantic Roles ☆36 · Updated 3 years ago
- [CVPR 2022] The code for our paper "Object-aware Video-language Pre-training for Retrieval" ☆62 · Updated 3 years ago
- Language-Agnostic Visual-Semantic Embeddings (ICCV'19) ☆22 · Updated 5 years ago
- Repository for the paper "Teaching Structured Vision & Language Concepts to Vision & Language Models" ☆46 · Updated last year
- Official repository of ICCV 2021 - Image Retrieval on Real-life Images with Pre-trained Vision-and-Language Models ☆120 · Updated 4 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 10 months ago
- Official code for the paper "Spatially Aware Multimodal Transformers for TextVQA", published at ECCV 2020 ☆64 · Updated 4 years ago
- ☆30 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- [AAAI 2021] Confidence-aware Non-repetitive Multimodal Transformers for TextCaps ☆24 · Updated 2 years ago
- Source code for the EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆48 · Updated 2 years ago