leonnnop / VAR
[CVPR 2022] Visual Abductive Reasoning
☆123 · Updated last year
Alternatives and similar repositories for VAR
Users interested in VAR are comparing it to the libraries listed below.
- ☆88 · Updated 3 years ago
- ☆65 · Updated 2 years ago
- [CVPR 2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training" ☆151 · Updated 2 years ago
- [SIGIR 2022] CenterCLIP: Token Clustering for Efficient Text-Video Retrieval. Also, a text-video retrieval toolbox based on CLIP + fast p… ☆133 · Updated 3 years ago
- [NeurIPS 2022 Spotlight] RLIP: Relational Language-Image Pre-training and a series of other methods to solve HOI detection and Scene Grap… ☆78 · Updated last year
- [CVPR 2022 Oral] Bongard-HOI for benchmarking few-shot visual reasoning ☆72 · Updated 3 years ago
- [CVPR 2022] A large-scale public benchmark dataset for video question-answering, especially about evidence and commonsense reasoning. The… ☆73 · Updated 4 months ago
- [NeurIPS 2022] Embracing Consistency: A One-Stage Approach for Spatio-Temporal Video Grounding ☆53 · Updated last year
- Code for the ECCV 2022 (Oral) paper "Fine-Grained Scene Graph Generation with Data Transfer" ☆103 · Updated 2 years ago
- Official implementation of the NeurIPS 2021 paper "Referring Transformer: A One-step Approach to Multi-task Visual Grounding" ☆69 · Updated 3 years ago
- Code accompanying "Ego-Exo: Transferring Visual Representations from Third-person to First-person Videos" (CVPR 2021) ☆34 · Updated 4 years ago
- Official code for the ICLR 2023 paper "Compositional Prompt Tuning with Motion Cues for Open-vocabulary Video Relation Detection" ☆43 · Updated last year
- [ICCV 2021] Official code for "Learning to Generate Scene Graph from Natural Language Supervision" ☆101 · Updated 2 years ago
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers ☆190 · Updated 2 years ago
- The 1st-place solution to the 2022 Ego4D Natural Language Queries challenge ☆32 · Updated 3 years ago
- A Unified Framework for Video-Language Understanding ☆60 · Updated 2 years ago
- A reading list of papers about Visual Grounding ☆32 · Updated 3 years ago
- [CVPR 2022] Pseudo-Q: Generating Pseudo Language Queries for Visual Grounding ☆152 · Updated last year
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" ☆55 · Updated 2 years ago
- Source code for the EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆48 · Updated 3 years ago
- Compress conventional Vision-Language Pre-training data ☆52 · Updated 2 years ago
- A series of works (ECCV 2020, CVPR 2021, CVPR 2021, ECCV 2022) on Compositional Learning for Human-Object Interaction Exploration ☆80 · Updated 2 years ago
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆179 · Updated last year
- Winning solution for the Generic Event Boundary Captioning task in the LOVEU Challenge (CVPR 2023 workshop) ☆30 · Updated last year
- ☆33 · Updated last year
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆89 · Updated 2 years ago
- A PyTorch implementation of EmpiricalMVM ☆41 · Updated last year
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model"