StanfordVL / atp-video-language
Official repo for the CVPR 2022 (Oral) paper: Revisiting the "Video" in Video-Language Understanding. Contains code for the Atemporal Probe (ATP).
☆50 · Updated last year
Alternatives and similar repositories for atp-video-language
Users interested in atp-video-language are comparing it to the repositories listed below.
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆72 · Updated 11 months ago
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS'23 Spotlight) ☆37 · Updated 2 years ago
- ☆34 · Updated last year
- ☆74 · Updated 6 months ago
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) ☆28 · Updated last year
- [EMNLP'22] Weakly-Supervised Temporal Article Grounding ☆14 · Updated last year
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆101 · Updated 4 months ago
- [ECCV'22] LocVTP: Video-Text Pre-training for Temporal Localization ☆39 · Updated 2 years ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆134 · Updated 2 years ago
- [ACL 2023] CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding ☆31 · Updated last year
- A PyTorch implementation of EmpiricalMVM ☆41 · Updated last year
- Official PyTorch repository for "Knowing Where to Focus: Event-aware Transformer for Video Grounding" (ICCV 2023) ☆50 · Updated last year
- Video Graph Transformer for Video Question Answering (ECCV'22) ☆47 · Updated last year
- ☆31 · Updated 3 years ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆61 · Updated 8 months ago
- A reading list of papers about Visual Grounding ☆31 · Updated 2 years ago
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) ☆155 · Updated 10 months ago
- Contrastive Video Question Answering via Video Graph Transformer (IEEE T-PAMI'23) ☆19 · Updated last year
- [NeurIPS 2022] Embracing Consistency: A One-Stage Approach for Spatio-Temporal Video Grounding ☆50 · Updated last year
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" ☆115 · Updated 2 years ago
- A Unified Framework for Video-Language Understanding ☆57 · Updated last year
- Official PyTorch implementation of "Explore-And-Match: Bridging Proposal-Based and Proposal-Free With Transformer for Sentence Grounding …" ☆42 · Updated 2 years ago
- The champion solution for the Ego4D Natural Language Queries Challenge at CVPR 2023 ☆17 · Updated last year
- [ICCV 2023] Official PyTorch implementation of "Localizing Moments in Long Video Via Multimodal Guidance" ☆19 · Updated 8 months ago
- Code and dataset for the CVPRW paper "Where did I leave my keys? — Episodic-Memory-Based Question Answering on Egocentric Videos" ☆25 · Updated last year
- Winner solution to the Generic Event Boundary Captioning task in the LOVEU Challenge (CVPR 2023 workshop) ☆29 · Updated last year
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆43 · Updated last year
- Source code for the EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆48 · Updated 2 years ago
- Code for "Zero-shot Natural Language Video Localization" (ICCV 2021, Oral) ☆47 · Updated 2 years ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆185 · Updated last year