soCzech / LookForTheChange
Code for the paper "Look for the Change" (CVPR 2022) ☆35, updated 2 years ago
Related projects
Alternatives and complementary repositories for LookForTheChange
- AssistSR: Task-oriented Video Segment Retrieval for Personal AI Assistant (Findings of EMNLP 2022) ☆23, updated last year
- RareAct: a video dataset of unusual interactions ☆32, updated 4 years ago
- Code accompanying "EGO-TOPO: Environment Affordances from Egocentric Video" (CVPR 2020) ☆29, updated 2 years ago
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆46, updated last year
- Official code for the NeurIPS 2020 paper "Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D" ☆26, updated last year
- RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning (ICLR 2022) ☆64, updated 2 years ago
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆28, updated last year
- A repo for processing raw hand-object detections into releasable pickles, plus a library for using them ☆35, updated 3 weeks ago
- Official repository of the NeurIPS 2021 paper "PTR" ☆33, updated 2 years ago
- PyTorch implementation of Dynamic Concept Learner (ICLR 2021) ☆37, updated 4 months ago
- Visualisation of VISOR segmentations with annotations and relations ☆21, updated 2 years ago
- Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" (CVPR 2023) ☆51, updated last year
- Code for "Compositional Video Synthesis with Action Graphs", Bar & Herzig et al., ICML 2021 ☆30, updated last year
- Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?" ☆19, updated last month
- A framework for training multi-modal deep learning models on unlabelled video data by forcing the network to learn invariances… ☆45, updated 3 years ago
- Code for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆20, updated 7 months ago
- Code for the paper "Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning" ☆23, updated last year
- Code for the ECCV 2020 paper "LEMMA: A Multi-view Dataset for LEarning Multi-agent Multi-task Activities" ☆28, updated 3 years ago
- Bongard-HOI: a benchmark for few-shot visual reasoning (CVPR 2022, oral) ☆64, updated 2 years ago
- Code for "Learning to Learn Language from Narrated Video" ☆33, updated last year
- Learning to Compose Visual Relations (NeurIPS 2021 Spotlight) ☆101, updated last year
- A dataset for multi-object multi-actor activity parsing ☆34, updated last year