soCzech / MultiTaskObjectStates
Code for the paper "Multi-Task Learning of Object States and State-Modifying Actions from Web Videos", published in TPAMI
☆11 · Updated last year
Alternatives and similar repositories for MultiTaskObjectStates
Users interested in MultiTaskObjectStates are comparing it to the repositories listed below.
- A repo for processing the raw hand object detections to produce releasable pickles + library for using these ☆39 · Updated last year
- Code for ECCV 2020 paper - LEMMA: A Multi-view Dataset for LEarning Multi-agent Multi-task Activities ☆30 · Updated 4 years ago
- Code accompanying EGO-TOPO: Environment Affordances from Egocentric Video (CVPR 2020) ☆31 · Updated 3 years ago
- ChangeIt: a dataset of more than 2600 hours of video with state-changing actions, published at CVPR 2022 ☆11 · Updated 3 years ago
- Code for the Look for the Change paper published at CVPR 2022 ☆36 · Updated 3 years ago
- Learning interaction hotspots from egocentric video ☆52 · Updated 3 years ago
- Code for the paper: F. Ragusa, G. M. Farinella, A. Furnari. StillFast: An End-to-End Approach for Short-Term Object Interaction Anticipation ☆13 · Updated 2 years ago
- EPIC-Kitchens-100 Action Recognition baselines: TSN, TRN, TSM ☆32 · Updated 3 years ago
- RareAct: A video dataset of unusual interactions ☆33 · Updated 5 years ago
- Online Product Reviews for Affordances ☆23 · Updated 7 years ago
- Code for the paper Joint Discovery of Object States and Manipulation Actions, ICCV 2017 ☆14 · Updated 7 years ago
- [ICLR 2022] RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning ☆63 · Updated 3 years ago
- Code for "Compositional Video Synthesis with Action Graphs", Bar & Herzig et al., ICML 2021 ☆32 · Updated 3 years ago
- Learning Long-term Visual Dynamics with Region Proposal Interaction Networks (ICLR 2021) ☆113 · Updated 3 years ago
- As a part of the HAKE project (HAKE-Object), code for SymNet (CVPR'20 and TPAMI'21). ☆53 · Updated 3 years ago
- Official code implementation of the paper AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos? ☆26 · Updated last year
- As part of the HAKE project (HAKE-3D), code for the CVPR 2020 paper "Detailed 2D-3D Joint Representation for Human-Object Interaction". ☆103 · Updated 3 years ago
- ☆80 · Updated 3 years ago
- ☆25 · Updated 6 years ago
- Video narrator written in Python/GTK using vlc-lib ☆25 · Updated 3 years ago
- ☆40 · Updated 3 years ago
- Official code for the NeurIPS 2020 paper "Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D" ☆31 · Updated last year
- Code for the NeurIPS 2022 Datasets and Benchmarks paper - EgoTaskQA: Understanding Human Tasks in Egocentric Videos ☆36 · Updated 2 years ago
- Use the Force Luke! Learning to Predict Physical Forces by Simulating Effects [CVPR 2020] (https://arxiv.org/pdf/2003.12045.pdf) ☆74 · Updated 2 years ago
- ☆76 · Updated last year
- ☆28 · Updated 6 years ago
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆50 · Updated 11 months ago
- [CVPR 2022 (oral)] Bongard-HOI for benchmarking few-shot visual reasoning ☆73 · Updated 3 years ago
- ☆93 · Updated 3 years ago
- Python scripts to download Assembly101 from Google Drive ☆59 · Updated last year