cvlab-columbia / hyperfuture
Code for the paper Learning the Predictability of the Future (CVPR 2021)
☆168 · Updated last year
Alternatives and similar repositories for hyperfuture
Users interested in hyperfuture are comparing it to the repositories listed below.
- Code release for the ICCV 2021 paper "Anticipative Video Transformer" ☆152 · Updated 3 years ago
- ☆69 · Updated last year
- ☆54 · Updated 3 years ago
- ☆84 · Updated last year
- Download scripts for EPIC-KITCHENS ☆139 · Updated 10 months ago
- CATER: A diagnostic dataset for Compositional Actions and TEmporal Reasoning ☆105 · Updated 4 years ago
- Learning Long-term Visual Dynamics with Region Proposal Interaction Networks (ICLR 2021) ☆113 · Updated 3 years ago
- Code for our ECCV 2022 paper "My View is the Best View: Procedure Learning from Egocentric Videos" ☆28 · Updated last year
- Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification ☆133 · Updated 4 years ago
- Code for the "Look for the Change" paper published at CVPR 2022 ☆36 · Updated 2 years ago
- ☆166 · Updated 2 years ago
- Annotations for the public release of the EPIC-KITCHENS-100 dataset ☆148 · Updated 2 years ago
- ☆70 · Updated last year
- [NeurIPS 2021 Spotlight] Learning to Compose Visual Relations ☆101 · Updated 2 years ago
- PyTorch implementation of the TCC loss used in the paper "Temporal Cycle-Consistency Learning" ☆26 · Updated 4 years ago
- Code repository for the paper "Something-Else: Compositional Action Recognition with Spatial-Temporal Interaction Networks" ☆147 · Updated last year
- Repository for "Space-Time Correspondence as a Contrastive Random Walk" (NeurIPS 2020) ☆271 · Updated 3 years ago
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions (https://arxiv.org/pdf/2010.08539.pdf) ☆39 · Updated 4 years ago
- [ECCV'20 Spotlight] Memory-augmented Dense Predictive Coding for Video Representation Learning. Tengda Han, Weidi Xie, Andrew Zisserman. ☆165 · Updated 4 years ago
- Implementation of "Labelling unlabelled videos from scratch with multi-modal self-supervision", which learns clusters… ☆116 · Updated 4 years ago
- ☆73 · Updated 3 years ago
- ☆67 · Updated 2 years ago
- EgoCom: A Multi-person Multi-modal Egocentric Communications Dataset ☆57 · Updated 4 years ago
- Visualizing the learned space-time attention using Attention Rollout ☆37 · Updated 3 years ago
- ☆30 · Updated 3 years ago
- RareAct: A video dataset of unusual interactions ☆32 · Updated 4 years ago
- Notes on preparing for coding interviews during my PhD ☆73 · Updated 3 years ago
- Code for the paper: Antonino Furnari and Giovanni Maria Farinella. What Would You Expect? Anticipating Egocentric Actions with Rolling-Un…