epic-kitchens / epic-kitchens-100-annotations
Annotations for the public release of the EPIC-KITCHENS-100 dataset
☆144 · Updated 2 years ago
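The annotations in this repository are released as plain CSV (and pickle) files, so they can be inspected directly with pandas. Below is a minimal sketch of loading the training split and summarising its verb labels; the file name EPIC_100_train.csv and the column names used here are assumptions based on the public release, so check the repository README for the exact schema.

```python
# Minimal sketch: inspect the EPIC-KITCHENS-100 training annotations with pandas.
# File and column names (EPIC_100_train.csv, video_id, narration, verb, noun,
# verb_class) are assumptions based on the public release; check the repository
# README for the exact schema before relying on them.
import pandas as pd

train = pd.read_csv("EPIC_100_train.csv")  # one row per annotated action segment

# Narrations and their parsed verb/noun labels.
print(train[["video_id", "narration", "verb", "noun"]].head())

# Distribution of verb classes across the training split.
print(train["verb_class"].value_counts().head(10))
```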
Alternatives and similar repositories for epic-kitchens-100-annotations:
Users interested in epic-kitchens-100-annotations are comparing it to the repositories listed below.
- Download scripts for EPIC-KITCHENS ☆136 · Updated 8 months ago
- ☆69 · Updated last year
- [NeurIPS 2022] Egocentric Video-Language Pretraining ☆238 · Updated 11 months ago
- ☆28 · Updated 3 years ago
- ☆116 · Updated 10 months ago
- Code release for ICCV 2021 paper "Anticipative Video Transformer" ☆153 · Updated 3 years ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV, 2023] ☆97 · Updated 9 months ago
- Video narrator written in Python/GTK using vlc-lib ☆25 · Updated 2 years ago
- The implementation of CVPR 2021 paper "Temporal Query Networks for Fine-grained Video Understanding" ☆62 · Updated 3 years ago
- Python scripts to download Assembly101 from Google Drive ☆41 · Updated 6 months ago
- Simple PyTorch Dataset for the EPIC-Kitchens-55 and EPIC-Kitchens-100 that handles frames and features (rgb, optical flow, and objects) f… ☆24 · Updated 2 years ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆128 · Updated 8 months ago
- EPIC-Kitchens-100 Action Recognition baselines: TSN, TRN, TSM ☆32 · Updated 3 years ago
- [CVPR 2022] Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos ☆98 · Updated 2 years ago
- A curated list of egocentric (first-person) vision and related area resources ☆281 · Updated 6 months ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆127 · Updated last month
- Code repository for the paper: "Something-Else: Compositional Action Recognition with Spatial-Temporal Interaction Networks" ☆146 · Updated last year
- Code implementation for our ECCV 2022 paper titled "My View is the Best View: Procedure Learning from Egocentric Videos" ☆28 · Updated last year
- [CVPR'22 Oral] Temporal Alignment Networks for Long-term Video. Tengda Han, Weidi Xie, Andrew Zisserman. ☆116 · Updated last year
- Pytorch code for Frame-wise Action Representations for Long Videos via Sequence Contrastive Learning, CVPR 2022 ☆90 · Updated last year
- S3D Text-Video model trained on HowTo100M using MIL-NCE ☆195 · Updated 4 years ago
- ☆76 · Updated 2 years ago
- Code for NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆32 · Updated 2 years ago
- Code for the paper: Antonino Furnari and Giovanni Maria Farinella. What Would You Expect? Anticipating Egocentric Actions with Rolling-Un… ☆132 · Updated last year
- ☆11 · Updated 2 years ago
- [ECCV 2022] Official Pytorch implementation of the paper "Zero-Shot Temporal Action Detection via Vision-Language Prompting" ☆104 · Updated last year
- ☆84 · Updated last year
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers ☆179 · Updated last year
- [ICCV 2021] Target Adaptive Context Aggregation for Video Scene Graph Generation ☆58 · Updated 2 years ago
- Home Action Genome: Cooperative Contrastive Action Understanding ☆20 · Updated 3 years ago