fpv-iplab / Differentiable-Task-Graph-Learning
Code for the paper "Differentiable Task Graph Learning: Procedural Activity Representation and Online Mistake Detection from Egocentric Videos" [NeurIPS (spotlight), 2024]
☆20 · Updated last year
Alternatives and similar repositories for Differentiable-Task-Graph-Learning
Users interested in Differentiable-Task-Graph-Learning are comparing it to the repositories listed below.
- The official PyTorch implementation of the IEEE/CVF Computer Vision and Pattern Recognition (CVPR) '24 paper PREGO: online mistake detect…☆31 · Updated 7 months ago
- Python scripts to download Assembly101 from Google Drive☆61 · Updated last year
- [BMVC 2022, IJCV 2023, Best Student Paper, Spotlight] Official code for the paper "In the Eye of Transformer: Global-Local Correlation for…☆30 · Updated 11 months ago
- Progress-Aware Online Action Segmentation for Egocentric Procedural Task Videos☆28 · Updated last year
- 🔍 Explore Egocentric Vision: research, data, challenges, real-world apps. Stay updated & contribute to our dynamic repository! Work-in-p…☆124 · Updated last year
- Data release for Step Differences in Instructional Video (CVPR 2024)☆14 · Updated last year
- (ECCV 2024) Official repository of the paper "EgoExo-Fitness: Towards Egocentric and Exocentric Full-Body Action Understanding"☆32 · Updated 10 months ago
- The official implementation of Error Detection in Egocentric Procedural Task Videos☆21 · Updated 4 months ago
- [CVPR 2023] LOGO: A Long-Form Video Dataset for Group Action Quality Assessment☆46 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024)☆35 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" (ICCV 2023)☆102 · Updated last year
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision"☆29 · Updated last year
- (CVPR 2023) Official implementation of the paper "Weakly Supervised Video Representation Learning with Unaligned Text for Sequential Videos…☆31 · Updated last year
- Annotations for the Mistake Detection benchmark of Assembly101☆10 · Updated 2 years ago
- Code for Diffusion Action Segmentation (ICCV 2023)☆73 · Updated 2 years ago
- Annotations for the public release of the EPIC-KITCHENS-100 dataset☆163 · Updated 3 years ago
- HT-Step is a large-scale article-grounding dataset of temporal step annotations on how-to videos☆24 · Updated last year
- [ICCV 2021] Group-aware Contrastive Regression for Action Quality Assessment☆80 · Updated 4 years ago
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align…☆19 · Updated last year
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day"☆138 · Updated 5 months ago
- Official repository for the CVPR 2024 paper "FACT: Frame-Action Cross-Attention Temporal Modeling for Efficient Fully-Supervised Action Segmentatio…☆84 · Updated 2 weeks ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for the EgoVis Challenges at CVPR 2024☆132 · Updated 8 months ago
- PyTorch code for "Frame-wise Action Representations for Long Videos via Sequence Contrastive Learning" (CVPR 2022)☆95 · Updated 2 years ago
- [ECCV 2024 Oral] ActionVOS: Actions as Prompts for Video Object Segmentation☆31 · Updated last year
- [ECCV 2024, Oral, Best Paper Finalist] Official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation…☆39 · Updated 11 months ago
- MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge (ICCV 2023)☆30 · Updated 2 years ago
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model"☆33 · Updated 2 years ago
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos" (CVPR 2024)☆53 · Updated last year
- Sports-QA: A Large-Scale Video Question Answering Benchmark for Complex and Professional Sports☆40 · Updated last month
- COM Kitchens: An Unedited Overhead-view Video Dataset as a Vision-Language Benchmark☆14 · Updated last year