reddyav1 / RoCoG-v2
RoCoG-v2 (Robot Control Gestures) is a dataset intended to support the study of synthetic-to-real and ground-to-air video domain adaptation.
☆16 · Updated last year
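Since RoCoG-v2 targets synthetic-to-real and ground-to-air video domain adaptation, a typical experiment pairs labeled source-domain clips (e.g., synthetic or ground-view) with unlabeled target-domain clips (real or aerial-view). The sketch below shows one way such pairing could be wired up in PyTorch; the directory layout, the pre-extracted `.pt` clip tensors, and the class-folder structure are assumptions for illustration, not the repository's actual loading code.

```python
# Minimal sketch (not the official RoCoG-v2 loader): pair labeled source-domain
# clips with randomly sampled unlabeled target-domain clips for a generic
# domain-adaptation training loop. Paths, file format, and layout are hypothetical.
import os
import random
from typing import List, Tuple

import torch
from torch.utils.data import Dataset


class GestureDomainAdaptationDataset(Dataset):
    """Yields (source_clip, source_label, target_clip) triples.

    Assumes `source_dir` and `target_dir` each contain one subfolder per gesture
    class, holding clip tensors of shape (T, C, H, W) saved with torch.save.
    """

    def __init__(self, source_dir: str, target_dir: str):
        self.source: List[Tuple[str, int]] = self._index(source_dir)
        self.target: List[str] = [path for path, _ in self._index(target_dir)]

    @staticmethod
    def _index(root: str) -> List[Tuple[str, int]]:
        # Map sorted class subfolders to integer labels and collect clip files.
        classes = sorted(d for d in os.listdir(root)
                         if os.path.isdir(os.path.join(root, d)))
        samples = []
        for label, cls in enumerate(classes):
            cls_dir = os.path.join(root, cls)
            for fname in sorted(os.listdir(cls_dir)):
                if fname.endswith(".pt"):
                    samples.append((os.path.join(cls_dir, fname), label))
        return samples

    def __len__(self) -> int:
        return len(self.source)

    def __getitem__(self, idx: int):
        src_path, label = self.source[idx]
        tgt_path = random.choice(self.target)   # unlabeled target clip
        src_clip = torch.load(src_path)         # (T, C, H, W)
        tgt_clip = torch.load(tgt_path)
        return src_clip, label, tgt_clip
```

A loader like this would feed any standard unsupervised domain-adaptation objective (e.g., adversarial feature alignment or pseudo-labeling) on top of a video backbone; the specific method is left to the training code.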
Alternatives and similar repositories for RoCoG-v2
Users interested in RoCoG-v2 are comparing it to the repositories listed below.
- ☆73 · Updated last year
- ☆79 · Updated 3 years ago
- Python scripts to download Assembly101 from Google Drive ☆51 · Updated last year
- This repo contains the code for the recipe of the winning entry to the Ego4d VQ2D challenge at CVPR 2022. ☆41 · Updated 2 years ago
- Code implementation for our ECCV 2022 paper titled "My View is the Best View: Procedure Learning from Egocentric Videos" ☆30 · Updated last year
- Future Transformer for Long-term Action Anticipation (CVPR 2022) ☆49 · Updated 2 years ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated last year
- 🔍 Explore Egocentric Vision: research, data, challenges, real-world apps. Stay updated & contribute to our dynamic repository! Work-in-p… ☆119 · Updated 10 months ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆129 · Updated 4 months ago
- Simple PyTorch Dataset for the EPIC-Kitchens-55 and EPIC-Kitchens-100 that handles frames and features (rgb, optical flow, and objects) f… ☆24 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- Code implementation for paper titled "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Updated last year
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆44 · Updated 6 months ago
- [CVPR2022] Animal Kingdom: A Large and Diverse Dataset for Animal Behavior Understanding ☆148 · Updated 10 months ago
- [CVPR 2022 Oral] Versatile Multi-Modal Pre-Training for Human-Centric Perception ☆122 · Updated 3 years ago
- CVPR 2024 "Instance Tracking in 3D Scenes from Egocentric Videos" ☆19 · Updated last year
- A curated list of egocentric (first-person) vision and related area resources ☆296 · Updated 11 months ago
- [BMVC2022, IJCV2023, Best Student Paper, Spotlight] Official codes for the paper "In the Eye of Transformer: Global-Local Correlation for… ☆28 · Updated 7 months ago
- ☆127 · Updated last year
- Code and models for the Action Recognition benchmark of Assembly101 ☆11 · Updated 2 years ago
- [ECCV2024] The official implementation of "Listen to Look into the Future: Audio-Visual Egocentric Gaze Anticipation". ☆13 · Updated 7 months ago
- The official project website of "Ske2Grid: Skeleton-to-Grid Representation Learning for Action Recognition" (The paper of Ske2Grid is pub… ☆19 · Updated 2 years ago
- [ICCV 2023] Data-Free Class-Incremental Hand Gesture Recognition ☆16 · Updated 2 years ago
- For Ego4D VQ3D Task ☆22 · Updated last year
- [CVPR 2022 (oral)] Bongard-HOI for benchmarking few-shot visual reasoning ☆72 · Updated 2 years ago
- Code for ECCV2022 "Real-time Online Video Detection with Temporal Smoothing Transformers" ☆111 · Updated last month
- Code for the paper "Detecting Any Human-Object Interaction Relationship: Universal HOI Detector with Spatial Prompt Learning on Foundatio… ☆28 · Updated last year
- The MECCANO Dataset: official repository in which we provide code and models. ☆32 · Updated 2 years ago
- Bidirectional Mapping between Action Physical-Semantic Space ☆31 · Updated last month
- Video + CLIP Baseline for Ego4D Long Term Action Anticipation Challenge (CVPR 2022) ☆15 · Updated 3 years ago