A curated list of egocentric (first-person) vision and related area resources
★312 · Updated Oct 14, 2024
Alternatives and similar repositories for awesome-egocentric-vision
Users interested in awesome-egocentric-vision are comparing it to the repositories listed below.
- Explore Egocentric Vision: research, data, challenges, real-world apps. Stay updated & contribute to our dynamic repository! Work-in-p… — ★125 · Updated Nov 23, 2024
- [NeurIPS 2022] Egocentric Video-Language Pretraining — ★256 · Updated May 9, 2024
- N-EPIC-Kitchens: The event-based camera extension of the large-scale EPIC-Kitchens dataset — ★23 · Updated May 10, 2022
- Annotations for the public release of the EPIC-KITCHENS-100 dataset — ★167 · Updated Aug 1, 2022
- Code implementation for our ICPR 2020 paper "Improving Word Recognition using Multiple Hypotheses and Deep Embeddings" — ★21 · Updated May 21, 2021
- For the Ego4D VQ3D task — ★22 · Updated Jan 9, 2024
- A repo for processing the raw hand-object detections to produce releasable pickles, plus a library for using them — ★41 · Updated Oct 26, 2024
- Code for the Joint Part-of-Speech Embedding model — ★13 · Updated Feb 16, 2023
- PyTorch implementation of "EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition" (ICCV 2019) — ★112 · Updated Jan 25, 2021
- Integrating Human Gaze into Attention for Egocentric Activity Recognition (WACV 2021) — ★25 · Updated Jul 20, 2023
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) — ★34 · Updated Jun 12, 2023
- [ECCV 2024, Oral, Best Paper Finalist] Official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… — ★39 · Updated Feb 24, 2025
- ★57 · Updated Apr 28, 2025
- [CVPR 2022] Sequential Voting with Relational Box Fields for Active Object Detection — ★10 · Updated Jun 19, 2022
- Code implementation for our ECCV 2022 paper "My View is the Best View: Procedure Learning from Egocentric Videos" — ★34 · Updated Feb 5, 2024
- ★132 · Updated May 30, 2024
- [CVPR 2024 Champions][ICLR 2025] Solutions for the EgoVis Challenges at CVPR 2024 — ★133 · Updated May 11, 2025
- [CVPR 2023] Egocentric Audio-Visual Object Localization — ★26 · Updated Jan 6, 2024
- Official implementation of "A Backpack Full of Skills: Egocentric Video Understanding with Diverse Task Perspectives", accepted at CVPR 2… — ★24 · Updated Jun 13, 2024
- Support library for the MaskRCNN masks extracted on EPIC-KITCHENS-100 — ★14 · Updated Dec 1, 2020
- A curated list of Egocentric Action Understanding resources — ★46 · Updated Nov 26, 2025
- Code accompanying "Ego-Exo: Transferring Visual Representations from Third-person to First-person Videos" (CVPR 2021) — ★35 · Updated Jun 8, 2021
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos — ★71 · Updated Jan 29, 2024
- ★78 · Updated Jan 5, 2024
- Annotations for the EPIC-KITCHENS-55 dataset — ★155 · Updated Mar 17, 2021
- Code release for "Learning Video Representations from Large Language Models" — ★534 · Updated Oct 1, 2023
- ★33 · Updated Dec 4, 2025
- ★35 · Updated Mar 22, 2022
- ★19 · Updated Sep 10, 2021
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" — ★43 · Updated Apr 17, 2023
- Repository for the ICCV 2021 paper "Partial Video Domain Adaptation with Partial Adversarial Temporal Attentive Network" — ★17 · Updated Mar 30, 2023
- ★25 · Updated Nov 22, 2019
- Scene-aware Egocentric 3D Human Pose Estimation — ★32 · Updated May 17, 2025
- EPIC-KITCHENS-55 baselines for Action Recognition — ★75 · Updated Jul 14, 2020
- EventEgo3D: 3D Human Motion Capture from Egocentric Event Streams [CVPR 2024] — ★32 · Updated Jul 23, 2025
- Shapley values for assessing the importance of each frame in a video — ★17 · Updated Mar 1, 2021
- Code for the paper: Antonino Furnari and Giovanni Maria Farinella, "What Would You Expect? Anticipating Egocentric Actions with Rolling-Un…" — ★134 · Updated Aug 23, 2023
- Official code for the future vehicle localization paper, implemented in Keras — ★17 · Updated Feb 1, 2021
- Official repo of our ECCV 2022 paper "GIMO: Gaze-Informed Human Motion Prediction in Context" — ★86 · Updated Dec 16, 2022