facebookresearch / Ego-Exo
Code accompanying Ego-Exo: Transferring Visual Representations from Third-person to First-person Videos (CVPR 2021)
☆35 · Jun 8, 2021 · Updated 4 years ago
Alternatives and similar repositories for Ego-Exo
Users who are interested in Ego-Exo are comparing it to the libraries listed below.
- This is an official implementation of video classification for our CVPR 2020 paper "Non-Local Neural Networks With Grouped Bilinear Atten… ☆12 · Jan 30, 2021 · Updated 5 years ago
- The 1st place solution of 2022 Ego4d Natural Language Queries. ☆32 · Sep 5, 2022 · Updated 3 years ago
- [ICCV2023] EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding ☆78 · Oct 6, 2023 · Updated 2 years ago
- [ECCV 2022] Official Pytorch Implementation of the paper "Zero-Shot Temporal Action Detection via Vision-Language Prompting" ☆112 · Aug 3, 2023 · Updated 2 years ago
- "Describing Textures using Natural Language" code and data, ECCV 2020 Oral. ☆17 · Aug 6, 2020 · Updated 5 years ago
- ☆18 · Jul 26, 2023 · Updated 2 years ago
- The Pytorch implementation for "Video-Text Pre-training with Learned Regions" ☆42 · Jul 15, 2022 · Updated 3 years ago
- Code and data for the project "Visually grounded continual learning of compositional semantics" ☆22 · Dec 27, 2022 · Updated 3 years ago
- ☆193 · Oct 22, 2022 · Updated 3 years ago
- Progress-Aware Online Action Segmentation for Egocentric Procedural Task Videos ☆28 · Sep 9, 2024 · Updated last year
- ☆132 · May 30, 2024 · Updated last year
- Video Representation Learning by Recognizing Temporal Transformations. In ECCV, 2020. ☆49 · Mar 18, 2021 · Updated 4 years ago
- Code for our ICCV 2021 paper "OadTR: Online Action Detection with Transformers". ☆97 · Jul 16, 2023 · Updated 2 years ago
- Paper list of human object interaction (HOI) ☆54 · Dec 16, 2020 · Updated 5 years ago
- Panoramic audiovisual salient object segmentation ☆30 · Jul 9, 2023 · Updated 2 years ago
- Code for our ECCV-2020 paper: Self-supervised Video Representation Learning by Pace Prediction ☆100 · May 13, 2021 · Updated 4 years ago
- Online Product Reviews for Affordances ☆24 · Dec 12, 2018 · Updated 7 years ago
- Python scripts to download Assembly101 from Google Drive ☆63 · Oct 10, 2024 · Updated last year
- Dense Regression Network for Video Grounding (CVPR2020) ☆53 · Jan 28, 2021 · Updated 5 years ago
- Cross Modal Retrieval with Querybank Normalisation ☆57 · Nov 21, 2023 · Updated 2 years ago
- Code for the VOST dataset ☆26 · Oct 1, 2023 · Updated 2 years ago
- ☆33 · Mar 22, 2022 · Updated 3 years ago
- Official Code for ACL 2023 Outstanding Paper: World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Languag… ☆33 · Oct 20, 2023 · Updated 2 years ago
- [NeurIPS 2022] Egocentric Video-Language Pretraining ☆254 · May 9, 2024 · Updated last year
- ☆31 · Mar 5, 2025 · Updated 11 months ago
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆35 · Sep 9, 2024 · Updated last year
- Pytorch Implementation of Videos as Space-Time Region Graphs ☆27 · May 30, 2025 · Updated 8 months ago
- [NeurIPS2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆183 · Mar 4, 2024 · Updated last year
- Code for "Compositional Video Synthesis with Action Graphs", Bar & Herzig et al., ICML 2021 ☆32 · Nov 22, 2022 · Updated 3 years ago
- Official implementation of ACMMM'20 paper 'Self-supervised Video Representation Learning Using Inter-intra Contrastive Framework' ☆112 · Mar 22, 2021 · Updated 4 years ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆133 · May 11, 2025 · Updated 9 months ago
- [ICCV2021] Generic Event Boundary Detection: A Benchmark for Event Segmentation ☆75 · Dec 28, 2021 · Updated 4 years ago
- The implementation of CVPR2021 paper Temporal Query Networks for Fine-grained Video Understanding ☆64 · Mar 9, 2022 · Updated 3 years ago
- ☆80 · Sep 4, 2022 · Updated 3 years ago
- Implementation of paper 'Helping Hands: An Object-Aware Ego-Centric Video Recognition Model' ☆33 · Nov 7, 2023 · Updated 2 years ago
- A curated list of resources about long-context in large-language models and video understanding. ☆31 · Aug 8, 2023 · Updated 2 years ago
- Code for ECCV 2020 paper - LEMMA: A Multi-view Dataset for LEarning Multi-agent Multi-task Activities ☆30 · Apr 8, 2021 · Updated 4 years ago
- ☆78 · Jan 5, 2024 · Updated 2 years ago
- Tracking with Human-Intent Reasoning ☆74 · Nov 4, 2024 · Updated last year