showlab / afformer
Affordance Grounding from Demonstration Video to Target Image (CVPR 2023)
☆44 · Updated last year
Alternatives and similar repositories for afformer
Users interested in afformer are comparing it to the repositories listed below.
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆69 · Updated last year
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆35 · Updated last year
- [ECCV 2022] AssistQ: Affordance-centric Question-driven Task Completion for Egocentric Assistant ☆21 · Updated 5 months ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆71 · Updated last year
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆35 · Updated 2 years ago
- Python scripts to download Assembly101 from Google Drive ☆57 · Updated last year
- This is the official repository of OCL (ICCV 2023). ☆25 · Updated last year
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- [CVPR 2024 Champions][ICLR 2025] Solutions for the EgoVis Challenges in CVPR 2024 ☆132 · Updated 6 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆45 · Updated 2 years ago
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align…" ☆19 · Updated last year
- ☆60 · Updated 11 months ago
- [CVPR 2022] Sequential Voting with Relational Box Fields for Active Object Detection ☆10 · Updated 3 years ago
- Bidirectional Mapping between Action Physical-Semantic Space ☆32 · Updated 2 months ago
- ☆13 · Updated 2 years ago
- Official Implementation of CAPEAM (ICCV'23) ☆14 · Updated 11 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated last year
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆71 · Updated last week
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆42 · Updated 2 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆74 · Updated 3 months ago
- [ICLR'25] Do Egocentric Video-Language Models Truly Understand Hand-Object Interactions? ☆10 · Updated 7 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆88 · Updated 5 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- ☆19 · Updated 2 years ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆46 · Updated 2 years ago
- (ECCV 2024) Official repository of the paper "EgoExo-Fitness: Towards Egocentric and Exocentric Full-Body Action Understanding" ☆31 · Updated 7 months ago
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation…" ☆39 · Updated 9 months ago
- [WIP] Code for LangToMo