showlab / afformer
Affordance Grounding from Demonstration Video to Target Image (CVPR 2023)
☆43 · Updated 8 months ago
Alternatives and similar repositories for afformer:
Users interested in afformer are comparing it to the repositories listed below.
- Official PyTorch implementation of "Learning Affordance Grounding from Exocentric Images" (CVPR 2022) ☆55 · Updated 5 months ago
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆32 · Updated 7 months ago
- [ECCV 2022] AssistQ: Affordance-centric Question-driven Task Completion for Egocentric Assistant ☆20 · Updated last year
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆32 · Updated last year
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆26 · Updated 11 months ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆61 · Updated last year
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align… ☆17 · Updated last year
- This is the official repository of OCL (ICCV 2023). ☆19 · Updated last year
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆56 · Updated 7 months ago
- Latent Motion Token as the Bridging Language for Robot Manipulation ☆81 · Updated 3 weeks ago
- Python scripts to download Assembly101 from Google Drive ☆40 · Updated 6 months ago
- Code for the ECCV 2022 paper "Mining Cross-Person Cues for Body-Part Interactiveness Learning in HOI Detection" ☆36 · Updated 2 years ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆36 · Updated last year
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆42 · Updated last year
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated 6 months ago
- PyTorch implementation of RoLD: Robot Latent Diffusion for Multi-Task Policy Modeling (MMM 2025 Best Paper) ☆17 · Updated 8 months ago
- [ECCV 2024, Oral, Best Paper Finalist] Official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation … ☆37 · Updated last month
- Data pre-processing and training code on Open-X-Embodiment with PyTorch ☆11 · Updated 2 months ago
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆40 · Updated last year
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆46 · Updated 2 months ago
- Official implementation of CAPEAM (ICCV'23) ☆12 · Updated 4 months ago
- [CVPR 2025] Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning ☆26 · Updated 2 weeks ago
- Code and dataset for the CVPRW paper "Where did I leave my keys? — Episodic-Memory-Based Question Answering on Egocentric Videos" ☆25 · Updated last year
- Official implementation of the paper "Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model" ☆58 · Updated last year
- Bidirectional Mapping between Action Physical-Semantic Space ☆31 · Updated 7 months ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆35 · Updated this week