lhc1224 / Cross-View-AG
Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022
☆66 · Updated 9 months ago
Alternatives and similar repositories for Cross-View-AG
Users that are interested in Cross-View-AG are comparing it to the libraries listed below
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆40 · Updated 2 years ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆44 · Updated last year
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆42 · Updated last year
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆68 · Updated last year
- This is the official repository of OCL (ICCV 2023). ☆24 · Updated last year
- ☆35 · Updated last year
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆58 · Updated 6 months ago
- ☆55 · Updated 8 months ago
- ☆26 · Updated last month
- [IROS 2023] Open-Vocabulary Affordance Detection in 3D Point Clouds ☆72 · Updated 11 months ago
- [ICRA 2024] Language-Conditioned Affordance-Pose Detection in 3D Point Clouds ☆42 · Updated 7 months ago
- [ICCV 2025] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆86 · Updated 2 months ago
- Code for affordance-r1 ☆24 · Updated this week
- ☆33 · Updated 11 months ago
- [CVPR 2025🎉] Official implementation of the paper "Point-Level Visual Affordance Guided Retrieval and Adaptation for Cluttered Garments Man… ☆38 · Updated 5 months ago
- OpenScan: A Benchmark for Generalized Open-Vocabulary 3D Scene Understanding ☆18 · Updated 3 weeks ago
- Data pre-processing and training code on Open-X-Embodiment with PyTorch ☆11 · Updated 7 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆35 · Updated 8 months ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆135 · Updated 2 weeks ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated last year
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆85 · Updated last year
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆122 · Updated 3 months ago
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆33 · Updated 11 months ago
- Official implementation of "SUGAR: Pre-training 3D Visual Representations for Robotics" (CVPR'24). ☆41 · Updated 2 months ago
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆43 · Updated this week
- Code for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Updated last year
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆83 · Updated 2 months ago
- [WIP] Code for LangToMo ☆16 · Updated 2 months ago
- Official repo for AGNOSTOS, a cross-task manipulation benchmark, and X-ICM, a cross-task in-context manipulation (VLA) method ☆36 · Updated 2 months ago
- This is the official code repo for GLOVER and GLOVER++. ☆24 · Updated 3 weeks ago