lhc1224 / Cross-View-AG
Official PyTorch implementation of "Learning Affordance Grounding from Exocentric Images" (CVPR 2022)
☆71 · Updated last year
Alternatives and similar repositories for Cross-View-AG
Users interested in Cross-View-AG are comparing it to the repositories listed below.
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆45 · Updated last year
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆46 · Updated 2 years ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆71 · Updated last year
- The official repository of OCL (ICCV 2023) ☆25 · Updated last year
- ☆62 · Updated last year
- ☆44 · Updated last year
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆159 · Updated 3 months ago
- Code & data for "RoboGround: Robotic Manipulation with Grounded Vision-Language Priors" (CVPR 2025) ☆37 · Updated 7 months ago
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆75 · Updated last month
- ☆43 · Updated 6 months ago
- [CVPR 2025] GREAT: Geometry-Intention Collaborative Inference for Open-Vocabulary 3D Object Affordance Grounding ☆34 · Updated 5 months ago
- [IROS 2023] Open-Vocabulary Affordance Detection in 3D Point Clouds ☆82 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆41 · Updated 4 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆91 · Updated 7 months ago
- ☆33 · Updated last year
- [ICCV 2025] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆95 · Updated 6 months ago
- Data pre-processing and training code on Open-X-Embodiment with PyTorch ☆11 · Updated 11 months ago
- Official implementation of "SUGAR: Pre-training 3D Visual Representations for Robotics" (CVPR'24) ☆45 · Updated 7 months ago
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆47 · Updated last month
- [WIP] Code for LangToMo ☆20 · Updated 6 months ago
- OpenScan: A Benchmark for Generalized Open-Vocabulary 3D Scene Understanding ☆19 · Updated last month
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆69 · Updated last month
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated 2 years ago
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆120 · Updated last year
- Official repo for AGNOSTOS, a cross-task manipulation benchmark, and X-ICM, a cross-task in-context manipulation (VLA) method ☆53 · Updated last month
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆203 · Updated 4 months ago
- [CoRL 2024] Official repo for Contrastive Imitation Learning for Language-guided Multi-Task Robotic Manipulation ☆32 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆35 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆114 · Updated 9 months ago