One-Shot Open Affordance Learning with Foundation Models (CVPR 2024)
☆48 · Jul 30, 2024 · Updated last year
Alternatives and similar repositories for OOAL
Users that are interested in OOAL are comparing it to the libraries listed below
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆47 · Apr 28, 2023 · Updated 2 years ago
- [IROS 2023] Open-Vocabulary Affordance Detection in 3D Point Clouds ☆82 · Sep 4, 2024 · Updated last year
- An implementation of AffordanceLLM (incomplete version) ☆18 · Oct 17, 2024 · Updated last year
- [ICLR 2025 Oral] Official Implementation for "Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Un… ☆21 · Oct 24, 2024 · Updated last year
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆74 · Nov 1, 2024 · Updated last year
- [ICRA 2024] Language-Conditioned Affordance-Pose Detection in 3D Point Clouds ☆50 · Jan 10, 2025 · Updated last year
- [AAAI 2024] Weakly Supervised Multimodal Affordance Grounding for Egocentric Images ☆13 · Nov 10, 2024 · Updated last year
- Code for Stable Control Representations ☆26 · Apr 5, 2025 · Updated 10 months ago
- Imitation learning using a real robot (currently PR2 on ROS1 only) ☆11 · Mar 9, 2025 · Updated 11 months ago
- Reimplementation of Facebook's DINOv2 in JAX. Inference (with pretrained weights) only; training is unsupported. ☆12 · Jun 25, 2024 · Updated last year
- [ICLR 2025] Official code of "Segment any 3D Object with Language" ☆67 · Oct 11, 2025 · Updated 4 months ago
- Subtask-Aware Visual Reward Learning from Segmented Demonstrations (ICLR 2025) ☆18 · Apr 11, 2025 · Updated 10 months ago
- ☆11 · Jul 19, 2023 · Updated 2 years ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆71 · Jan 29, 2024 · Updated 2 years ago
- Official PyTorch implementation of EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views ☆30 · Sep 26, 2024 · Updated last year
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆58 · Jun 7, 2025 · Updated 8 months ago
- ☆48 · Jul 4, 2025 · Updated 7 months ago
- [ECCV'24] 3D Reconstruction of Objects in Hands without Real World 3D Supervision ☆17 · Feb 3, 2025 · Updated last year
- Vision-Language-Action Optimization with Trajectory Ensemble Voting ☆25 · Feb 18, 2026 · Updated 2 weeks ago
- [NeurIPS 2024] Understanding Multi-Granularity for Open-Vocabulary Part Segmentation ☆60 · Dec 29, 2024 · Updated last year
- [ICRA 2022] Implementation of Affordance Learning from Play for Sample-Efficient Policy Learning ☆28 · Apr 19, 2022 · Updated 3 years ago
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation ☆92 · Jun 24, 2024 · Updated last year
- [ECCV 2024] Language-Driven 6-DoF Grasp Detection Using Negative Prompt Guidance ☆40 · Sep 7, 2024 · Updated last year
- ☆14 · Feb 13, 2025 · Updated last year
- ☆33 · Dec 4, 2025 · Updated 2 months ago
- [CVPR 2024] The official implementation of "Sculpting Holistic 3D Representation in Contrastive Language-Image-3D Pre-training" ☆36 · Apr 21, 2024 · Updated last year
- Repo for Bring Your Own Vision-Language-Action (VLA) model, arXiv 2024 ☆36 · Jan 22, 2025 · Updated last year
- ☆44 · Aug 8, 2024 · Updated last year
- Official implementation, data generation tools, and benchmark datasets for our research on synthetic data fo… ☆15 · Feb 4, 2026 · Updated 3 weeks ago
- The PyTorch implementation of Grounding 3D Object Affordance from 2D Interactions in Images ☆135 · Nov 17, 2023 · Updated 2 years ago
- [CVPR 2025] "DepthCues: Evaluating Monocular Depth Perception in Large Vision Models", Duolikun Danier, Mehmet Aygün, Changjian Li, Hakan… ☆21 · Mar 17, 2025 · Updated 11 months ago
- ☆19 · Jul 7, 2024 · Updated last year
- Official implementation of the WACV 2025 paper "3D Part Segmentation via Geometric Aggregation of 2D Visual Features" ☆25 · Jun 8, 2025 · Updated 8 months ago
- ☆16 · May 26, 2023 · Updated 2 years ago
- ☆19 · Apr 2, 2024 · Updated last year
- ☆19 · Feb 6, 2025 · Updated last year
- [CVPR 2024] Dataset and Code for "Language-driven Grasp Detection" ☆48 · Feb 9, 2025 · Updated last year
- Official code for the NeurIPS'23 paper "3D-Aware Visual Question Answering about Parts, Poses and Occlusions" ☆19 · Oct 17, 2024 · Updated last year
- [CVPR 2025] GigaHands: A Massive Annotated Dataset of Bimanual Hand Activities ☆111 · Sep 11, 2025 · Updated 5 months ago