silicx / ObjectConceptLearning
This is the official repository of OCL (ICCV 2023).
☆25 · Updated last year
Alternatives and similar repositories for ObjectConceptLearning
Users that are interested in ObjectConceptLearning are comparing it to the libraries listed below
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆68 · Updated 10 months ago
- ☆35 · Updated 2 months ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆154 · Updated 3 weeks ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆41 · Updated 2 years ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆71 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆35 · Updated last week
- ☆55 · Updated 9 months ago
- [CVPR 2025🎉] Official implementation for paper "Point-Level Visual Affordance Guided Retrieval and Adaptation for Cluttered Garments Man…" ☆38 · Updated 6 months ago
- [ICCV2025] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆88 · Updated 3 months ago
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆59 · Updated 7 months ago
- ☆80 · Updated last year
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆43 · Updated last year
- [RA-L 2025] Motion Before Action: Diffusing Object Motion as Manipulation Condition ☆58 · Updated 2 months ago
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆35 · Updated 11 months ago
- [ICRA2023] Grounding Language with Visual Affordances over Unstructured Data ☆45 · Updated last year
- ☆73 · Updated 11 months ago
- [ECCV 2024] 🎉 Official repository of "Robo-ABC: Affordance Generalization Beyond Categories via Semantic Correspondence for Robot Manipu…" ☆88 · Updated 10 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆105 · Updated 5 months ago
- ☆33 · Updated last year
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆44 · Updated last year
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024) ☆86 · Updated last year
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆62 · Updated 9 months ago
- Code for the paper "Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation" ☆94 · Updated last year
- ☆85 · Updated last year
- [CoRL 2023 Oral] GNFactor: Multi-Task Real Robot Learning with Generalizable Neural Feature Fields ☆135 · Updated last year
- ICCV2025 ☆133 · Updated last month
- ☆16 · Updated last week
- Official implementation of "Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy" ☆105 · Updated 3 weeks ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆74 · Updated 9 months ago
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆30 · Updated last week