kahnchana / LangToMo
[WIP] Code for LangToMo
☆16 · Updated last month
Alternatives and similar repositories for LangToMo
Users interested in LangToMo are comparing it to the repositories listed below.
- An unofficial PyTorch dataloader for Open X-Embodiment datasets (https://github.com/google-deepmind/open_x_embodiment) · ☆18 · Updated 6 months ago
- ☆53 · Updated 7 months ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) · ☆41 · Updated last year
- (CVPR 2025) A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning · ☆16 · Updated 4 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction · ☆96 · Updated 3 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) · ☆40 · Updated 2 years ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos · ☆120 · Updated 2 months ago
- Official PyTorch implementation of Learning Affordance Grounding from Exocentric Images (CVPR 2022) · ☆64 · Updated 9 months ago
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" · ☆35 · Updated last month
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning · ☆33 · Updated 9 months ago
- ☆26 · Updated last year
- ☆77 · Updated 11 months ago
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" · ☆60 · Updated 7 months ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos · ☆68 · Updated last year
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration · ☆50 · Updated 3 months ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) · ☆44 · Updated last year
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations · ☆55 · Updated 6 months ago
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data · ☆45 · Updated last year
- Data pre-processing and training code on Open-X-Embodiment with PyTorch · ☆11 · Updated 6 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction · ☆34 · Updated 7 months ago
- [MMM 2025 Best Paper] RoLD: Robot Latent Diffusion for Multi-Task Policy Modeling · ☆20 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) · ☆33 · Updated 10 months ago
- Code for the paper "Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation" · ☆93 · Updated last year
- ☆34 · Updated 10 months ago
- Are Synthetic Data Useful for Egocentric Hand-Object Interaction Detection? [ECCV 2024] · ☆13 · Updated 6 months ago
- Official implementation of GR-MG · ☆85 · Updated 6 months ago
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` · ☆115 · Updated 10 months ago
- ☆106 · Updated last month
- Mirage: a zero-shot cross-embodiment policy transfer method, with benchmarking code · ☆23 · Updated last year
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning · ☆245 · Updated 4 months ago