cfeng16 / UniTouch
[CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations
☆69 · Updated 9 months ago
Alternatives and similar repositories for UniTouch
Users interested in UniTouch are comparing it to the libraries listed below.
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆85 · Updated 5 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆110 · Updated 7 months ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- ☆39 · Updated 4 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆44 · Updated 2 years ago
- ☆60 · Updated 11 months ago
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆38 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆41 · Updated 2 months ago
- ☆45 · Updated 7 months ago
- Efficiently apply modification functions to RLDS/TFDS datasets. ☆36 · Updated last year
- ☆33 · Updated last year
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data ☆46 · Updated 2 years ago
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆68 · Updated last year
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆66 · Updated 10 months ago
- [ICCV 2025] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆91 · Updated 4 months ago
- ☆84 · Updated last year
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆121 · Updated last year
- [ECCV 2024] 🎉 Official repository of "Robo-ABC: Affordance Generalization Beyond Categories via Semantic Correspondence for Robot Manipu…" ☆92 · Updated 11 months ago
- Official Repository for SAM2Act ☆212 · Updated 2 months ago
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆97 · Updated last year
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆179 · Updated 2 months ago
- ☆20 · Updated 3 weeks ago
- Code & data for "RoboGround: Robotic Manipulation with Grounded Vision-Language Priors" (CVPR 2025) ☆28 · Updated 5 months ago
- ICCV 2025 ☆142 · Updated this week
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆79 · Updated 11 months ago
- Official implementation of the paper: Task Reconstruction and Extrapolation for $\pi_0$ using Text Latent (https://arxiv.org/pdf/2505.035…) ☆83 · Updated 3 months ago
- The repo for "AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors", ICLR 2025 ☆70 · Updated 4 months ago
- [CVPR 2024] Dataset and Code for "Language-driven Grasp Detection" ☆47 · Updated 9 months ago
- [WIP] Code for LangToMo ☆20 · Updated 4 months ago
- Official repo for AGNOSTOS, a cross-task manipulation benchmark, and X-ICM, a cross-task in-context manipulation (VLA) method ☆50 · Updated 2 weeks ago