fredfyyang / Touch-and-Go
☆27, updated last year
Alternatives and similar repositories for Touch-and-Go
Users interested in Touch-and-Go are comparing it to the repositories listed below.
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment (☆76, updated 3 months ago)
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations (☆51, updated 3 months ago)
- Official PyTorch implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 (☆59, updated 6 months ago)
- The official repository of OCL (ICCV 2023) (☆20, updated last year)
- ☆46, updated 5 months ago
- ☆31, updated 9 months ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) (☆34, updated 9 months ago)
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) (☆37, updated 2 years ago)
- Latent Motion Token as the Bridging Language for Robot Manipulation (☆85, updated last week)
- Preview code of the ECCV 2024 paper "Distill Gold from Massive Ores" (BiLP) (☆24, updated 10 months ago)
- The official repository for "AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors", ICLR 2025 (☆42, updated last month)
- An unofficial PyTorch dataloader for the Open X-Embodiment datasets, https://github.com/google-deepmind/open_x_embodiment (☆14, updated 4 months ago)
- Dense Policy: Bidirectional Autoregressive Learning of Actions (☆35, updated last month)
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation (☆83, updated 10 months ago)
- AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation (☆71, updated last month)
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos (☆64, updated last year)
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction (☆76, updated last month)
- [ICLR 2024] Seer: Language Instructed Video Prediction with Latent Diffusion Models (☆31, updated 11 months ago)
- 2D version of Dense Policy (☆14, updated last month)
- Data pre-processing and training code for Open X-Embodiment with PyTorch (☆11, updated 3 months ago)
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning (☆132, updated last year)
- [CoRL 2023 Oral] GNFactor: Multi-Task Real Robot Learning with Generalizable Neural Feature Fields (☆132, updated last year)
- [ECCV 2024, Oral, Best Paper Finalist] The official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation …" (☆37, updated 2 months ago)
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) (☆44, updated 9 months ago)
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) (☆131, updated 10 months ago)
- ☆83, updated last week
- [CVPR 2024] Dataset and code for "Language-driven Grasp Detection" (☆32, updated 3 months ago)
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation (☆103, updated 6 months ago)
- ☆121, updated last year
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` (☆111, updated 7 months ago)