fredfyyang / Touch-and-Go
☆31 · Updated 2 years ago
Alternatives and similar repositories for Touch-and-Go
Users interested in Touch-and-Go are comparing it to the libraries listed below.
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆84 · Updated 4 months ago
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆67 · Updated 11 months ago
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆63 · Updated 8 months ago
- Preview code of ECCV'24 paper "Distill Gold from Massive Ores" (BiLP) ☆25 · Updated last year
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆148 · Updated 2 years ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆43 · Updated last year
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆73 · Updated 10 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆31 · Updated last year
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated last year
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆130 · Updated 5 months ago
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… ☆38 · Updated 7 months ago
- ☆58 · Updated 10 months ago
- This is the official repository of OCL (ICCV 2023). ☆25 · Updated last year
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation ☆90 · Updated last year
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆135 · Updated 2 weeks ago
- Official Code for the NeurIPS'23 paper "3D-Aware Visual Question Answering about Parts, Poses and Occlusions" ☆19 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆44 · Updated 2 years ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆106 · Updated 6 months ago
- ☆45 · Updated last year
- [ICLR 2025] This repo is the official implementation of "The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs". ☆13 · Updated 8 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆70 · Updated last month
- ☆141 · Updated 2 years ago
- ☆54 · Updated last year
- [CVPR'24 Highlight] The official code and data for paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆61 · Updated 6 months ago
- An unofficial PyTorch dataloader for Open X-Embodiment Datasets https://github.com/google-deepmind/open_x_embodiment ☆18 · Updated 9 months ago
- Official implementation of the CVPR'24 paper [Adaptive Slot Attention: Object Discovery with Dynamic Slot Number] ☆57 · Updated 8 months ago
- EgoTV: Egocentric Task Verification from Natural Language Task Descriptions ☆27 · Updated last year
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆44 · Updated last year
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated last year
- Data pre-processing and training code on Open-X-Embodiment with PyTorch ☆11 · Updated 8 months ago