fredfyyang / Touch-and-Go
☆36 · Updated 2 years ago
Alternatives and similar repositories for Touch-and-Go
Users interested in Touch-and-Go are comparing it to the libraries listed below.
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆92 · Updated 7 months ago
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆79 · Updated 2 months ago
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆72 · Updated last year
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆154 · Updated 2 years ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- [ICLR 2025] This repo is the official implementation of "The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs". ☆13 · Updated last year
- This is the official repository of OCL (ICCV 2023). ☆25 · Updated last year
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆79 · Updated last year
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆132 · Updated 8 months ago
- ☆62 · Updated last year
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆77 · Updated 5 months ago
- ☆149 · Updated 2 years ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆114 · Updated 9 months ago
- Preview code of ECCV'24 paper "Distill Gold from Massive Ores" (BiLP) ☆25 · Updated last year
- [ECCV2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… ☆39 · Updated 11 months ago
- This is the official implementation of the EMNLP Findings paper, VideoINSTA: Zero-shot Long-Form Video Understanding via Informative Spatia… ☆24 · Updated last year
- Data pre-processing and training code on Open-X-Embodiment with PyTorch ☆11 · Updated last year
- An unofficial PyTorch dataloader for Open X-Embodiment Datasets https://github.com/google-deepmind/open_x_embodiment ☆23 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆35 · Updated last year
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆54 · Updated last year
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆161 · Updated 3 months ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆45 · Updated last year
- Code for NeurIPS 2022 Datasets and Benchmarks paper - EgoTaskQA: Understanding Human Tasks in Egocentric Videos. ☆36 · Updated 2 years ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆79 · Updated 8 months ago
- Official repo for AGNOSTOS, a cross-task manipulation benchmark, and X-ICM, a cross-task in-context manipulation (VLA) method ☆55 · Updated 2 months ago
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align…☆19Updated last year
- Official implementation of the CVPR'24 paper [Adaptive Slot Attention: Object Discovery with Dynamic Slot Number]☆63Updated last year
- [ICLR'25] Do Egocentric Video-Language Models Truly Understand Hand-Object Interactions?☆12Updated 9 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023)☆46Updated 2 years ago