fredfyyang / Touch-and-Go
☆32 · Updated 2 years ago
Alternatives and similar repositories for Touch-and-Go
Users interested in Touch-and-Go are comparing it to the repositories listed below.
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆85 · Updated 5 months ago
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆66 · Updated 9 months ago
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆68 · Updated last year
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation…" ☆39 · Updated 8 months ago
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆151 · Updated 2 years ago
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆74 · Updated 11 months ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- [ICLR 2025] This repo is the official implementation of "The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs" ☆13 · Updated 9 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆35 · Updated last year
- Preview code of ECCV'24 paper "Distill Gold from Massive Ores" (BiLP) ☆25 · Updated last year
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation ☆91 · Updated last year
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆44 · Updated last year
- ☆60 · Updated 10 months ago
- This repo contains the official implementation of ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long …" ☆93 · Updated last year
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆131 · Updated 5 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆70 · Updated 2 months ago
- [ICLR 2024] Seer: Language Instructed Video Prediction with Latent Diffusion Models ☆33 · Updated last year
- ☆143 · Updated 2 years ago
- 🔍 Explore Egocentric Vision: research, data, challenges, real-world apps. Stay updated & contribute to our dynamic repository! Work-in-p… ☆120 · Updated 11 months ago
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆44 · Updated 2 years ago
- ☆46 · Updated last year
- This is the official repository of OCL (ICCV 2023) ☆25 · Updated last year
- Source code for the paper "Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models" ☆16 · Updated last month
- An unofficial PyTorch dataloader for the Open X-Embodiment Datasets https://github.com/google-deepmind/open_x_embodiment ☆18 · Updated 10 months ago
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning ☆257 · Updated this week
- ☆30 · Updated last year
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆98 · Updated 2 months ago