fredfyyang / Touch-and-Go
☆29 · Updated last year
Alternatives and similar repositories for Touch-and-Go
Users interested in Touch-and-Go are comparing it to the repositories listed below.
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆83 · Updated 3 months ago
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆67 · Updated 10 months ago
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆58 · Updated 6 months ago
- Preview code of ECCV'24 paper "Distill Gold from Massive Ores" (BiLP) ☆25 · Updated last year
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆143 · Updated last year
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆42 · Updated last year
- ☆71 · Updated 8 months ago
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation ☆88 · Updated last year
- [ICLR 2025 Oral] Official Implementation for "Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Un… ☆15 · Updated 10 months ago
- ☆55 · Updated 8 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆31 · Updated last year
- ☆134 · Updated 2 years ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆129 · Updated 3 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated 11 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆68 · Updated last week
- ☆42 · Updated last year
- [ECCV2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… ☆37 · Updated 6 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆40 · Updated 2 years ago
- [ICLR 2024] Seer: Language Instructed Video Prediction with Latent Diffusion Models ☆34 · Updated last year
- ☆125 · Updated last year
- This repo contains the official implementation of ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long … ☆93 · Updated last year
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning ☆248 · Updated 5 months ago
- Official implementation of the CVPR'24 paper [Adaptive Slot Attention: Object Discovery with Dynamic Slot Number] ☆53 · Updated 7 months ago
- Source code for the paper "Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models" ☆16 · Updated last week
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆245 · Updated 8 months ago
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆33 · Updated 11 months ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆44 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆102 · Updated 4 months ago
- ☆80 · Updated last month
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆138 · Updated 3 weeks ago