[ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment
☆95 · Updated Jun 2, 2025 (9 months ago)
Alternatives and similar repositories for tvl
Users interested in tvl are comparing it to the libraries listed below.
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆83 · Updated Nov 20, 2025 (3 months ago)
- Official implementation of Touch100k: A Large-Scale Touch-Language-Vision Dataset for Touch-Centric Multimodal Representation ☆32 · Updated Jun 12, 2024 (last year)
- Self-Supervised Visual-Tactile Representation Learning via Multimodal Contrastive Training ☆24 · Updated Apr 26, 2024 (last year)
- Repository for "AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors" (ICLR 2025) ☆83 · Updated Jan 13, 2026 (last month)
- PoseIt: a multi-modal dataset containing visual-tactile data for holding poses ☆13 · Updated Feb 9, 2023 (3 years ago)
- Official PyTorch implementation of "TextToucher: Fine-Grained Text-to-Touch Generation" (AAAI 2025) ☆18 · Updated Jan 28, 2026 (last month)
- ☆71 · Updated Feb 6, 2026 (3 weeks ago)
- ☆36 · Updated Sep 5, 2023 (2 years ago)
- ☆56 · Updated Apr 18, 2025 (10 months ago)
- ObjectFolder Dataset ☆169 · Updated Aug 31, 2022 (3 years ago)
- Repository for Transferable Tactile Transformers (T3) ☆57 · Updated Jun 21, 2024 (last year)
- Simulation studies for "Tac-Man: Tactile-Informed Prior-Free Manipulation of Articulated Objects" ☆40 · Updated Nov 20, 2025 (3 months ago)
- Sparsh: self-supervised touch representations for vision-based tactile sensing ☆202 · Updated Feb 27, 2025 (last year)
- An attempt to train the FoundationPose refiner ☆17 · Updated Mar 17, 2025 (11 months ago)
- ☆20 · Updated Oct 15, 2025 (4 months ago)
- Code implementation of "Capturing forceful interaction with deformable objects using a deep learning-powered stretchable tactile array" ☆28 · Updated Sep 30, 2024 (last year)
- ☆118 · Updated Nov 2, 2022 (3 years ago)
- Tactile Sensing • Simulation • Representation • Manipulation • IL/RL/VLA/WM • Open Source ☆601 · Updated this week
- Subtask-Aware Visual Reward Learning from Segmented Demonstrations (accepted at ICLR 2025) ☆18 · Updated Apr 11, 2025 (10 months ago)
- Tactile perception dataset comprising the DIGIT sensor sliding over YCB objects with ground-truth pose ☆27 · Updated Sep 27, 2024 (last year)
- Official implementation of the CrossMAE paper: "Rethinking Patch Dependence for Masked Autoencoders" ☆132 · Updated Apr 10, 2025 (10 months ago)
- ☆11 · Updated Sep 28, 2023 (2 years ago)
- Robot In-hand Rotation ☆100 · Updated Jul 31, 2024 (last year)
- Official repository for "Boosting Audio Visual Question Answering via Key Semantic-Aware Cues" (ACM MM 2024) ☆16 · Updated Oct 25, 2024 (last year)
- Code for the paper "Out-of-Domain Robustness via Targeted Augmentations" ☆14 · Updated Feb 25, 2023 (3 years ago)
- Official code for "TLDR: Unsupervised Goal-Conditioned RL via Temporal Distance-Aware Representations" ☆36 · Updated Jan 24, 2026 (last month)
- SpawnNet: Learning Generalizable Visuomotor Skills from Pre-trained Networks ☆36 · Updated Apr 29, 2024 (last year)
- ☆61 · Updated Jan 15, 2024 (2 years ago)
- ☆63 · Updated Sep 18, 2025 (5 months ago)
- [ACL 2024 Findings] Implementation of Resonance RoPE and the PosGen synthetic dataset ☆24 · Updated Mar 5, 2024 (2 years ago)
- [ICRA 2024] Dream2Real: Zero-Shot 3D Object Rearrangement with Vision-Language Models ☆68 · Updated Feb 13, 2024 (2 years ago)
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆115 · Updated Apr 14, 2025 (10 months ago)
- ☆14 · Updated Jun 22, 2023 (2 years ago)
- [TCSVT 2024] Temporally Consistent Referring Video Object Segmentation with Hybrid Memory ☆19 · Updated Apr 9, 2025 (10 months ago)
- [ICLR 2025 Oral] Official implementation of "Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Un…" ☆21 · Updated Oct 24, 2024 (last year)
- ☆19 · Updated Jun 16, 2023 (2 years ago)
- Code for "Visuotactile-Based Learning for Insertion with Compliant Hands" ☆21 · Updated May 20, 2025 (9 months ago)
- Realtime & high-frequency control interfaces for the YuMi IRB 14000 bi-manual robot arm, including manual tele-operation and autonomous Di… ☆26 · Updated Sep 24, 2025 (5 months ago)
- Implementation of the paper "VLA-Touch: Enhancing Vision-Language-Action Models with Dual-Level Tactile Feedback" ☆58 · Updated Jan 4, 2026 (2 months ago)