TtuHamg / TextToucher
Official PyTorch implementation of "TextToucher: Fine-Grained Text-to-Touch Generation" (AAAI 2025)
☆14 · Updated 2 months ago
Alternatives and similar repositories for TextToucher
Users interested in TextToucher are comparing it to the repositories listed below:
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆78 · Updated 3 weeks ago
- Boosting the Class-Incremental Learning in 3D Point Clouds via Zero-Collection-Cost Basic Shape Pre-Training ☆10 · Updated 6 months ago
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆53 · Updated 4 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆48 · Updated last month
- ☆21 · Updated 7 months ago
- Responsible Robotic Manipulation ☆11 · Updated 3 weeks ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models …" ☆64 · Updated 3 weeks ago
- Code for Stable Control Representations ☆25 · Updated 2 months ago
- WorldVLA: Towards Autoregressive Action World Model ☆94 · Updated this week
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆83 · Updated 2 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆67 · Updated 6 months ago
- V-MAGE: A Game Evaluation Framework for Assessing Visual-Centric Capabilities in MLLMs ☆19 · Updated last month
- ☆74 · Updated 9 months ago
- [ICLR 2024] Seer: Language Instructed Video Prediction with Latent Diffusion Models ☆33 · Updated last year
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆65 · Updated this week
- Official code for MotionBench (CVPR 2025) ☆45 · Updated 3 months ago
- [ECCV 2024, Oral, Best Paper Finalist] Official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation…" ☆37 · Updated 4 months ago
- [ICML'25] PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions".