octopi (☆71, updated Feb 6, 2026)
Alternatives and similar repositories for octopi
Users interested in octopi are comparing it to the libraries listed below.
- An official implementation of Touch100k: A Large-Scale Touch-Language-Vision Dataset for Touch-Centric Multimodal Representation (☆32, updated Jun 12, 2024)
- The repo for "AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors", ICLR 2025 (☆84, updated Jan 13, 2026)
- Incorporating Tactile Signals into the ACT framework for peg insertion tasks (☆43, updated Aug 23, 2024)
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations (☆83, updated Nov 20, 2025)
- Visuo-Tactile Transformers for Manipulation (☆37, updated Nov 15, 2022)
- The official repository for ManiSkill-ViTac2025 (☆51, updated Mar 14, 2025)
- Sparsh: Self-supervised touch representations for vision-based tactile sensing (☆202, updated Feb 27, 2025)
- Simulation studies for the research "Tac-Man: Tactile-Informed Prior-Free Manipulation of Articulated Objects" (☆40, updated Nov 20, 2025)
- Repository for Transferable Tactile Transformers (T3) (☆57, updated Jun 21, 2024)
- (T-RO 2024) LeTac-MPC: Learning Model Predictive Control for Tactile-reactive Grasping (☆45, updated Sep 20, 2024)
- Tactile Sensing • Simulation • Representation • Manipulation • IL/RL/VLA/WM • Open Source (☆606, updated this week)
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation (☆144, updated Nov 13, 2024)
- PoseIt: a multi-modal dataset containing visual and tactile data for holding poses (☆13, updated Feb 9, 2023)
- [RSS 2025] Reactive Diffusion Policy: Slow-Fast Visual-Tactile Policy Learning for Contact-Rich Manipulation (☆299, updated Feb 22, 2026)
- Official PyTorch Implementation for "TextToucher: Fine-Grained Text-to-Touch Generation" (AAAI 2025) (☆19, updated Jan 28, 2026)
- Official Repo for ManiSkill-ViTac Challenge 2024 (☆53, updated Apr 9, 2024)
- A curated collection of resources, papers, and tools on dexterous manipulation (☆38, updated Jan 13, 2026)
- Code for the robot-assisted feeding project at EmPRISE Lab (☆28, updated this week)
- Specialized encoders for robot manipulation. Sparsh-Skin: an encoder tailored for magnetic tactile sensors to understand interactions from… (☆28, updated Aug 20, 2025)
- Code for the paper "Trust the PRoC3S: Solving Long-Horizon Robotics Problems with LLMs and Constraint Satisfaction" presented at CoRL 202… (☆31, updated Nov 18, 2024)
- GelSight SDK for robotic sensors (☆170, updated Jun 25, 2025)
- The official codebase for the paper "Sensor-Invariant Tactile Representation" (ICLR 2025) (☆24, updated Sep 29, 2025)
- AllSight is an optical tactile sensor with a round 3D structure, designed for robotic in-hand manipulation tasks (☆17, updated Nov 28, 2025)
- [RSS 2025] TactAR teleoperation app in "Reactive Diffusion Policy: Slow-Fast Visual-Tactile Policy Learning for Contact-Rich Manipulation… (☆70, updated Jul 11, 2025)
- DexUMI: Using Human Hand as the Universal Manipulation Interface for Dexterous Manipulation (☆180, updated Oct 2, 2025)
- Collection of MuJoCo robotics environments equipped with both vision and tactile sensing (☆90, updated Jul 8, 2024)
- The Power of the Senses: Generalizable Manipulation from Vision and Touch through Masked Multimodal Learning (☆40, updated Aug 13, 2024)
- Official code repository of the paper "D(R, O) Grasp: A Unified Representation of Robot and Object Interaction for Cross-Embodiment Dexterous… (☆261, updated Nov 13, 2025)
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment (☆95, updated Jun 2, 2025)