feel-the-force-ftf / feel-the-force
☆30 · Jun 8, 2025 · Updated 8 months ago
Alternatives and similar repositories for feel-the-force
Users interested in feel-the-force are comparing it to the repositories listed below.
- Code for the paper "Sample Efficient Grasp Learning Using Equivariant Models" ☆41 · Apr 22, 2024 · Updated last year
- DemoHLM: From One Demonstration to Generalizable Humanoid Loco-Manipulation ☆22 · Oct 14, 2025 · Updated 4 months ago
- Skill-based Teleoperation ☆42 · Dec 4, 2025 · Updated 2 months ago
- ☆58 · Jan 14, 2026 · Updated last month
- Franka simulator in Drake compatible with existing libfranka programs ☆24 · Aug 29, 2025 · Updated 5 months ago
- Specialized encoders for robot manipulation. Sparsh-Skin: an encoder tailored for magnetic tactile sensors to understand interactions from… ☆25 · Aug 20, 2025 · Updated 5 months ago
- A C++ library for accessing and controlling Elite CS series robots. ☆15 · Jan 27, 2026 · Updated 2 weeks ago
- Blazing fast, analytical inverse kinematics for Franka Emika Panda and FR3 robots. Lightweight Python bindings, no ROS required. ☆27 · Jan 1, 2026 · Updated last month
- Pi0-VLA repository of "MotionTrans: Human VR Data Enable Motion-Level Learning for Robotic Manipulation Policies" ☆26 · Sep 25, 2025 · Updated 4 months ago
- USD models for NVIDIA Isaac Sim ☆16 · Aug 8, 2025 · Updated 6 months ago
- NeurIPS 2024: Few-Shot Task Learning through Inverse Generative Modeling ☆17 · Dec 6, 2024 · Updated last year
- Code for the robot-assisted feeding project at EmPRISE Lab ☆26 · Dec 23, 2025 · Updated last month
- ☆19 · May 7, 2025 · Updated 9 months ago
- Torobo models and example scripts in MuJoCo ☆76 · Jun 9, 2025 · Updated 8 months ago
- ☆32 · Jan 12, 2026 · Updated last month
- Author's implementation of DemoDiffusion. ☆61 · Jan 14, 2026 · Updated last month
- ☆43 · Aug 26, 2024 · Updated last year
- Tutorials for reinforcement learning and imitation learning using ROBOTIS robots, with Sim2Real function… ☆86 · Jan 29, 2026 · Updated 2 weeks ago
- Official code release for the CoRL 2025 paper "VT-Refine: Learning Bimanual Assembly with Visuo-Tactile Feedback via Simulation Fine-Tuning" ☆96 · Oct 18, 2025 · Updated 3 months ago
- ☆155 · Nov 10, 2024 · Updated last year
- ☆22 · Jun 14, 2025 · Updated 8 months ago
- ☆22 · Oct 27, 2022 · Updated 3 years ago
- [ICLR 2025] Official implementation of the paper "Robots Pre-Train Robots: Manipulation-Centric Robotic Representation from Lar…" ☆90 · Jan 22, 2025 · Updated last year
- Official GitHub repository for "CHILD: a Whole-Body Humanoid Teleoperation System" (Humanoids 2025) ☆39 · Jan 30, 2026 · Updated 2 weeks ago
- (RA-L 2025) UniT: Data Efficient Tactile Representation with Generalization to Unseen Objects ☆61 · Apr 16, 2025 · Updated 10 months ago
- Visual Representation Learning with Stochastic Frame Prediction (ICML 2024) ☆26 · Nov 27, 2024 · Updated last year
- ☆55 · Oct 25, 2025 · Updated 3 months ago
- Speed-running solving robot manipulation tasks ☆24 · Oct 31, 2024 · Updated last year
- [CVPR 2025] VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation ☆45 · Jun 20, 2025 · Updated 7 months ago
- Code for "Transitive RL: Value Learning via Divide and Conquer" ☆48 · Oct 31, 2025 · Updated 3 months ago
- Official implementation of "Get a Grip: Multi-Finger Grasp Evaluation at Scale Enables Robust Sim-to-Real Transfer" ☆32 · Feb 12, 2025 · Updated last year
- FACTR Hardware ☆77 · Jun 15, 2025 · Updated 8 months ago
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆108 · Mar 17, 2025 · Updated 10 months ago
- RAMBO: RL-augmented Model-based Whole-body Control for Loco-manipulation ☆103 · Jul 23, 2025 · Updated 6 months ago
- Official implementation of Click-and-Traverse ☆111 · Feb 4, 2026 · Updated last week
- Official repository for "DenseMatcher: Learning 3D Semantic Correspondence for Category-Level Manipulation from One Demo" ☆114 · Feb 17, 2025 · Updated 11 months ago
- Official code for "Relative Entropy Pathwise Policy Optimization" ☆42 · Jan 17, 2026 · Updated 3 weeks ago
- F3RM: Feature Fields for Robotic Manipulation (website) ☆23 · Nov 25, 2023 · Updated 2 years ago
- Official code for PDP ☆64 · Feb 3, 2025 · Updated last year