alishbaimran / Multi-Modal-Manipulation
We use self-supervision to learn a compact multimodal representation of our sensory inputs, which improves the sample efficiency of downstream policy learning. We then train a policy with PPO in PyBullet, on a Kuka LBR iiwa robot arm, for peg-in-hole tasks.
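As a rough illustration of the idea, the sketch below fuses several sensory modalities into one compact latent vector of the kind a PPO policy would consume. It is a minimal, hypothetical example: the modality names, dimensions, and the sum-pool fusion with fixed random linear encoders are assumptions for illustration, not the repository's actual architecture or learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality input sizes (assumed for illustration).
DIMS = {"rgb": 128, "depth": 64, "force_torque": 6, "proprio": 7}
LATENT = 32  # size of the compact fused representation

# Stand-in "encoders": fixed random linear projections per modality.
W = {m: rng.standard_normal((d, LATENT)) / np.sqrt(d) for m, d in DIMS.items()}

def encode(obs: dict) -> np.ndarray:
    """Project each modality into the shared latent space, then sum-pool.

    Sum-pooling is one simple fusion choice; a learned model might instead
    concatenate features or use a product-of-experts over modality encoders.
    """
    z = np.zeros(LATENT)
    for m, x in obs.items():
        z += x @ W[m]          # (d,) @ (d, LATENT) -> (LATENT,)
    return np.tanh(z)          # squash to a bounded compact representation

# One fake multimodal observation, drawn at random.
obs = {m: rng.standard_normal(d) for m, d in DIMS.items()}
latent = encode(obs)
print(latent.shape)  # (32,)
```

In a training setup, `latent` would be the state fed to the policy network, and the encoders would be trained with self-supervised objectives (e.g. cross-modal correspondence prediction) rather than held fixed as here.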
12 · Updated Feb 8, 2024
