alishbaimran / Multi-Modal-Manipulation

We use self-supervision to learn a compact multimodal representation of the robot's sensory inputs, which is then used to improve the sample efficiency of policy learning. We train a policy with PPO in PyBullet (on a Kuka LBR iiwa robot arm) for peg-in-hole tasks.
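A minimal sketch of the idea, not the repository's actual code: a small encoder fuses an RGB observation and a force/torque reading into one compact latent, trained with a self-supervised pairing objective (do these two modalities come from the same timestep?). The module names, input sizes, and the specific self-supervised loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Fuses RGB and force/torque inputs into a compact latent (sketch)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        # Small CNN for 64x64 RGB observations (assumed resolution).
        self.image_net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(latent_dim),
        )
        # MLP for a 6-D force/torque reading (assumed sensor dimension).
        self.force_net = nn.Sequential(
            nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, latent_dim),
        )
        # Fuse both modalities into a single compact latent.
        self.fusion = nn.Linear(2 * latent_dim, latent_dim)
        # Self-supervised head: are image and force from the same timestep?
        self.pairing_head = nn.Linear(latent_dim, 1)

    def forward(self, image, force):
        z = torch.cat([self.image_net(image), self.force_net(force)], dim=-1)
        return self.fusion(z)

    def pairing_logits(self, image, force):
        return self.pairing_head(self.forward(image, force)).squeeze(-1)

def pairing_loss(model, image, force):
    """Self-supervised step: aligned (image, force) pairs are positives;
    pairing an image with a force reading from another sample gives negatives."""
    batch = image.shape[0]
    shuffled = force[torch.randperm(batch)]
    logits = torch.cat([
        model.pairing_logits(image, force),     # positives
        model.pairing_logits(image, shuffled),  # (mostly) negatives
    ])
    labels = torch.cat([torch.ones(batch), torch.zeros(batch)])
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)
```

The frozen (or jointly fine-tuned) latent from such an encoder would then serve as the observation for the PPO policy, keeping the policy's input small compared to raw pixels plus force readings.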
