├── LICENSE
└── README.md

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2021 Alishba Imran

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Multi-Modal-Manipulation

We use self-supervision to learn a compact, multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. We train a policy in PyBullet (on a Kuka LBR iiwa robot arm) using PPO for peg-in-hole tasks. This implementation can also be used to study force-torque (F/T) control for contact-rich manipulation tasks, since at each step the F/T reading is captured at the joint connected to the end-effector.
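The multimodal representation can be sketched as a small PyTorch module that encodes the camera image and the 6-D F/T wrench separately and fuses the two embeddings by concatenation. The layer sizes, input resolution, and class name below are illustrative assumptions, not the repo's actual architecture:

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """Sketch of a late-fusion encoder: camera image + F/T wrench -> latent.

    All dimensions here are hypothetical, chosen only for illustration.
    """
    def __init__(self, latent_dim=32):
        super().__init__()
        # Image branch: a tiny CNN over a 64x64 RGB frame.
        self.img_net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 13 * 13, 128), nn.ReLU(),
        )
        # Force-torque branch: 6-D wrench (Fx, Fy, Fz, Mx, My, Mz).
        self.ft_net = nn.Sequential(nn.Linear(6, 32), nn.ReLU())
        # Fuse by concatenation, then project to the compact latent.
        self.fuse = nn.Linear(128 + 32, latent_dim)

    def forward(self, img, ft):
        return self.fuse(torch.cat([self.img_net(img), self.ft_net(ft)], dim=1))

enc = FusionEncoder()
z = enc(torch.zeros(4, 3, 64, 64), torch.zeros(4, 6))
print(z.shape)  # torch.Size([4, 32])
```

The compact latent `z` is the kind of representation a policy would consume in place of raw pixels and wrench readings, which is where the sample-efficiency gain comes from.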
demo

![f:t-readings](https://user-images.githubusercontent.com/44557946/133675110-7faaf2a8-86fe-471d-b586-c96f8177bead.JPG)
> Force-torque (F/T) readings captured at the joint connected to the end-effector.

# Instructions

To add the Robotiq gripper, see:
https://github.com/Alchemist77/pybullet-ur5-equipped-with-robotiq-140/blob/master/urdf/robotiq_140_gripper_description/urdf/robotiq_140.urdf

To add the S-RL Toolbox, see: https://s-rl-toolbox.readthedocs.io/en/latest/

1. Download the project-master folder from the master branch. Note: you will need Anaconda to run this project; install it, create and activate an environment, and install Python 3.6 in it, as that is the version you'll need.
2. `cd` into the folder on your local machine.
3. Run `pip install -r requirements.txt`. You will also have to install PyBullet (`pip install pybullet`), Gym (`pip install gym`), OpenCV (`pip install opencv-python`), and PyTorch (`pip install torch torchvision`).
4. Run `python train_peg_insertion.py` to train the agent. If you get any errors, you may have to change any hard-coded paths to your own.
5. To collect the multimodal dataset for encoder pre-training, run `python environments/kuka_peg_env.py`. You can collect more data by changing the random seed.
6. To pre-train the fusion encoder, run `python multimodal/train_my_fusion_model.py`. You have to specify the path to the root directory of the multimodal dataset.
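Step 4 trains the agent with PPO. The core of that update is PPO's clipped surrogate objective, sketched below; the function name, tensor shapes, and clip value are illustrative, not taken from this repo's code:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective from PPO.

    logp_new / logp_old: log-probabilities of the taken actions under the
    current and behavior policies; advantages: estimated advantages.
    """
    ratio = torch.exp(logp_new - logp_old)           # pi_new / pi_old
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    # PPO maximizes the surrogate, so we minimize its negative.
    return -torch.min(ratio * advantages, clipped * advantages).mean()

# With identical policies the ratio is 1, so the loss is -mean(advantage).
adv = torch.tensor([1.0, -0.5])
lp = torch.log(torch.tensor([0.3, 0.6]))
loss = ppo_clip_loss(lp, lp, adv)
print(float(loss))  # -0.25
```

The clipping keeps the policy ratio inside `[1 - eps, 1 + eps]`, which limits how far a single update can move the policy away from the one that collected the data.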
**Quick Notes:**
- This code builds on the implementation here: https://github.com/Henry1iu/ierg5350_rl_course_project
- DDPG code implementation: https://github.com/ghliu/pytorch-ddpg
--------------------------------------------------------------------------------