├── README.md
├── images
│   ├── 653839573.jpg
│   ├── Screenshot from 2021-11-19 10-03-58.png
│   ├── Screenshot from 2021-11-19 15-11-12.png
│   ├── Screenshot from 2021-11-19 15-13-55.png
│   ├── Screenshot from 2021-11-19 15-14-46.png
│   ├── Screenshot from 2021-11-19 16-07-03.png
│   └── Screenshot from 2022-04-12 19-11-45.png
└── planr_grasp_ggcnn
    ├── .idea
    │   ├── .gitignore
    │   ├── ggcnn.iml
    │   ├── inspectionProfiles
    │   │   ├── Project_Default.xml
    │   │   └── profiles_settings.xml
    │   ├── misc.xml
    │   ├── modules.xml
    │   └── vcs.xml
    ├── eval_ggcnn.py
    ├── ggcnn2_weights_cornell
    │   ├── epoch_50_cornell
    │   └── epoch_50_cornell_statedict.pt
    ├── ggcnn_pickNplace_gen3lite_full_arm_movement.py
    ├── ggcnn_weights_cornell
    │   ├── ggcnn_epoch_23_cornell
    │   └── ggcnn_epoch_23_cornell_statedict.pt
    ├── models
    │   ├── __init__.py
    │   ├── __pycache__
    │   │   ├── __init__.cpython-36.pyc
    │   │   ├── __init__.cpython-37.pyc
    │   │   ├── common.cpython-36.pyc
    │   │   ├── common.cpython-37.pyc
    │   │   ├── ggcnn.cpython-37.pyc
    │   │   └── ggcnn2.cpython-37.pyc
    │   ├── common.py
    │   ├── ggcnn.py
    │   └── ggcnn2.py
    ├── requirements.txt
    ├── samples
    │   ├── 0.png
    │   ├── 0.tiff
    │   ├── 1.png
    │   ├── 1.tiff
    │   ├── 2.png
    │   ├── 2.tiff
    │   ├── 3.png
    │   ├── 3.tiff
    │   ├── 4.png
    │   └── 4.tiff
    ├── train_ggcnn.py
    ├── utils
    │   ├── __init__.py
    │   ├── __pycache__
    │   │   └── __init__.cpython-37.pyc
    │   ├── data
    │   │   ├── __init__.py
    │   │   ├── __pycache__
    │   │   │   ├── __init__.cpython-37.pyc
    │   │   │   ├── grasp_data.cpython-37.pyc
    │   │   │   └── jacquard_data.cpython-37.pyc
    │   │   ├── cornell_data.py
    │   │   ├── grasp_data.py
    │   │   └── jacquard_data.py
    │   ├── dataset_processing
    │   │   ├── __init__.py
    │   │   ├── __pycache__
    │   │   │   ├── __init__.cpython-37.pyc
    │   │   │   ├── evaluation.cpython-37.pyc
    │   │   │   ├── grasp.cpython-37.pyc
    │   │   │   └── image.cpython-37.pyc
    │   │   ├── evaluation.py
    │   │   ├── generate_cornell_depth.py
    │   │   ├── grasp.py
    │   │   └── image.py
    │   ├── timeit.py
    │   └── visualisation
    │       ├── __init__.py
    │       └── gridshow.py
    └── vis_ggcnn.py
/README.md:
--------------------------------------------------------------------------------
1 | # HandEyeCalib-ROS
2 |
3 | ## Introduction
4 | This repo provides a detailed tutorial on eye-in-hand calibration, eye-to-hand calibration, and downstream grasp execution. The ultimate goal is to build a handy toolbox for desktop or mobile grasping/manipulation. We will update this repo from time to time to make sure the latest progress is included. Feel free to leave questions or comments; we are happy to discuss.
5 |
6 | ## Progress
7 | - [X] Hand-eye calibration with the built-in ArUco detector (for OpenCV versions newer than 3.2.0)
8 | - [X] Eye-in-hand calibration with the `aruco_ros` package (for Ubuntu 18.04 with default OpenCV 3.2.0)
9 | - [X] Eye-to-hand calibration with the `aruco_ros` package (for Ubuntu 18.04 with default OpenCV 3.2.0)
10 | - [X] Open-loop planar grasp execution
11 | - [ ] Open-loop 6-DoF grasp execution
12 | - [ ] Closed-loop planar grasp execution
13 | - [ ] Closed-loop 6-DoF grasp execution
14 |
15 | ### Setup
16 | * system: Ubuntu 18.04
17 | * ROS version: melodic
18 | * OpenCV version: 3.2.0
19 | * Robot: KINOVA Gen3 & Gen3 Lite
20 | * Camera: Intel Realsense D435i
21 |
22 | ### Pre-requisites
23 | * [moveit_calibration](https://github.com/ros-planning/moveit_calibration): master branch
24 | * [rviz_visual_tools](https://github.com/PickNikRobotics/rviz_visual_tools/tree/master): master branch
25 | * [geometric_shapes](https://github.com/ros-planning/geometric_shapes/tree/melodic-devel): melodic-devel
26 | * [moveit_visual_tools](https://github.com/ros-planning/moveit_visual_tools): melodic-devel
27 |
28 | Make sure you clone the correct branch of each package and put all the packages in the same ROS workspace.
29 |
30 | ## Hand-eye calibration with the built-in ArUco detector (for OpenCV versions newer than 3.2.0)
31 |
32 | | **Warning to Melodic users** |
33 | | --- |
34 | | OpenCV 3.2, which is the version shipped with Ubuntu 18.04, has a buggy ArUco board pose detector. Do not expect adequate results if you are using an ArUco board with OpenCV 3.2. Jump directly to the calibration sections that use the `aruco_ros` package. **We got your back!**|
35 |
36 | MoveIt provides a convenient hand-eye calibration package. It's a good idea to follow this [tutorial](https://ros-planning.github.io/moveit_tutorials/doc/hand_eye_calibration/hand_eye_calibration_tutorial.html) step by step.
37 |
38 |
39 |
40 |
41 |
42 |
43 | The whole process includes: starting your camera, setting the tag parameters, printing the ArUco tag, setting the context configuration, taking at least 5 samples, solving for the camera pose, and saving the camera poses & joint states. It uses OpenCV's built-in ArUco detector, which is buggy in OpenCV 3.2.0; we tested it on OpenCV 3.2.0 and it gave very poor results.
44 |
45 | **Take care of which sensor frame you are using.** `MoveIt calibration` uses the right-down-forward standard specified [here](https://www.ros.org/reps/rep-0103.html). You might get totally different frame coordinates before and after loading the camera into `HandEyeCalibration`. For the Intel Realsense D435i, we found that `camera_color_frame` has its x axis pointing outwards, while it is forced to have its z axis pointing outwards after being loaded. For this reason, we suggest using `camera_color_optical_frame`. Note that neither `camera_color_frame` nor `camera_color_optical_frame` is located at the camera center, since the RGB camera is on the rightmost side of the D435i.
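If you want to double-check the two frames before calibrating, a small tf listener can print the transform between them. The snippet below is a minimal sketch; it assumes the `realsense2_camera` driver is already running and publishing its TF tree.

```
#!/usr/bin/env python
# Minimal sketch: print the transform between camera_color_frame and
# camera_color_optical_frame to see how the optical (right-down-forward)
# convention differs from the body frame.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('check_camera_frames')
    listener = tf.TransformListener()
    listener.waitForTransform('camera_color_frame', 'camera_color_optical_frame',
                              rospy.Time(0), rospy.Duration(5.0))
    trans, rot = listener.lookupTransform('camera_color_frame',
                                          'camera_color_optical_frame', rospy.Time(0))
    rospy.loginfo('translation: %s', trans)
    rospy.loginfo('rotation (quaternion): %s', rot)
```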
46 |
47 |
48 |
49 |
50 |
51 |
52 | ## Eye-in-hand calibration with the `aruco_ros` package (for Ubuntu 18.04 with default OpenCV 3.2.0)
53 | Since the built-in ArUco detector is buggy in OpenCV 3.2.0, we use the `aruco_ros` package instead. This section details eye-in-hand calibration with the `aruco_ros` package. First, in addition to the standard pre-requisites you may have downloaded, clone `aruco_ros` from the link below:
54 |
55 | * [aruco_ros](https://github.com/pal-robotics/aruco_ros/tree/melodic-devel): melodic-devel
56 |
57 | This package supports detection of a single ArUco tag or a pair of tags. After compiling the package, generate an ArUco tag of an appropriate size via this [link](https://tn1ck.github.io/aruco-print/) and print it out. Then edit the marker configuration:
58 |
59 | ```
60 | cd PATH/TO/YOUR/WORKSPACE/src/aruco_ros/aruco_ros/launch
61 | sudo gedit single.launch
62 | ```
63 |
64 | Here we use a single ArUco tag with markerId set to 9, markerSize set to 0.160 m, and camera_frame set to `camera_color_optical_frame`. The launch file is configured as below:
65 |
66 |
67 |
68 |
69 |
70 | Run the commands below to test whether the printed marker is detected successfully:
71 |
72 | ```
73 | roslaunch realsense2_camera rs_camera.launch    # launch your camera
74 | roslaunch aruco_ros single.launch    # launch your marker detector
75 | rosrun image_view image_view image:=/aruco_single/result    # visualize the detected marker
76 | ```
77 | You should be able to see something like:
78 |
79 |
80 |
81 |
82 |
83 | Then, you may continue to launch your robot. In our case, we run:
84 |
85 | ```
86 | roslaunch kortex_driver kortex_driver.launch
87 | ```
88 |
89 | If you are using the same robot or camera, the drivers can be downloaded and installed from the links below:
90 |
91 | * [ros_kortex](https://github.com/Kinovarobotics/ros_kortex/tree/melodic-devel): melodic-devel
92 | * [realsense-ros](https://github.com/IntelRealSense/realsense-ros): development
93 |
94 | After launching the camera, the ArUco detector, and the robot successfully, you can load `HandEyeCalibration` in RViz as below:
95 |
96 |
97 |
98 |
99 |
100 | In `HandEyeCalibration`, go to `Context`. Set the parameters as below:
101 |
102 | * Sensor configuration: Eye-in-hand
103 | * Sensor frame: camera_color_optical_frame
104 | * End-effector frame: tool_frame
105 | * Robot base frame: base_link
106 |
107 | Go to `Calibrate`. Now you can move the camera around, take samples, and solve for the transform. **Be sure to include some rotation between each pair of poses, and don't always rotate around the same axis: at least two rotation axes are needed to uniquely solve for the calibration.** Calibration is performed automatically and updated each time a new sample is added. It needs at least 5 samples, but the more the better. You might need to install extra packages for the solver if an error pops up. Press `save joint states` and `save camera pose` once you have the result.
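The saved camera pose is only useful at runtime if it is published on the TF tree. Below is a minimal sketch of how the result could be republished as a static transform; the frame names and numbers are placeholders for an eye-in-hand result (tool_frame → camera_color_optical_frame), so substitute the values produced by `save camera pose` for your own setup.

```
#!/usr/bin/env python
# Minimal sketch: republish a saved hand-eye calibration result as a static
# transform so downstream nodes can use it. All frame names and numbers below
# are placeholders; replace them with your own calibration result.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == '__main__':
    rospy.init_node('hand_eye_static_tf')
    broadcaster = tf2_ros.StaticTransformBroadcaster()

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'tool_frame'                   # parent frame (eye-in-hand)
    t.child_frame_id = 'camera_color_optical_frame'    # child frame
    t.transform.translation.x = 0.05                   # placeholder translation (m)
    t.transform.translation.y = 0.00
    t.transform.translation.z = 0.08
    t.transform.rotation.x = 0.0                       # placeholder quaternion
    t.transform.rotation.y = 0.0
    t.transform.rotation.z = 0.0
    t.transform.rotation.w = 1.0

    broadcaster.sendTransform(t)
    rospy.spin()
```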
108 |
109 | For the record, our tf tree is as below:
110 |
111 |
112 |
113 |
114 |
115 | ## Eye-to-hand calibration with the `aruco_ros` package (for Ubuntu 18.04 with default OpenCV 3.2.0)
116 | Eye-to-hand calibration is almost identical to eye-in-hand calibration. The only differences are where the marker is attached and the sensor configuration in MoveIt `HandEyeCalibration`.
117 |
118 | First, attach the marker to the end-effector (shown below), and move the end-effector into the camera's view.
119 |
120 |
121 |
122 |
123 |
124 | Then, in `HandEyeCalibration`, go to `Context`. Set the parameters as below:
125 |
126 | * Sensor configuration: Eye-to-hand
127 | * Sensor frame: camera_color_optical_frame
128 | * End-effector frame: tool_frame
129 | * Robot base frame: base_link
130 |
131 | Your RViz should look like this:
132 |
133 |
134 |
135 |
136 |
137 | The rest is the same as eye-in-hand calibration.
138 |
139 | ## Open-loop planar grasp execution
140 | To grasp with 4 DoF (2.5D planar grasps), we modify [GGCNN](https://arxiv.org/abs/1804.05172) to fit our robot and setup (the original implementation can be found [here](https://github.com/dougsm/ggcnn)). We chose GGCNN because it is lightweight and easy to deploy on any machine.
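For context, GG-CNN predicts per-pixel grasp quality, angle (recovered from separate sin and cos outputs), and gripper width maps; `models/common.py` in this repo converts the raw network output into filtered numpy arrays, and the best planar grasp is simply the pixel with the highest quality value. A minimal sketch of that selection step (the names mirror `post_process_output`):

```
import numpy as np

# q_img, ang_img, width_img are the filtered outputs of post_process_output()
# in models/common.py. The best planar grasp is the maximum of the quality map.
def best_grasp(q_img, ang_img, width_img):
    v, u = np.unravel_index(np.argmax(q_img), q_img.shape)  # row, column
    return {
        'pixel': (u, v),                  # grasp centre in the depth image
        'quality': float(q_img[v, u]),
        'angle': float(ang_img[v, u]),    # radians, in the image plane
        'width': float(width_img[v, u]),  # gripper width in pixels
    }
```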
141 |
142 | To detect 2.5D planar grasps, run the following commands:
143 |
144 | ```
145 | cd PATH/TO/HandEyeCalib-ROS/planr_grasp_ggcnn
146 | roslaunch kortex_driver kortex_driver.launch    # start the robot arm driver
147 | roslaunch YOUR_HAND_EYE_CALIBRATION.launch      # hand-eye calibration from the previous step, not included here
148 | ```
149 | The commands above first launch the KINOVA Gen3/Gen3 Lite driver and then add the calibrated hand-eye transform to the arm's tf tree.
150 |
151 | Run the modified GGCNN:
152 | ```
153 | conda activate YOUR_GGCNN_ENV
154 | python vis_ggcnn.py
155 | ```
156 | After running *vis_ggcnn.py*, an RGB stream with the detected planar grasp will pop up. By default, only the best grasp is shown; feel free to change `--n-grasps` to any number you like. Besides detecting grasps, the script also publishes the ROS topic `/ggcnn_grasp_pose`, which contains the grasp center (pixel coordinates), depth value, angle, width, and quality. To execute the published grasp on your robot, open another terminal and run:
157 |
158 | ```
159 | python ggcnn_pickNplace_gen3lite_full_arm_movement.py
160 | ```
161 | It subscribes to `/ggcnn_grasp_pose`, converts the received grasp from camera frame to robot frame, and executes the grasp.
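Under the hood, the executor deprojects the grasp pixel into a 3D point in the camera frame with the pinhole model and then transforms it into the robot base frame via tf. A condensed sketch of those two steps (the intrinsics and the `fake_camera_frame` name are taken from `ggcnn_pickNplace_gen3lite_full_arm_movement.py`):

```
import numpy as np
import rospy
import tf
from tf.transformations import quaternion_matrix

# Pinhole deprojection: pixel (u, v) plus depth z -> 3D point in the camera frame.
# fx, fy, cx, cy are the RealSense colour intrinsics hard-coded in the executor.
def deproject(u, v, z, fx=614.1268, fy=614.7515, cx=321.2118, cy=234.6375):
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z, 1.0])

# Camera frame -> robot base frame using the calibrated transform on the tf tree.
def to_base_frame(point_cam, listener):
    trans, rot = listener.lookupTransform('base_link', 'fake_camera_frame', rospy.Time(0))
    mat = quaternion_matrix(rot)   # 4x4 homogeneous matrix from the quaternion
    mat[0:3, 3] = trans            # insert the translation
    return mat.dot(point_cam)[:3]  # apply the transform, drop the homogeneous 1
```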
162 |
163 |
164 |
165 |
166 |
167 |
168 |
169 |
170 |
171 |
172 | ## Open-loop 6-DoF grasp execution
173 | TBA
174 | ## Closed-loop planar grasp execution
175 | TBA
176 | ## Closed-loop 6-DoF grasp execution
177 | TBA
178 |
179 |
180 |
181 |
182 |
183 |
184 |
185 |
186 |
187 |
--------------------------------------------------------------------------------
/images/653839573.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/images/653839573.jpg
--------------------------------------------------------------------------------
/images/Screenshot from 2021-11-19 10-03-58.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/images/Screenshot from 2021-11-19 10-03-58.png
--------------------------------------------------------------------------------
/images/Screenshot from 2021-11-19 15-11-12.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/images/Screenshot from 2021-11-19 15-11-12.png
--------------------------------------------------------------------------------
/images/Screenshot from 2021-11-19 15-13-55.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/images/Screenshot from 2021-11-19 15-13-55.png
--------------------------------------------------------------------------------
/images/Screenshot from 2021-11-19 15-14-46.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/images/Screenshot from 2021-11-19 15-14-46.png
--------------------------------------------------------------------------------
/images/Screenshot from 2021-11-19 16-07-03.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/images/Screenshot from 2021-11-19 16-07-03.png
--------------------------------------------------------------------------------
/images/Screenshot from 2022-04-12 19-11-45.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/images/Screenshot from 2022-04-12 19-11-45.png
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/.idea/.gitignore:
--------------------------------------------------------------------------------
1 | # Default ignored files
2 | /shelf/
3 | /workspace.xml
4 | # Datasource local storage ignored files
5 | /dataSources/
6 | /dataSources.local.xml
7 | # Editor-based HTTP Client requests
8 | /httpRequests/
9 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/.idea/ggcnn.iml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/.idea/inspectionProfiles/Project_Default.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
12 |
13 |
14 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/.idea/inspectionProfiles/profiles_settings.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/.idea/misc.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/.idea/modules.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/.idea/vcs.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/eval_ggcnn.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import logging
3 |
4 | import torch.utils.data
5 |
6 | from models.common import post_process_output
7 | from utils.dataset_processing import evaluation, grasp
8 | from utils.data import get_dataset
9 |
10 | logging.basicConfig(level=logging.INFO)
11 |
12 |
13 | def parse_args():
14 | parser = argparse.ArgumentParser(description='Evaluate GG-CNN')
15 |
16 | # Network
17 | parser.add_argument('--network', type=str, default='./ggcnn_weights_cornell/ggcnn_epoch_23_cornell', help='Path to saved network to evaluate')
18 |
19 | # Dataset & Data & Training
20 |     parser.add_argument('--dataset', type=str, help='Dataset Name ("cornell" or "jacquard")')
21 | parser.add_argument('--dataset-path', type=str, help='Path to dataset')
22 | parser.add_argument('--use-depth', type=int, default=1, help='Use Depth image for evaluation (1/0)')
23 | parser.add_argument('--use-rgb', type=int, default=0, help='Use RGB image for evaluation (0/1)')
24 | parser.add_argument('--augment', action='store_true', help='Whether data augmentation should be applied')
25 | parser.add_argument('--split', type=float, default=0.9, help='Fraction of data for training (remainder is validation)')
26 | parser.add_argument('--ds-rotate', type=float, default=0.0,
27 | help='Shift the start point of the dataset to use a different test/train split')
28 | parser.add_argument('--num-workers', type=int, default=8, help='Dataset workers')
29 |
30 | parser.add_argument('--n-grasps', type=int, default=1, help='Number of grasps to consider per image')
31 | parser.add_argument('--iou-eval', action='store_true', help='Compute success based on IoU metric.')
32 | parser.add_argument('--jacquard-output', action='store_true', help='Jacquard-dataset style output')
33 | parser.add_argument('--vis', action='store_true', help='Visualise the network output')
34 |
35 | args = parser.parse_args()
36 |
37 | if args.jacquard_output and args.dataset != 'jacquard':
38 | raise ValueError('--jacquard-output can only be used with the --dataset jacquard option.')
39 | if args.jacquard_output and args.augment:
40 | raise ValueError('--jacquard-output can not be used with data augmentation.')
41 |
42 | return args
43 |
44 |
45 | if __name__ == '__main__':
46 | args = parse_args()
47 |
48 | # Load Network
49 | net = torch.load(args.network)
50 | device = torch.device("cuda:0")
51 |
52 | # Load Dataset
53 | logging.info('Loading {} Dataset...'.format(args.dataset.title()))
54 | Dataset = get_dataset(args.dataset)
55 | test_dataset = Dataset(args.dataset_path, start=args.split, end=1.0, ds_rotate=args.ds_rotate,
56 | random_rotate=args.augment, random_zoom=args.augment,
57 | include_depth=args.use_depth, include_rgb=args.use_rgb)
58 | test_data = torch.utils.data.DataLoader(
59 | test_dataset,
60 | batch_size=1,
61 | shuffle=False,
62 | num_workers=args.num_workers
63 | )
64 | logging.info('Done')
65 |
66 | results = {'correct': 0, 'failed': 0}
67 |
68 | if args.jacquard_output:
69 | jo_fn = args.network + '_jacquard_output.txt'
70 | with open(jo_fn, 'w') as f:
71 | pass
72 |
73 | with torch.no_grad():
74 | for idx, (x, y, didx, rot, zoom) in enumerate(test_data):
75 | logging.info('Processing {}/{}'.format(idx+1, len(test_data)))
76 | xc = x.to(device)
77 | yc = [yi.to(device) for yi in y]
78 | lossd = net.compute_loss(xc, yc)
79 |
80 | q_img, ang_img, width_img = post_process_output(lossd['pred']['pos'], lossd['pred']['cos'],
81 | lossd['pred']['sin'], lossd['pred']['width'])
82 |
83 | if args.iou_eval:
84 | s = evaluation.calculate_iou_match(q_img, ang_img, test_data.dataset.get_gtbb(didx, rot, zoom),
85 | no_grasps=args.n_grasps,
86 | grasp_width=width_img,
87 | )
88 | if s:
89 | results['correct'] += 1
90 | else:
91 | results['failed'] += 1
92 |
93 | if args.jacquard_output:
94 | grasps = grasp.detect_grasps(q_img, ang_img, width_img=width_img, no_grasps=1)
95 | with open(jo_fn, 'a') as f:
96 | for g in grasps:
97 | f.write(test_data.dataset.get_jname(didx) + '\n')
98 | f.write(g.to_jacquard(scale=1024 / 300) + '\n')
99 |
100 | if args.vis:
101 | evaluation.plot_output(test_data.dataset.get_rgb(didx, rot, zoom, normalise=False),
102 | test_data.dataset.get_depth(didx, rot, zoom), q_img,
103 | ang_img, no_grasps=args.n_grasps, grasp_width_img=width_img)
104 |
105 | if args.iou_eval:
106 | logging.info('IOU Results: %d/%d = %f' % (results['correct'],
107 | results['correct'] + results['failed'],
108 | results['correct'] / (results['correct'] + results['failed'])))
109 |
110 | if args.jacquard_output:
111 | logging.info('Jacquard output saved to {}'.format(jo_fn))
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/ggcnn2_weights_cornell/epoch_50_cornell:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/ggcnn2_weights_cornell/epoch_50_cornell
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/ggcnn2_weights_cornell/epoch_50_cornell_statedict.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/ggcnn2_weights_cornell/epoch_50_cornell_statedict.pt
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/ggcnn_pickNplace_gen3lite_full_arm_movement.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | ###
3 | # KINOVA (R) KORTEX (TM)
4 | #
5 | # Copyright (c) 2019 Kinova inc. All rights reserved.
6 | #
7 | # This software may be modified and distributed
8 | # under the terms of the BSD 3-Clause license.
9 | #
10 | # Refer to the LICENSE file for details.
11 | #
12 | ###
13 |
14 | import sys
15 | import rospy
16 | import roslib
17 | import time
18 | import cv2
19 | import numpy as np
20 | from matplotlib import pyplot as plt
21 | from std_msgs.msg import String, Float64MultiArray
22 | from sensor_msgs.msg import Image
23 | from cv_bridge import CvBridge, CvBridgeError
24 | import pdb
25 | import os
26 | import tf
27 | from tf.transformations import quaternion_matrix
28 | # from scipy import misc
29 | import tifffile
30 |
31 |
32 | from kortex_driver.srv import *
33 | from kortex_driver.msg import *
34 |
35 |
36 | class ExampleFullArmMovement:
37 | def __init__(self):
38 | try:
39 | rospy.init_node('example_full_arm_movement_python')
40 |
41 | self.HOME_ACTION_IDENTIFIER = 10000
42 |
43 | # Get node params
44 | self.robot_name = rospy.get_param('~robot_name', "my_gen3_lite")
45 | self.degrees_of_freedom = rospy.get_param("/" + self.robot_name + "/degrees_of_freedom", 7)
46 | self.is_gripper_present = rospy.get_param("/" + self.robot_name + "/is_gripper_present", False)
47 |
48 | rospy.loginfo("Using robot_name " + self.robot_name + " , robot has " + str(self.degrees_of_freedom) + " degrees of freedom and is_gripper_present is " + str(self.is_gripper_present))
49 |
50 | self.bridge = CvBridge()
51 |
52 | # Init the action topic subscriber
53 | self.action_topic_sub = rospy.Subscriber("/" + self.robot_name + "/action_topic", ActionNotification, self.cb_action_topic)
54 | self.last_action_notif_type = None
55 |
56 | # Init ggcnn subscriber
57 | self.grasp_pose_sub = rospy.Subscriber('/ggcnn_grasp_pose', Float64MultiArray, self.cb_grasp_topic)
58 |
59 | # Init the services
60 | clear_faults_full_name = '/' + self.robot_name + '/base/clear_faults'
61 | rospy.wait_for_service(clear_faults_full_name)
62 | self.clear_faults = rospy.ServiceProxy(clear_faults_full_name, Base_ClearFaults)
63 |
64 | read_action_full_name = '/' + self.robot_name + '/base/read_action'
65 | rospy.wait_for_service(read_action_full_name)
66 | self.read_action = rospy.ServiceProxy(read_action_full_name, ReadAction)
67 |
68 | execute_action_full_name = '/' + self.robot_name + '/base/execute_action'
69 | rospy.wait_for_service(execute_action_full_name)
70 | self.execute_action = rospy.ServiceProxy(execute_action_full_name, ExecuteAction)
71 |
72 | play_cartesian_trajectory_full_name = '/' + self.robot_name + '/base/play_cartesian_trajectory'
73 | rospy.wait_for_service(play_cartesian_trajectory_full_name)
74 | self.play_cartesian_trajectory = rospy.ServiceProxy(play_cartesian_trajectory_full_name, PlayCartesianTrajectory)
75 |
76 |
77 | set_cartesian_reference_frame_full_name = '/' + self.robot_name + '/control_config/set_cartesian_reference_frame'
78 | rospy.wait_for_service(set_cartesian_reference_frame_full_name)
79 | self.set_cartesian_reference_frame = rospy.ServiceProxy(set_cartesian_reference_frame_full_name, SetCartesianReferenceFrame)
80 |
81 | send_gripper_command_full_name = '/' + self.robot_name + '/base/send_gripper_command'
82 | rospy.wait_for_service(send_gripper_command_full_name)
83 | self.send_gripper_command = rospy.ServiceProxy(send_gripper_command_full_name, SendGripperCommand)
84 |
85 | activate_publishing_of_action_notification_full_name = '/' + self.robot_name + '/base/activate_publishing_of_action_topic'
86 | rospy.wait_for_service(activate_publishing_of_action_notification_full_name)
87 | self.activate_publishing_of_action_notification = rospy.ServiceProxy(activate_publishing_of_action_notification_full_name, OnNotificationActionTopic)
88 |
89 | get_product_configuration_full_name = '/' + self.robot_name + '/base/get_product_configuration'
90 | rospy.wait_for_service(get_product_configuration_full_name)
91 | self.get_product_configuration = rospy.ServiceProxy(get_product_configuration_full_name, GetProductConfiguration)
92 |
93 | validate_waypoint_list_full_name = '/' + self.robot_name + '/base/validate_waypoint_list'
94 | rospy.wait_for_service(validate_waypoint_list_full_name)
95 | self.validate_waypoint_list = rospy.ServiceProxy(validate_waypoint_list_full_name, ValidateWaypointList)
96 | except:
97 | self.is_init_success = False
98 | else:
99 | self.is_init_success = True
100 |
101 | # define camera intrinsic param
102 | self.fx = 614.1268
103 | self.fy = 614.7515
104 | self.cx = 321.2118
105 | self.cy = 234.6375
106 | self.u = 0
107 | self.v = 0
108 | self.depth = 0
109 | self.angle = 0
110 | self.width = 0
111 | self.quality = 0
112 |
113 | def cb_action_topic(self, notif):
114 | self.last_action_notif_type = notif.action_event
115 |
116 |
117 | def cb_grasp_topic(self, grasp_pose):
118 | self.u = grasp_pose.data[1]
119 | self.v = grasp_pose.data[0]
120 | self.depth = grasp_pose.data[2]
121 | self.angle = grasp_pose.data[3]
122 | self.width = grasp_pose.data[4]
123 | self.quality = grasp_pose.data[5]
124 |
125 | def FillCartesianWaypoint(self, new_x, new_y, new_z, new_theta_x, new_theta_y, new_theta_z, blending_radius):
126 | waypoint = Waypoint()
127 | cartesianWaypoint = CartesianWaypoint()
128 |
129 | cartesianWaypoint.pose.x = new_x
130 | cartesianWaypoint.pose.y = new_y
131 | cartesianWaypoint.pose.z = new_z
132 | cartesianWaypoint.pose.theta_x = new_theta_x
133 | cartesianWaypoint.pose.theta_y = new_theta_y
134 | cartesianWaypoint.pose.theta_z = new_theta_z
135 | cartesianWaypoint.reference_frame = CartesianReferenceFrame.CARTESIAN_REFERENCE_FRAME_BASE
136 | cartesianWaypoint.blending_radius = blending_radius
137 | waypoint.oneof_type_of_waypoint.cartesian_waypoint.append(cartesianWaypoint)
138 |
139 | return waypoint
140 |
141 | def wait_for_action_end_or_abort(self):
142 | while not rospy.is_shutdown():
143 | if (self.last_action_notif_type == ActionEvent.ACTION_END):
144 | rospy.loginfo("Received ACTION_END notification")
145 | return True
146 | elif (self.last_action_notif_type == ActionEvent.ACTION_ABORT):
147 | rospy.loginfo("Received ACTION_ABORT notification")
148 | return False
149 | else:
150 | time.sleep(0.01)
151 |
152 | def example_subscribe_to_a_robot_notification(self):
153 | # Activate the publishing of the ActionNotification
154 | req = OnNotificationActionTopicRequest()
155 | rospy.loginfo("Activating the action notifications...")
156 | try:
157 | self.activate_publishing_of_action_notification(req)
158 | except rospy.ServiceException:
159 | rospy.logerr("Failed to call OnNotificationActionTopic")
160 | return False
161 | else:
162 | rospy.loginfo("Successfully activated the Action Notifications!")
163 |
164 | rospy.sleep(1.0)
165 | return True
166 |
167 | def example_clear_faults(self):
168 | try:
169 | self.clear_faults()
170 | except rospy.ServiceException:
171 | rospy.logerr("Failed to call ClearFaults")
172 | return False
173 | else:
174 | rospy.loginfo("Cleared the faults successfully")
175 | rospy.sleep(2.5)
176 | return True
177 |
178 | def example_home_the_robot(self):
179 |         # Home the robot using the saved home action (identifier HOME_ACTION_IDENTIFIER, set in __init__):
180 | self.last_action_notif_type = None
181 | req = ReadActionRequest()
182 | req.input.identifier = self.HOME_ACTION_IDENTIFIER
183 | try:
184 | res = self.read_action(req)
185 | except rospy.ServiceException:
186 | rospy.logerr("Failed to call ReadAction")
187 | return False
188 | # Execute the HOME action if we could read it
189 | else:
190 | # What we just read is the input of the ExecuteAction service
191 | req = ExecuteActionRequest()
192 | req.input = res.output
193 | rospy.loginfo("Sending the robot home...")
194 | try:
195 | self.execute_action(req)
196 | return True
197 | except rospy.ServiceException:
198 | rospy.logerr("Failed to call ExecuteAction")
199 | return False
200 | else:
201 | return self.wait_for_action_end_or_abort()
202 |
203 | def example_place_the_object(self):
204 |         # Read and execute a saved 'place' action (identifier 10001 in this setup):
205 | self.last_action_notif_type = None
206 | req = ReadActionRequest()
207 | req.input.identifier = 10001
208 | try:
209 | res = self.read_action(req)
210 | except rospy.ServiceException:
211 | rospy.logerr("Failed to call ReadAction")
212 | return False
213 |         # Execute the place action if we could read it
214 | else:
215 | # What we just read is the input of the ExecuteAction service
216 | req = ExecuteActionRequest()
217 | req.input = res.output
218 |             rospy.loginfo("Sending the robot to the place position...")
219 | try:
220 | self.execute_action(req)
221 | return True
222 | except rospy.ServiceException:
223 | rospy.logerr("Failed to call ExecuteAction")
224 | return False
225 | else:
226 | return self.wait_for_action_end_or_abort()
227 |
228 | def example_set_cartesian_reference_frame(self):
229 | self.last_action_notif_type = None
230 | # Prepare the request with the frame we want to set
231 | req = SetCartesianReferenceFrameRequest()
232 | req.input.reference_frame = CartesianReferenceFrame.CARTESIAN_REFERENCE_FRAME_MIXED
233 |
234 | # Call the service
235 | try:
236 |             self.set_cartesian_reference_frame(req)
237 | except rospy.ServiceException:
238 | rospy.logerr("Failed to call SetCartesianReferenceFrame")
239 | return False
240 | else:
241 | rospy.loginfo("Set the cartesian reference frame successfully")
242 |
243 | # Wait a bit
244 | rospy.sleep(0.25)
245 | return True
246 |
247 | def example_send_cartesian_pose(self):
248 | self.last_action_notif_type = None
249 | # Get the actual cartesian pose to increment it
250 | # You can create a subscriber to listen to the base_feedback
251 | # Here we only need the latest message in the topic though
252 | feedback = rospy.wait_for_message("/" + self.robot_name + "/base_feedback", BaseCyclic_Feedback)
253 |
254 | # Possible to execute waypointList via execute_action service or use execute_waypoint_trajectory service directly
255 | req = ExecuteActionRequest()
256 | trajectory = WaypointList()
257 |
258 | trajectory.waypoints.append(
259 | self.FillCartesianWaypoint(
260 | feedback.base.commanded_tool_pose_x,
261 | feedback.base.commanded_tool_pose_y,
262 | feedback.base.commanded_tool_pose_z + 0.10,
263 | feedback.base.commanded_tool_pose_theta_x,
264 | feedback.base.commanded_tool_pose_theta_y,
265 | feedback.base.commanded_tool_pose_theta_z,
266 | 0)
267 | )
268 |
269 | trajectory.duration = 0
270 | trajectory.use_optimal_blending = False
271 |
272 | req.input.oneof_action_parameters.execute_waypoint_list.append(trajectory)
273 |
274 | # Call the service
275 | rospy.loginfo("Sending the robot to the cartesian pose...")
276 | try:
277 | self.execute_action(req)
278 | except rospy.ServiceException:
279 | rospy.logerr("Failed to call ExecuteWaypointTrajectory")
280 | return False
281 | else:
282 | return self.wait_for_action_end_or_abort()
283 |
284 | def example_send_joint_angles(self):
285 | self.last_action_notif_type = None
286 |
287 | req = ExecuteActionRequest()
288 |
289 | trajectory = WaypointList()
290 | waypoint = Waypoint()
291 | angularWaypoint = AngularWaypoint()
292 |
293 |         # Target joint angles in degrees for this setup (Gen3 Lite, 6 joints).
294 |         # The original Kinova example appended 0.0 for every joint to send the arm
295 |         # to the vertical position; a fixed configuration is used here instead.
296 |         angularWaypoint.angles = [0, 360, 90, 270, 270, 270]
297 |
298 |
299 | # Each AngularWaypoint needs a duration and the global duration (from WaypointList) is disregarded.
300 | # If you put something too small (for either global duration or AngularWaypoint duration), the trajectory will be rejected.
301 | angular_duration = 0
302 | angularWaypoint.duration = angular_duration
303 |
304 | # Initialize Waypoint and WaypointList
305 | waypoint.oneof_type_of_waypoint.angular_waypoint.append(angularWaypoint)
306 | trajectory.duration = 0
307 | trajectory.use_optimal_blending = False
308 | trajectory.waypoints.append(waypoint)
309 |
310 | try:
311 | res = self.validate_waypoint_list(trajectory)
312 | except rospy.ServiceException:
313 | rospy.logerr("Failed to call ValidateWaypointList")
314 | return False
315 |
316 | error_number = len(res.output.trajectory_error_report.trajectory_error_elements)
317 | MAX_ANGULAR_DURATION = 30
318 |
319 | while (error_number >= 1 and angular_duration != MAX_ANGULAR_DURATION) :
320 | angular_duration += 1
321 | trajectory.waypoints[0].oneof_type_of_waypoint.angular_waypoint[0].duration = angular_duration
322 |
323 | try:
324 | res = self.validate_waypoint_list(trajectory)
325 | except rospy.ServiceException:
326 | rospy.logerr("Failed to call ValidateWaypointList")
327 | return False
328 |
329 | error_number = len(res.output.trajectory_error_report.trajectory_error_elements)
330 |
331 | if (angular_duration == MAX_ANGULAR_DURATION) :
332 | # It should be possible to reach position within 30s
333 | # WaypointList is invalid (other error than angularWaypoint duration)
334 | rospy.loginfo("WaypointList is invalid")
335 | return False
336 |
337 | req.input.oneof_action_parameters.execute_waypoint_list.append(trajectory)
338 |
339 | # Send the angles
340 |         rospy.loginfo("Sending the joint angle waypoint...")
341 | try:
342 | self.execute_action(req)
343 | except rospy.ServiceException:
344 |             rospy.logerr("Failed to call ExecuteWaypointTrajectory")
345 | return False
346 | else:
347 | return self.wait_for_action_end_or_abort()
348 |
349 | def example_send_gripper_command(self, value):
350 | # Initialize the request
351 | # Close the gripper
352 | req = SendGripperCommandRequest()
353 | finger = Finger()
354 | finger.finger_identifier = 0
355 | finger.value = value
356 | req.input.gripper.finger.append(finger)
357 | req.input.mode = GripperMode.GRIPPER_POSITION
358 |
359 | rospy.loginfo("Sending the gripper command...")
360 |
361 | # Call the service
362 | try:
363 | self.send_gripper_command(req)
364 | except rospy.ServiceException:
365 | rospy.logerr("Failed to call SendGripperCommand")
366 | return False
367 | else:
368 | time.sleep(2)
369 | return True
370 |
371 | def example_cartesian_waypoint_action(self):
372 | self.last_action_notif_type = None
373 |
374 | req = ExecuteActionRequest()
375 | trajectory = WaypointList()
376 |
377 | config = self.get_product_configuration()
378 |
379 | if config.output.model == ModelId.MODEL_ID_L31:
380 |
381 | trajectory.waypoints.append(self.FillCartesianWaypoint(0.439, 0.194, 0.448, 90.6, -1.0, 150, 0))
382 | trajectory.waypoints.append(self.FillCartesianWaypoint(0.200, 0.150, 0.400, 90.6, -1.0, 150, 0))
383 | trajectory.waypoints.append(self.FillCartesianWaypoint(0.350, 0.050, 0.300, 90.6, -1.0, 150, 0))
384 | else:
385 | trajectory.waypoints.append(self.FillCartesianWaypoint(0.7, 0.0, 0.5, 90, 0, 90, 0))
386 | trajectory.waypoints.append(self.FillCartesianWaypoint(0.7, 0.0, 0.33, 90, 0, 90, 0.1))
387 | trajectory.waypoints.append(self.FillCartesianWaypoint(0.7, 0.48, 0.33, 90, 0, 90, 0.1))
388 | trajectory.waypoints.append(self.FillCartesianWaypoint(0.61, 0.22, 0.4, 90, 0, 90, 0.1))
389 | trajectory.waypoints.append(self.FillCartesianWaypoint(0.7, 0.48, 0.33, 90, 0, 90, 0.1))
390 | trajectory.waypoints.append(self.FillCartesianWaypoint(0.63, -0.22, 0.45, 90, 0, 90, 0.1))
391 | trajectory.waypoints.append(self.FillCartesianWaypoint(0.65, 0.05, 0.45, 90, 0, 90, 0))
392 |
393 | req.input.oneof_action_parameters.execute_waypoint_list.append(trajectory)
394 |
395 | # Call the service
396 | rospy.loginfo("Executing Kortex action ExecuteWaypointTrajectory...")
397 | try:
398 | self.execute_action(req)
399 | except rospy.ServiceException:
400 | rospy.logerr("Failed to call action ExecuteWaypointTrajectory")
401 | return False
402 | else:
403 | return self.wait_for_action_end_or_abort()
404 |
405 | def set_pose(self, pos, angle):
406 |         '''Send the end-effector to a Cartesian pose: pos = [x, y, z] in metres,
407 |         angle = [theta_x, theta_y, theta_z] in degrees, via PlayCartesianTrajectory.'''
408 | self.last_action_notif_type = None
409 | # Get the actual cartesian pose to increment it
410 | # You can create a subscriber to listen to the base_feedback
411 | # Here we only need the latest message in the topic though
412 | feedback = rospy.wait_for_message("/" + self.robot_name + "/base_feedback", BaseCyclic_Feedback)
413 |
414 | req = PlayCartesianTrajectoryRequest()
415 |
416 | req.input.target_pose.theta_x = angle[0]
417 | req.input.target_pose.theta_y = angle[1]
418 | req.input.target_pose.theta_z = angle[2]
419 |
420 | req.input.target_pose.x = pos[0]
421 | req.input.target_pose.y = pos[1]
422 | req.input.target_pose.z = pos[2]
423 | pose_speed = CartesianSpeed()
424 | pose_speed.translation = 0.22
425 | pose_speed.orientation = 35
426 |
427 | req.input.constraint.oneof_type.speed.append(pose_speed)
428 |
429 | # Call the service
430 | rospy.loginfo("Sending the robot to the cartesian pose...")
431 | try:
432 | self.play_cartesian_trajectory(req)
433 | except rospy.ServiceException:
434 | rospy.logerr("Failed to call PlayCartesianTrajectory")
435 | return False
436 | else:
437 | return self.wait_for_action_end_or_abort()
438 |
439 | def generate_grasp_pose(self, u, v, z):
440 | """
441 | get 3d grasp points in camera color frame
442 | """
443 | x = (u - self.cx) * z / self.fx
444 | y = (v - self.cy) * z / self.fy
445 |
446 | return [x, y, z]
447 |
448 | def main(self):
449 | # For testing purposes
450 | success = self.is_init_success
451 | try:
452 | rospy.delete_param("/kortex_examples_test_results/full_arm_movement_python")
453 | except:
454 | pass
455 |
456 |
457 | ls = tf.TransformListener()
458 |
459 | success &= self.example_home_the_robot()
460 | success &= self.example_send_gripper_command(0.2)
461 |
462 |         grasp_flag = int(input('Start to grasp? (0) No, (1) Yes'))
463 | while(grasp_flag):
464 |
465 |
466 | # rospy.loginfo('Read in grasp parameters.')
467 | # rospy.sleep(5)
468 |
469 | # # check grasp quality
470 | # quality = self.quality
471 | # if quality >= 0.75:
472 | # rospy.loginfo('Grasp quality higher than threshold, continue to grasp.')
473 | # else:
474 | # rospy.loginfo('Grasp quality lower than threshold, stop grasping.')
475 | # break
476 |
477 | # record grasp params
478 | x, y, z = self.generate_grasp_pose(self.u, self.v, self.depth)
479 | angle = self.angle
480 | width = 1.0 - ((self.width - 100)) / 70 * 1.0
481 | success &= self.example_send_gripper_command(width)
482 |
483 | #*******************************************************************************
484 | # Make sure to clear the robot's faults else it won't move if it's already in fault
485 | success &= self.example_clear_faults()
486 | #*******************************************************************************
487 |
488 | #*******************************************************************************
489 | # Activate the action notifications
490 | success &= self.example_subscribe_to_a_robot_notification()
491 | #*******************************************************************************
492 |
493 | #*******************************************************************************
494 | # Move the robot to the Home position with an Action
495 | (T, R) = ls.lookupTransform("base_link", "fake_camera_frame", rospy.Time(0))
496 | mat = quaternion_matrix(R)
497 | mat[0, -1], mat[1, -1], mat[2, -1] = T[0], T[1], T[2]
498 | point = np.array([x, y, z, 1]).T
499 | result = mat.dot(point)
500 | #*******************************************************************************
501 |
502 | #*******************************************************************************
503 | # send robot to grasp point
504 | success &= self.set_pose([result[0], result[1], result[2]+0.1], [0, 180, 90+((angle/np.pi)*180)])
505 | rospy.sleep(1)
506 | success &= self.set_pose([result[0], result[1], result[2]], [0, 180, 90+((angle/np.pi)*180)])
507 | #*******************************************************************************
508 |
509 | #*******************************************************************************
510 | # close gripper and send robot home
511 | success &= self.example_send_gripper_command(0.9)
512 | success &= self.example_home_the_robot()
513 | rospy.sleep(3)
514 | success &= self.example_place_the_object()
515 | rospy.sleep(4)
516 | success &= self.example_send_gripper_command(0.2)
517 | success &= self.example_home_the_robot()
518 | #*******************************************************************************
519 |
520 | #*******************************************************************************
521 | # check grasp flag
522 |             grasp_flag = int(input('Continue to grasp? (0) No, (1) Yes'))
523 | #*******************************************************************************
524 |
525 | # # For testing purposes
526 | # rospy.set_param("/kortex_examples_test_results/full_arm_movement_python", success)
527 |
528 | if not success:
529 | rospy.logerr("The example encountered an error.")
530 |
531 |
532 | if __name__ == "__main__":
533 | ex = ExampleFullArmMovement()
534 | ex.main()
535 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/ggcnn_weights_cornell/ggcnn_epoch_23_cornell:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/ggcnn_weights_cornell/ggcnn_epoch_23_cornell
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/ggcnn_weights_cornell/ggcnn_epoch_23_cornell_statedict.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/ggcnn_weights_cornell/ggcnn_epoch_23_cornell_statedict.pt
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/models/__init__.py:
--------------------------------------------------------------------------------
1 | def get_network(network_name):
2 | network_name = network_name.lower()
3 | if network_name == 'ggcnn':
4 | from .ggcnn import GGCNN
5 | return GGCNN
6 | elif network_name == 'ggcnn2':
7 | from .ggcnn2 import GGCNN2
8 | return GGCNN2
9 | else:
10 | raise NotImplementedError('Network {} is not implemented'.format(network_name))
11 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/models/__pycache__/__init__.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/models/__pycache__/__init__.cpython-36.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/models/__pycache__/__init__.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/models/__pycache__/__init__.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/models/__pycache__/common.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/models/__pycache__/common.cpython-36.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/models/__pycache__/common.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/models/__pycache__/common.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/models/__pycache__/ggcnn.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/models/__pycache__/ggcnn.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/models/__pycache__/ggcnn2.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/models/__pycache__/ggcnn2.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/models/common.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from skimage.filters import gaussian
3 |
4 |
5 | def post_process_output(q_img, cos_img, sin_img, width_img):
6 | """
7 | Post-process the raw output of the GG-CNN, convert to numpy arrays, apply filtering.
8 | :param q_img: Q output of GG-CNN (as torch Tensors)
9 | :param cos_img: cos output of GG-CNN
10 | :param sin_img: sin output of GG-CNN
11 | :param width_img: Width output of GG-CNN
12 | :return: Filtered Q output, Filtered Angle output, Filtered Width output
13 | """
14 | q_img = q_img.cpu().numpy().squeeze()
15 | ang_img = (torch.atan2(sin_img, cos_img) / 2.0).cpu().numpy().squeeze()
16 | width_img = width_img.cpu().numpy().squeeze() * 150.0
17 |
18 | q_img = gaussian(q_img, 2.0, preserve_range=True)
19 | ang_img = gaussian(ang_img, 2.0, preserve_range=True)
20 | width_img = gaussian(width_img, 1.0, preserve_range=True)
21 |
22 | return q_img, ang_img, width_img
23 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/models/ggcnn.py:
--------------------------------------------------------------------------------
1 | import torch.nn as nn
2 | import torch.nn.functional as F
3 |
4 | filter_sizes = [32, 16, 8, 8, 16, 32]
5 | kernel_sizes = [9, 5, 3, 3, 5, 9]
6 | strides = [3, 2, 2, 2, 2, 3]
7 |
8 |
9 | class GGCNN(nn.Module):
10 | """
11 | GG-CNN
12 |     Equivalent to the Keras Model used in the RSS Paper (https://arxiv.org/abs/1804.05172)
13 | """
14 | def __init__(self, input_channels=1):
15 | super().__init__()
16 | self.conv1 = nn.Conv2d(input_channels, filter_sizes[0], kernel_sizes[0], stride=strides[0], padding=3)
17 | self.conv2 = nn.Conv2d(filter_sizes[0], filter_sizes[1], kernel_sizes[1], stride=strides[1], padding=2)
18 | self.conv3 = nn.Conv2d(filter_sizes[1], filter_sizes[2], kernel_sizes[2], stride=strides[2], padding=1)
19 | self.convt1 = nn.ConvTranspose2d(filter_sizes[2], filter_sizes[3], kernel_sizes[3], stride=strides[3], padding=1, output_padding=1)
20 | self.convt2 = nn.ConvTranspose2d(filter_sizes[3], filter_sizes[4], kernel_sizes[4], stride=strides[4], padding=2, output_padding=1)
21 | self.convt3 = nn.ConvTranspose2d(filter_sizes[4], filter_sizes[5], kernel_sizes[5], stride=strides[5], padding=3, output_padding=1)
22 |
23 | self.pos_output = nn.Conv2d(filter_sizes[5], 1, kernel_size=2)
24 | self.cos_output = nn.Conv2d(filter_sizes[5], 1, kernel_size=2)
25 | self.sin_output = nn.Conv2d(filter_sizes[5], 1, kernel_size=2)
26 | self.width_output = nn.Conv2d(filter_sizes[5], 1, kernel_size=2)
27 |
28 | for m in self.modules():
29 | if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
30 | nn.init.xavier_uniform_(m.weight, gain=1)
31 |
32 | def forward(self, x):
33 | x = F.relu(self.conv1(x))
34 | x = F.relu(self.conv2(x))
35 | x = F.relu(self.conv3(x))
36 | x = F.relu(self.convt1(x))
37 | x = F.relu(self.convt2(x))
38 | x = F.relu(self.convt3(x))
39 |
40 | pos_output = self.pos_output(x)
41 | cos_output = self.cos_output(x)
42 | sin_output = self.sin_output(x)
43 | width_output = self.width_output(x)
44 |
45 | return pos_output, cos_output, sin_output, width_output
46 |
47 | def compute_loss(self, xc, yc):
48 | y_pos, y_cos, y_sin, y_width = yc
49 | pos_pred, cos_pred, sin_pred, width_pred = self(xc)
50 |
51 | p_loss = F.mse_loss(pos_pred, y_pos)
52 | cos_loss = F.mse_loss(cos_pred, y_cos)
53 | sin_loss = F.mse_loss(sin_pred, y_sin)
54 | width_loss = F.mse_loss(width_pred, y_width)
55 |
56 | return {
57 | 'loss': p_loss + cos_loss + sin_loss + width_loss,
58 | 'losses': {
59 | 'p_loss': p_loss,
60 | 'cos_loss': cos_loss,
61 | 'sin_loss': sin_loss,
62 | 'width_loss': width_loss
63 | },
64 | 'pred': {
65 | 'pos': pos_pred,
66 | 'cos': cos_pred,
67 | 'sin': sin_pred,
68 | 'width': width_pred
69 | }
70 | }
71 |
72 | def compute_grasp(self, xc):
73 | # y_pos, y_cos, y_sin, y_width = yc
74 | pos_pred, cos_pred, sin_pred, width_pred = self(xc)
75 |
76 | # p_loss = F.mse_loss(pos_pred, y_pos)
77 | # cos_loss = F.mse_loss(cos_pred, y_cos)
78 | # sin_loss = F.mse_loss(sin_pred, y_sin)
79 | # width_loss = F.mse_loss(width_pred, y_width)
80 |
81 | return {
82 |
83 | 'pos': pos_pred,
84 | 'cos': cos_pred,
85 | 'sin': sin_pred,
86 | 'width': width_pred
87 | }
88 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/models/ggcnn2.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 |
6 | class GGCNN2(nn.Module):
7 | def __init__(self, input_channels=1, filter_sizes=None, l3_k_size=5, dilations=None):
8 | super().__init__()
9 |
10 | if filter_sizes is None:
11 | filter_sizes = [16, # First set of convs
12 | 16, # Second set of convs
13 | 32, # Dilated convs
14 | 16] # Transpose Convs
15 |
16 | if dilations is None:
17 | dilations = [2, 4]
18 |
19 | self.features = nn.Sequential(
20 | # 4 conv layers.
21 | nn.Conv2d(input_channels, filter_sizes[0], kernel_size=11, stride=1, padding=5, bias=True),
22 | nn.ReLU(inplace=True),
23 | nn.Conv2d(filter_sizes[0], filter_sizes[0], kernel_size=5, stride=1, padding=2, bias=True),
24 | nn.ReLU(inplace=True),
25 | nn.MaxPool2d(kernel_size=2, stride=2),
26 |
27 | nn.Conv2d(filter_sizes[0], filter_sizes[1], kernel_size=5, stride=1, padding=2, bias=True),
28 | nn.ReLU(inplace=True),
29 | nn.Conv2d(filter_sizes[1], filter_sizes[1], kernel_size=5, stride=1, padding=2, bias=True),
30 | nn.ReLU(inplace=True),
31 | nn.MaxPool2d(kernel_size=2, stride=2),
32 |
33 | # Dilated convolutions.
34 | nn.Conv2d(filter_sizes[1], filter_sizes[2], kernel_size=l3_k_size, dilation=dilations[0], stride=1, padding=(l3_k_size//2 * dilations[0]), bias=True),
35 | nn.ReLU(inplace=True),
36 | nn.Conv2d(filter_sizes[2], filter_sizes[2], kernel_size=l3_k_size, dilation=dilations[1], stride=1, padding=(l3_k_size//2 * dilations[1]), bias=True),
37 | nn.ReLU(inplace=True),
38 |
39 | # Output layers
40 | nn.UpsamplingBilinear2d(scale_factor=2),
41 | nn.Conv2d(filter_sizes[2], filter_sizes[3], 3, padding=1),
42 | nn.ReLU(inplace=True),
43 | nn.UpsamplingBilinear2d(scale_factor=2),
44 | nn.Conv2d(filter_sizes[3], filter_sizes[3], 3, padding=1),
45 | nn.ReLU(inplace=True),
46 | )
47 |
48 | self.pos_output = nn.Conv2d(filter_sizes[3], 1, kernel_size=1)
49 | self.cos_output = nn.Conv2d(filter_sizes[3], 1, kernel_size=1)
50 | self.sin_output = nn.Conv2d(filter_sizes[3], 1, kernel_size=1)
51 | self.width_output = nn.Conv2d(filter_sizes[3], 1, kernel_size=1)
52 |
53 | for m in self.modules():
54 | if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
55 | nn.init.xavier_uniform_(m.weight, gain=1)
56 |
57 | def forward(self, x):
58 | x = self.features(x)
59 |
60 | pos_output = self.pos_output(x)
61 | cos_output = self.cos_output(x)
62 | sin_output = self.sin_output(x)
63 | width_output = self.width_output(x)
64 |
65 | return pos_output, cos_output, sin_output, width_output
66 |
67 | def compute_loss(self, xc, yc):
68 | y_pos, y_cos, y_sin, y_width = yc
69 | pos_pred, cos_pred, sin_pred, width_pred = self(xc)
70 |
71 | p_loss = F.mse_loss(pos_pred, y_pos)
72 | cos_loss = F.mse_loss(cos_pred, y_cos)
73 | sin_loss = F.mse_loss(sin_pred, y_sin)
74 | width_loss = F.mse_loss(width_pred, y_width)
75 |
76 | return {
77 | 'loss': p_loss + cos_loss + sin_loss + width_loss,
78 | 'losses': {
79 | 'p_loss': p_loss,
80 | 'cos_loss': cos_loss,
81 | 'sin_loss': sin_loss,
82 | 'width_loss': width_loss
83 | },
84 | 'pred': {
85 | 'pos': pos_pred,
86 | 'cos': cos_pred,
87 | 'sin': sin_pred,
88 | 'width': width_pred
89 | }
90 | }
91 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy
2 | opencv-python
3 | matplotlib
4 | scikit-image
5 | imageio
6 | torch
7 | torchvision
8 | torchsummary
9 | tensorboardX
10 |
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/samples/0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/samples/0.png
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/samples/0.tiff:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/samples/0.tiff
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/samples/1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/samples/1.png
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/samples/1.tiff:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/samples/1.tiff
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/samples/2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/samples/2.png
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/samples/2.tiff:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/samples/2.tiff
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/samples/3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/samples/3.png
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/samples/3.tiff:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/samples/3.tiff
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/samples/4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/samples/4.png
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/samples/4.tiff:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/samples/4.tiff
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/train_ggcnn.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | import os
3 | import sys
4 | import argparse
5 | import logging
6 |
7 | import cv2
8 |
9 | import torch
10 | import torch.utils.data
11 | import torch.optim as optim
12 |
13 | from torchsummary import summary
14 |
15 | import tensorboardX
16 |
17 | from utils.visualisation.gridshow import gridshow
18 |
19 | from utils.dataset_processing import evaluation
20 | from utils.data import get_dataset
21 | from models import get_network
22 | from models.common import post_process_output
23 |
24 | logging.basicConfig(level=logging.INFO)
25 |
26 | def parse_args():
27 | parser = argparse.ArgumentParser(description='Train GG-CNN')
28 |
29 | # Network
30 | parser.add_argument('--network', type=str, default='ggcnn', help='Network Name in .models')
31 |
32 | # Dataset & Data & Training
33 |     parser.add_argument('--dataset', type=str, help='Dataset Name ("cornell" or "jacquard")')
34 | parser.add_argument('--dataset-path', type=str, help='Path to dataset')
35 | parser.add_argument('--use-depth', type=int, default=1, help='Use Depth image for training (1/0)')
36 | parser.add_argument('--use-rgb', type=int, default=0, help='Use RGB image for training (0/1)')
37 | parser.add_argument('--split', type=float, default=0.9, help='Fraction of data for training (remainder is validation)')
38 | parser.add_argument('--ds-rotate', type=float, default=0.0,
39 | help='Shift the start point of the dataset to use a different test/train split for cross validation.')
40 | parser.add_argument('--num-workers', type=int, default=8, help='Dataset workers')
41 |
42 | parser.add_argument('--batch-size', type=int, default=8, help='Batch size')
43 | parser.add_argument('--epochs', type=int, default=50, help='Training epochs')
44 | parser.add_argument('--batches-per-epoch', type=int, default=1000, help='Batches per Epoch')
45 | parser.add_argument('--val-batches', type=int, default=250, help='Validation Batches')
46 |
47 | # Logging etc.
48 | parser.add_argument('--description', type=str, default='', help='Training description')
49 | parser.add_argument('--outdir', type=str, default='output/models/', help='Training Output Directory')
50 | parser.add_argument('--logdir', type=str, default='tensorboard/', help='Log directory')
51 | parser.add_argument('--vis', action='store_true', help='Visualise the training process')
52 |
53 | args = parser.parse_args()
54 | return args
55 |
56 |
57 | def validate(net, device, val_data, batches_per_epoch):
58 | """
59 | Run validation.
60 | :param net: Network
61 | :param device: Torch device
62 | :param val_data: Validation Dataset
63 | :param batches_per_epoch: Number of batches to run
64 | :return: Successes, Failures and Losses
65 | """
66 | net.eval()
67 |
68 | results = {
69 | 'correct': 0,
70 | 'failed': 0,
71 | 'loss': 0,
72 | 'losses': {
73 |
74 | }
75 | }
76 |
77 | ld = len(val_data)
78 |
79 | with torch.no_grad():
80 | batch_idx = 0
81 | while batch_idx < batches_per_epoch:
82 | for x, y, didx, rot, zoom_factor in val_data:
83 | batch_idx += 1
84 | if batches_per_epoch is not None and batch_idx >= batches_per_epoch:
85 | break
86 |
87 | xc = x.to(device)
88 | yc = [yy.to(device) for yy in y]
89 | lossd = net.compute_loss(xc, yc)
90 |
91 | loss = lossd['loss']
92 |
93 | results['loss'] += loss.item()/ld
94 | for ln, l in lossd['losses'].items():
95 | if ln not in results['losses']:
96 | results['losses'][ln] = 0
97 | results['losses'][ln] += l.item()/ld
98 |
99 | q_out, ang_out, w_out = post_process_output(lossd['pred']['pos'], lossd['pred']['cos'],
100 | lossd['pred']['sin'], lossd['pred']['width'])
101 |
102 | s = evaluation.calculate_iou_match(q_out, ang_out,
103 | val_data.dataset.get_gtbb(didx, rot, zoom_factor),
104 | no_grasps=1,
105 | grasp_width=w_out,
106 | )
107 |
108 | if s:
109 | results['correct'] += 1
110 | else:
111 | results['failed'] += 1
112 |
113 | return results
114 |
115 |
116 | def train(epoch, net, device, train_data, optimizer, batches_per_epoch, vis=False):
117 | """
118 | Run one training epoch
119 | :param epoch: Current epoch
120 | :param net: Network
121 | :param device: Torch device
122 | :param train_data: Training Dataset
123 | :param optimizer: Optimizer
124 | :param batches_per_epoch: Data batches to train on
125 | :param vis: Visualise training progress
126 | :return: Average Losses for Epoch
127 | """
128 | results = {
129 | 'loss': 0,
130 | 'losses': {
131 | }
132 | }
133 |
134 | net.train()
135 |
136 | batch_idx = 0
137 | # Use batches per epoch to make training on different sized datasets (cornell/jacquard) more equivalent.
138 | while batch_idx < batches_per_epoch:
139 | for x, y, _, _, _ in train_data:
140 | batch_idx += 1
141 | if batch_idx >= batches_per_epoch:
142 | break
143 |
144 | xc = x.to(device)
145 | yc = [yy.to(device) for yy in y]
146 | lossd = net.compute_loss(xc, yc)
147 |
148 | loss = lossd['loss']
149 |
150 | if batch_idx % 100 == 0:
151 | logging.info('Epoch: {}, Batch: {}, Loss: {:0.4f}'.format(epoch, batch_idx, loss.item()))
152 |
153 | results['loss'] += loss.item()
154 | for ln, l in lossd['losses'].items():
155 | if ln not in results['losses']:
156 | results['losses'][ln] = 0
157 | results['losses'][ln] += l.item()
158 |
159 | optimizer.zero_grad()
160 | loss.backward()
161 | optimizer.step()
162 |
163 | # Display the images
164 | if vis:
165 | imgs = []
166 | n_img = min(4, x.shape[0])
167 | for idx in range(n_img):
168 | imgs.extend([x[idx,].numpy().squeeze()] + [yi[idx,].numpy().squeeze() for yi in y] + [
169 | x[idx,].numpy().squeeze()] + [pc[idx,].detach().cpu().numpy().squeeze() for pc in lossd['pred'].values()])
170 | gridshow('Display', imgs,
171 | [(xc.min().item(), xc.max().item()), (0.0, 1.0), (0.0, 1.0), (-1.0, 1.0), (0.0, 1.0)] * 2 * n_img,
172 | [cv2.COLORMAP_BONE] * 10 * n_img, 10)
173 | cv2.waitKey(2)
174 |
175 | results['loss'] /= batch_idx
176 | for l in results['losses']:
177 | results['losses'][l] /= batch_idx
178 |
179 | return results
180 |
181 |
182 | def run():
183 | args = parse_args()
184 |
185 | # Vis window
186 | if args.vis:
187 | cv2.namedWindow('Display', cv2.WINDOW_NORMAL)
188 |
189 | # Set-up output directories
190 | dt = datetime.datetime.now().strftime('%y%m%d_%H%M')
191 | net_desc = '{}_{}'.format(dt, '_'.join(args.description.split()))
192 |
193 | save_folder = os.path.join(args.outdir, net_desc)
194 | if not os.path.exists(save_folder):
195 | os.makedirs(save_folder)
196 | tb = tensorboardX.SummaryWriter(os.path.join(args.logdir, net_desc))
197 |
198 | # Load Dataset
199 | logging.info('Loading {} Dataset...'.format(args.dataset.title()))
200 | Dataset = get_dataset(args.dataset)
201 |
202 | train_dataset = Dataset(args.dataset_path, start=0.0, end=args.split, ds_rotate=args.ds_rotate,
203 | random_rotate=True, random_zoom=True,
204 | include_depth=args.use_depth, include_rgb=args.use_rgb)
205 | train_data = torch.utils.data.DataLoader(
206 | train_dataset,
207 | batch_size=args.batch_size,
208 | shuffle=True,
209 | num_workers=args.num_workers
210 | )
211 | val_dataset = Dataset(args.dataset_path, start=args.split, end=1.0, ds_rotate=args.ds_rotate,
212 | random_rotate=True, random_zoom=True,
213 | include_depth=args.use_depth, include_rgb=args.use_rgb)
214 | val_data = torch.utils.data.DataLoader(
215 | val_dataset,
216 | batch_size=1,
217 | shuffle=False,
218 | num_workers=args.num_workers
219 | )
220 | logging.info('Done')
221 |
222 | # Load the network
223 | logging.info('Loading Network...')
224 | input_channels = 1*args.use_depth + 3*args.use_rgb
225 | ggcnn = get_network(args.network)
226 |
227 | net = ggcnn(input_channels=input_channels)
228 | device = torch.device("cuda:0")
229 | net = net.to(device)
230 | optimizer = optim.Adam(net.parameters())
231 | logging.info('Done')
232 |
233 | # Print model architecture.
234 | summary(net, (input_channels, 300, 300))
235 | f = open(os.path.join(save_folder, 'arch.txt'), 'w')
236 | sys.stdout = f
237 | summary(net, (input_channels, 300, 300))
238 | sys.stdout = sys.__stdout__
239 | f.close()
240 |
241 | best_iou = 0.0
242 | for epoch in range(args.epochs):
243 | logging.info('Beginning Epoch {:02d}'.format(epoch))
244 | train_results = train(epoch, net, device, train_data, optimizer, args.batches_per_epoch, vis=args.vis)
245 |
246 | # Log training losses to tensorboard
247 | tb.add_scalar('loss/train_loss', train_results['loss'], epoch)
248 | for n, l in train_results['losses'].items():
249 | tb.add_scalar('train_loss/' + n, l, epoch)
250 |
251 | # Run Validation
252 | logging.info('Validating...')
253 | test_results = validate(net, device, val_data, args.val_batches)
254 | logging.info('%d/%d = %f' % (test_results['correct'], test_results['correct'] + test_results['failed'],
255 | test_results['correct']/(test_results['correct']+test_results['failed'])))
256 |
257 |         # Log validation results to tensorboard
258 | tb.add_scalar('loss/IOU', test_results['correct'] / (test_results['correct'] + test_results['failed']), epoch)
259 | tb.add_scalar('loss/val_loss', test_results['loss'], epoch)
260 | for n, l in test_results['losses'].items():
261 | tb.add_scalar('val_loss/' + n, l, epoch)
262 |
263 | # Save best performing network
264 | iou = test_results['correct'] / (test_results['correct'] + test_results['failed'])
265 | if iou > best_iou or epoch == 0 or (epoch % 10) == 0:
266 | torch.save(net, os.path.join(save_folder, 'epoch_%02d_iou_%0.2f' % (epoch, iou)))
267 | torch.save(net.state_dict(), os.path.join(save_folder, 'epoch_%02d_iou_%0.2f_statedict.pt' % (epoch, iou)))
268 | best_iou = iou
269 |
270 |
271 | if __name__ == '__main__':
272 | run()
273 |
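
For quick experimentation, the script can also be driven from Python instead of the shell. The sketch below is not part of the repo: it fills `sys.argv` with the flags defined in `parse_args()` and calls `run()`; the dataset path is a placeholder and a CUDA device is required because `run()` hard-codes `cuda:0`.

```python
# Minimal sketch (not part of the repo): launch Cornell training programmatically.
import sys
import train_ggcnn

sys.argv = [
    'train_ggcnn.py',
    '--network', 'ggcnn',
    '--dataset', 'cornell',
    '--dataset-path', '/path/to/cornell',   # placeholder: extracted Cornell Grasping Dataset
    '--epochs', '50',
    '--batch-size', '8',
]
train_ggcnn.run()
```
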
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/__init__.py
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/__pycache__/__init__.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/__pycache__/__init__.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/data/__init__.py:
--------------------------------------------------------------------------------
1 | def get_dataset(dataset_name):
2 | if dataset_name == 'cornell':
3 | from .cornell_data import CornellDataset
4 | return CornellDataset
5 | elif dataset_name == 'jacquard':
6 | from .jacquard_data import JacquardDataset
7 | return JacquardDataset
8 | else:
9 | raise NotImplementedError('Dataset Type {} is Not implemented'.format(dataset_name))
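
The factory returns the wrapper class itself (not an instance), so callers such as `train_ggcnn.py` never import the dataset modules directly. A minimal sketch:

```python
# get_dataset() maps a name to a dataset class; unknown names raise NotImplementedError.
from utils.data import get_dataset

CornellDataset = get_dataset('cornell')   # -> utils.data.cornell_data.CornellDataset
```
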
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/data/__pycache__/__init__.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/data/__pycache__/__init__.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/data/__pycache__/grasp_data.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/data/__pycache__/grasp_data.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/data/__pycache__/jacquard_data.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/data/__pycache__/jacquard_data.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/data/cornell_data.py:
--------------------------------------------------------------------------------
1 | import os
2 | import glob
3 |
4 | from .grasp_data import GraspDatasetBase
5 | from utils.dataset_processing import grasp, image
6 |
7 |
8 | class CornellDataset(GraspDatasetBase):
9 | """
10 | Dataset wrapper for the Cornell dataset.
11 | """
12 | def __init__(self, file_path, start=0.0, end=1.0, ds_rotate=0, **kwargs):
13 | """
14 | :param file_path: Cornell Dataset directory.
15 | :param start: If splitting the dataset, start at this fraction [0,1]
16 | :param end: If splitting the dataset, finish at this fraction
17 | :param ds_rotate: If splitting the dataset, rotate the list of items by this fraction first
18 | :param kwargs: kwargs for GraspDatasetBase
19 | """
20 | super(CornellDataset, self).__init__(**kwargs)
21 |
22 | graspf = glob.glob(os.path.join(file_path, '*', 'pcd*cpos.txt'))
23 | graspf.sort()
24 | l = len(graspf)
25 | if l == 0:
26 | raise FileNotFoundError('No dataset files found. Check path: {}'.format(file_path))
27 |
28 | if ds_rotate:
29 | graspf = graspf[int(l*ds_rotate):] + graspf[:int(l*ds_rotate)]
30 |
31 | depthf = [f.replace('cpos.txt', 'd.tiff') for f in graspf]
32 | rgbf = [f.replace('d.tiff', 'r.png') for f in depthf]
33 |
34 | self.grasp_files = graspf[int(l*start):int(l*end)]
35 | self.depth_files = depthf[int(l*start):int(l*end)]
36 | self.rgb_files = rgbf[int(l*start):int(l*end)]
37 |
38 | def _get_crop_attrs(self, idx):
39 | gtbbs = grasp.GraspRectangles.load_from_cornell_file(self.grasp_files[idx])
40 | center = gtbbs.center
41 | left = max(0, min(center[1] - self.output_size // 2, 640 - self.output_size))
42 | top = max(0, min(center[0] - self.output_size // 2, 480 - self.output_size))
43 | return center, left, top
44 |
45 | def get_gtbb(self, idx, rot=0, zoom=1.0):
46 | gtbbs = grasp.GraspRectangles.load_from_cornell_file(self.grasp_files[idx])
47 | center, left, top = self._get_crop_attrs(idx)
48 | gtbbs.rotate(rot, center)
49 | gtbbs.offset((-top, -left))
50 | gtbbs.zoom(zoom, (self.output_size//2, self.output_size//2))
51 | return gtbbs
52 |
53 | def get_depth(self, idx, rot=0, zoom=1.0):
54 | depth_img = image.DepthImage.from_tiff(self.depth_files[idx])
55 | center, left, top = self._get_crop_attrs(idx)
56 | depth_img.rotate(rot, center)
57 | depth_img.crop((top, left), (min(480, top + self.output_size), min(640, left + self.output_size)))
58 | depth_img.normalise()
59 | depth_img.zoom(zoom)
60 | depth_img.resize((self.output_size, self.output_size))
61 | return depth_img.img
62 |
63 | def get_rgb(self, idx, rot=0, zoom=1.0, normalise=True):
64 | rgb_img = image.Image.from_file(self.rgb_files[idx])
65 | center, left, top = self._get_crop_attrs(idx)
66 | rgb_img.rotate(rot, center)
67 | rgb_img.crop((top, left), (min(480, top + self.output_size), min(640, left + self.output_size)))
68 | rgb_img.zoom(zoom)
69 | rgb_img.resize((self.output_size, self.output_size))
70 | if normalise:
71 | rgb_img.normalise()
72 | rgb_img.img = rgb_img.img.transpose((2, 0, 1))
73 | return rgb_img.img
74 |
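
As a usage sketch (the dataset path below is a placeholder), each item produced by this wrapper follows the format defined in `GraspDatasetBase.__getitem__`: the network input plus the position, cos, sin and width training maps.

```python
# Minimal sketch: load one batch of depth-only Cornell samples, as train_ggcnn.py does.
import torch.utils.data
from utils.data.cornell_data import CornellDataset

dataset = CornellDataset('/path/to/cornell', start=0.0, end=0.9,       # placeholder path
                         random_rotate=True, random_zoom=True,
                         include_depth=True, include_rgb=False)
loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)

x, (pos, cos, sin, width), idx, rot, zoom = next(iter(loader))
print(x.shape)     # torch.Size([8, 1, 300, 300]) -- single depth channel
print(pos.shape)   # torch.Size([8, 1, 300, 300]) -- grasp quality map
```
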
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/data/grasp_data.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | import torch
4 | import torch.utils.data
5 |
6 | import random
7 |
8 |
9 | class GraspDatasetBase(torch.utils.data.Dataset):
10 | """
11 | An abstract dataset for training GG-CNNs in a common format.
12 | """
13 | def __init__(self, output_size=300, include_depth=True, include_rgb=False, random_rotate=False,
14 | random_zoom=False, input_only=False):
15 | """
16 | :param output_size: Image output size in pixels (square)
17 | :param include_depth: Whether depth image is included
18 | :param include_rgb: Whether RGB image is included
19 | :param random_rotate: Whether random rotations are applied
20 | :param random_zoom: Whether random zooms are applied
21 | :param input_only: Whether to return only the network input (no labels)
22 | """
23 | self.output_size = output_size
24 | self.random_rotate = random_rotate
25 | self.random_zoom = random_zoom
26 | self.input_only = input_only
27 | self.include_depth = include_depth
28 | self.include_rgb = include_rgb
29 |
30 | self.grasp_files = []
31 |
32 | if include_depth is False and include_rgb is False:
33 | raise ValueError('At least one of Depth or RGB must be specified.')
34 |
35 | @staticmethod
36 | def numpy_to_torch(s):
37 | if len(s.shape) == 2:
38 | return torch.from_numpy(np.expand_dims(s, 0).astype(np.float32))
39 | else:
40 | return torch.from_numpy(s.astype(np.float32))
41 |
42 | def get_gtbb(self, idx, rot=0, zoom=1.0):
43 | raise NotImplementedError()
44 |
45 | def get_depth(self, idx, rot=0, zoom=1.0):
46 | raise NotImplementedError()
47 |
48 | def get_rgb(self, idx, rot=0, zoom=1.0):
49 | raise NotImplementedError()
50 |
51 | def __getitem__(self, idx):
52 | if self.random_rotate:
53 | rotations = [0, np.pi/2, 2*np.pi/2, 3*np.pi/2]
54 | rot = random.choice(rotations)
55 | else:
56 | rot = 0.0
57 |
58 | if self.random_zoom:
59 | zoom_factor = np.random.uniform(0.5, 1.0)
60 | else:
61 | zoom_factor = 1.0
62 |
63 | # Load the depth image
64 | if self.include_depth:
65 | depth_img = self.get_depth(idx, rot, zoom_factor)
66 |
67 | # Load the RGB image
68 | if self.include_rgb:
69 | rgb_img = self.get_rgb(idx, rot, zoom_factor)
70 |
71 | # Load the grasps
72 | bbs = self.get_gtbb(idx, rot, zoom_factor)
73 |
74 | pos_img, ang_img, width_img = bbs.draw((self.output_size, self.output_size))
75 | width_img = np.clip(width_img, 0.0, 150.0)/150.0
76 |
77 | if self.include_depth and self.include_rgb:
78 | x = self.numpy_to_torch(
79 | np.concatenate(
80 | (np.expand_dims(depth_img, 0),
81 | rgb_img),
82 | 0
83 | )
84 | )
85 | elif self.include_depth:
86 | x = self.numpy_to_torch(depth_img)
87 | elif self.include_rgb:
88 | x = self.numpy_to_torch(rgb_img)
89 |
90 | pos = self.numpy_to_torch(pos_img)
91 | cos = self.numpy_to_torch(np.cos(2*ang_img))
92 | sin = self.numpy_to_torch(np.sin(2*ang_img))
93 | width = self.numpy_to_torch(width_img)
94 |
95 | return x, (pos, cos, sin, width), idx, rot, zoom_factor
96 |
97 | def __len__(self):
98 | return len(self.grasp_files)
99 |
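
Note the angle encoding: instead of regressing the grasp angle directly, the labels store cos 2θ and sin 2θ, which are continuous across the ±π/2 wrap-around of an antipodal grasp. The angle is recovered as θ = ½·arctan2(sin 2θ, cos 2θ); a minimal round-trip sketch (pure math, independent of the repo):

```python
# Minimal sketch: encode an angle the way GraspDatasetBase does, then decode it.
import numpy as np

def encode(theta):
    return np.cos(2 * theta), np.sin(2 * theta)

def decode(cos2, sin2):
    return 0.5 * np.arctan2(sin2, cos2)   # result lies in (-pi/2, pi/2]

print(decode(*encode(1.2)))   # 1.2
print(decode(*encode(2.0)))   # -1.1416..., i.e. 2.0 - pi: the same grasp, antipodal direction
```
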
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/data/jacquard_data.py:
--------------------------------------------------------------------------------
1 | import os
2 | import glob
3 |
4 | from .grasp_data import GraspDatasetBase
5 | from utils.dataset_processing import grasp, image
6 |
7 |
8 | class JacquardDataset(GraspDatasetBase):
9 | """
10 | Dataset wrapper for the Jacquard dataset.
11 | """
12 | def __init__(self, file_path, start=0.0, end=1.0, ds_rotate=0, **kwargs):
13 | """
14 | :param file_path: Jacquard Dataset directory.
15 | :param start: If splitting the dataset, start at this fraction [0,1]
16 | :param end: If splitting the dataset, finish at this fraction
17 | :param ds_rotate: If splitting the dataset, rotate the list of items by this fraction first
18 | :param kwargs: kwargs for GraspDatasetBase
19 | """
20 | super(JacquardDataset, self).__init__(**kwargs)
21 |
22 | graspf = glob.glob(os.path.join(file_path, '*', '*_grasps.txt'))
23 | graspf.sort()
24 | l = len(graspf)
25 |
26 | if l == 0:
27 | raise FileNotFoundError('No dataset files found. Check path: {}'.format(file_path))
28 |
29 | if ds_rotate:
30 | graspf = graspf[int(l*ds_rotate):] + graspf[:int(l*ds_rotate)]
31 |
32 | depthf = [f.replace('grasps.txt', 'perfect_depth.tiff') for f in graspf]
33 | rgbf = [f.replace('perfect_depth.tiff', 'RGB.png') for f in depthf]
34 |
35 | self.grasp_files = graspf[int(l*start):int(l*end)]
36 | self.depth_files = depthf[int(l*start):int(l*end)]
37 | self.rgb_files = rgbf[int(l*start):int(l*end)]
38 |
39 | def get_gtbb(self, idx, rot=0, zoom=1.0):
40 | gtbbs = grasp.GraspRectangles.load_from_jacquard_file(self.grasp_files[idx], scale=self.output_size / 1024.0)
41 | c = self.output_size//2
42 | gtbbs.rotate(rot, (c, c))
43 | gtbbs.zoom(zoom, (c, c))
44 | return gtbbs
45 |
46 | def get_depth(self, idx, rot=0, zoom=1.0):
47 | depth_img = image.DepthImage.from_tiff(self.depth_files[idx])
48 | depth_img.rotate(rot)
49 | depth_img.normalise()
50 | depth_img.zoom(zoom)
51 | depth_img.resize((self.output_size, self.output_size))
52 | return depth_img.img
53 |
54 | def get_rgb(self, idx, rot=0, zoom=1.0, normalise=True):
55 | rgb_img = image.Image.from_file(self.rgb_files[idx])
56 | rgb_img.rotate(rot)
57 | rgb_img.zoom(zoom)
58 | rgb_img.resize((self.output_size, self.output_size))
59 | if normalise:
60 | rgb_img.normalise()
61 | rgb_img.img = rgb_img.img.transpose((2, 0, 1))
62 | return rgb_img.img
63 |
64 | def get_jname(self, idx):
65 | return '_'.join(self.grasp_files[idx].split(os.sep)[-1].split('_')[:-1])
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/dataset_processing/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/dataset_processing/__init__.py
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/dataset_processing/__pycache__/__init__.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/dataset_processing/__pycache__/__init__.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/dataset_processing/__pycache__/evaluation.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/dataset_processing/__pycache__/evaluation.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/dataset_processing/__pycache__/grasp.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/dataset_processing/__pycache__/grasp.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/dataset_processing/__pycache__/image.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/dataset_processing/__pycache__/image.cpython-37.pyc
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/dataset_processing/evaluation.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 | import cv2
4 | import json
5 | import os
6 |
7 | from .grasp import GraspRectangles, detect_grasps
8 |
9 |
10 | def plot_output(rgb_img, depth_img, grasp_q_img, grasp_angle_img, no_grasps=1, grasp_width_img=None, raw_size=(640, 480), grasp_mask=None, rel_file=None):
11 | """
12 | Plot the output of a GG-CNN
13 | :param rgb_img: RGB Image
14 | :param depth_img: Depth Image
15 | :param grasp_q_img: Q output of GG-CNN
16 | :param grasp_angle_img: Angle output of GG-CNN
17 | :param no_grasps: Maximum number of grasps to plot
18 | :param grasp_width_img: (optional) Width output of GG-CNN
19 | :return:
20 | """
21 | # resize
22 | INTER_TYPE = cv2.INTER_CUBIC
23 | width_factor = (raw_size[0]/grasp_q_img.shape[0] + raw_size[1]/grasp_q_img.shape[1]) / 2
24 | grasp_q_img = cv2.resize(grasp_q_img, dsize=raw_size, interpolation=INTER_TYPE)
25 | grasp_angle_img = cv2.resize(grasp_angle_img, dsize=raw_size, interpolation=INTER_TYPE)
26 | grasp_width_img = cv2.resize(grasp_width_img, dsize=raw_size, interpolation=INTER_TYPE) * width_factor
27 | rgb_img = cv2.resize(rgb_img, dsize=raw_size, interpolation=INTER_TYPE)
28 | depth_img = cv2.resize(depth_img, dsize=raw_size, interpolation=INTER_TYPE)
29 |
30 | # TODO: save grasps
31 | if grasp_mask is not None:
32 | mask = np.load(grasp_mask)
33 | grasp_q_img_mask = grasp_q_img * mask
34 | gs = detect_grasps(grasp_q_img_mask, grasp_angle_img, width_img=grasp_width_img, no_grasps=no_grasps)
35 | else:
36 | gs = detect_grasps(grasp_q_img, grasp_angle_img, width_img=grasp_width_img, no_grasps=no_grasps)
37 |
38 | id_rel = {1: 'scoop', 2: 'pour', 3: 'cut', 4: 'contain', 5: 'dump', 6: 'wipe'}
39 |
40 | if rel_file is not None:
41 | f = open(rel_file, 'r')
42 | c = f.read()
43 | rel = json.loads(c)
44 |         rel_id, sub_point, obj_point = rel['rel_id'], rel['sub_point'], rel['obj_point']
45 |         distance = (np.array(obj_point) - np.array(sub_point))/3
46 |         start_point = distance + np.array(sub_point)
47 | f.close()
48 |
49 | fig = plt.figure(figsize=(10, 10))
50 | ax = fig.add_subplot(2, 2, 1)
51 | ax.imshow(rgb_img)
52 | for g in gs:
53 | g.plot(ax)
54 | # print("center: " + str(g.center) + ", width: %f, length: %f, angle: %f" %(g.width, g.length, g.angle))
55 |
56 |
57 | if rel_file is not None:
58 | ax.set_title('RGB, Action: ' + id_rel[rel_id])
59 | plt.arrow(start_point[0], start_point[1], distance[0], distance[1], head_width=10, color='g')
60 | plt.plot(obj_point[0], obj_point[1], 'o', color='b')
61 | else:
62 | ax.set_title('RGB')
63 | ax.axis('off')
64 |
65 | ax = fig.add_subplot(2, 2, 2)
66 | ax.imshow(depth_img, cmap='gray')
67 | for g in gs:
68 | g.plot(ax)
69 | if rel_file is not None:
70 | ax.set_title('Depth, Action: ' + id_rel[rel_id])
71 | plt.arrow(start_point[0], start_point[1], distance[0], distance[1], head_width=10, color='g')
72 | plt.plot(obj_point[0], obj_point[1], 'o', color='b')
73 | else:
74 | ax.set_title('Depth')
75 | ax.axis('off')
76 |
77 | ax = fig.add_subplot(2, 2, 3)
78 | plot = ax.imshow(grasp_q_img, cmap='jet', vmin=0, vmax=1)
79 | ax.set_title('Q')
80 | ax.axis('off')
81 | plt.colorbar(plot)
82 |
83 | ax = fig.add_subplot(2, 2, 4)
84 | plot = ax.imshow(grasp_angle_img, cmap='hsv', vmin=-np.pi / 2, vmax=np.pi / 2)
85 | ax.set_title('Angle')
86 | ax.axis('off')
87 | plt.colorbar(plot)
88 | plt.show()
89 |
90 | return gs
91 |
92 | def compute_grasps(rgb_img, depth_img, grasp_q_img, grasp_angle_img, no_grasps=1, grasp_width_img=None, raw_size=(640, 480), grasp_mask=None, rel_file=None):
93 | """
94 |     Compute grasp candidates from the output of a GG-CNN (no plotting)
95 | :param rgb_img: RGB Image
96 | :param depth_img: Depth Image
97 | :param grasp_q_img: Q output of GG-CNN
98 | :param grasp_angle_img: Angle output of GG-CNN
99 | :param no_grasps: Maximum number of grasps to plot
100 | :param grasp_width_img: (optional) Width output of GG-CNN
101 | :return:
102 | """
103 | # # resize
104 | INTER_TYPE = cv2.INTER_CUBIC
105 | width_factor = (raw_size[0]/grasp_q_img.shape[0] + raw_size[1]/grasp_q_img.shape[1]) / 2
106 | grasp_q_img = cv2.resize(grasp_q_img, dsize=raw_size, interpolation=INTER_TYPE)
107 | grasp_angle_img = cv2.resize(grasp_angle_img, dsize=raw_size, interpolation=INTER_TYPE)
108 | grasp_width_img = cv2.resize(grasp_width_img, dsize=raw_size, interpolation=INTER_TYPE) * width_factor
109 | rgb_img = cv2.resize(rgb_img, dsize=raw_size, interpolation=INTER_TYPE)
110 | depth_img = cv2.resize(depth_img, dsize=raw_size, interpolation=INTER_TYPE)
111 |
112 | # TODO: save grasps
113 | if grasp_mask is not None:
114 | mask = np.load(grasp_mask)
115 | grasp_q_img_mask = grasp_q_img * mask
116 | gs = detect_grasps(grasp_q_img_mask, grasp_angle_img, width_img=grasp_width_img, no_grasps=no_grasps)
117 | else:
118 | gs = detect_grasps(grasp_q_img, grasp_angle_img, width_img=grasp_width_img, no_grasps=no_grasps)
119 |
120 | return gs
121 |
122 |
123 |
124 | def calculate_iou_match(grasp_q, grasp_angle, ground_truth_bbs, no_grasps=1, grasp_width=None):
125 | """
126 | Calculate grasp success using the IoU (Jacquard) metric (e.g. in https://arxiv.org/abs/1301.3592)
127 |     A success is counted if the grasp rectangle has at least 25% IoU with a ground truth rectangle and is within 30 degrees of its angle.
128 | :param grasp_q: Q outputs of GG-CNN (Nx300x300x3)
129 | :param grasp_angle: Angle outputs of GG-CNN
130 | :param ground_truth_bbs: Corresponding ground-truth BoundingBoxes
131 | :param no_grasps: Maximum number of grasps to consider per image.
132 | :param grasp_width: (optional) Width output from GG-CNN
133 | :return: success
134 | """
135 |
136 | if not isinstance(ground_truth_bbs, GraspRectangles):
137 | gt_bbs = GraspRectangles.load_from_array(ground_truth_bbs)
138 | else:
139 | gt_bbs = ground_truth_bbs
140 | gs = detect_grasps(grasp_q, grasp_angle, width_img=grasp_width, no_grasps=no_grasps)
141 |     for g in gs:
142 |         if g.max_iou(gt_bbs) > 0.25:
143 |             return True
144 |     return False
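
A minimal usage sketch of the IoU metric above, with random stand-ins for the 300x300 network outputs (the annotation path is a placeholder):

```python
# Minimal sketch: score dummy network outputs against Cornell ground-truth rectangles.
import numpy as np
from utils.dataset_processing.evaluation import calculate_iou_match
from utils.dataset_processing.grasp import GraspRectangles

gt_bbs = GraspRectangles.load_from_cornell_file('/path/to/cornell/01/pcd0100cpos.txt')  # placeholder

q_img = np.random.rand(300, 300)        # grasp quality map
ang_img = np.zeros((300, 300))          # grasp angle map (radians)
width_img = np.full((300, 300), 60.0)   # grasp width map (pixels)

print(calculate_iou_match(q_img, ang_img, gt_bbs, no_grasps=1, grasp_width=width_img))
```
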
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/dataset_processing/generate_cornell_depth.py:
--------------------------------------------------------------------------------
1 | import glob
2 | import os
3 | import numpy as np
4 | from imageio import imsave
5 | import argparse
6 | from utils.dataset_processing.image import DepthImage
7 |
8 |
9 | if __name__ == '__main__':
10 | parser = argparse.ArgumentParser(description='Generate depth images from Cornell PCD files.')
11 | parser.add_argument('path', type=str, help='Path to Cornell Grasping Dataset')
12 | args = parser.parse_args()
13 |
14 | pcds = glob.glob(os.path.join(args.path, '*', 'pcd*[0-9].txt'))
15 | pcds.sort()
16 |
17 | for pcd in pcds:
18 | di = DepthImage.from_pcd(pcd, (480, 640))
19 | di.inpaint()
20 |
21 | of_name = pcd.replace('.txt', 'd.tiff')
22 | print(of_name)
23 | imsave(of_name, di.img.astype(np.float32))
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/dataset_processing/grasp.py:
--------------------------------------------------------------------------------
1 | import pdb
2 |
3 | import numpy as np
4 |
5 | import matplotlib.pyplot as plt
6 |
7 | from skimage.draw import polygon
8 | from skimage.feature import peak_local_max
9 |
10 |
11 | def _gr_text_to_no(l, offset=(0, 0)):
12 | """
13 | Transform a single point from a Cornell file line to a pair of ints.
14 | :param l: Line from Cornell grasp file (str)
15 | :param offset: Offset to apply to point positions
16 | :return: Point [y, x]
17 | """
18 | x, y = l.split()
19 | return [int(round(float(y))) - offset[0], int(round(float(x))) - offset[1]]
20 |
21 |
22 | class GraspRectangles:
23 | """
24 | Convenience class for loading and operating on sets of Grasp Rectangles.
25 | """
26 | def __init__(self, grs=None):
27 | if grs:
28 | self.grs = grs
29 | else:
30 | self.grs = []
31 |
32 | def __getitem__(self, item):
33 | return self.grs[item]
34 |
35 | def __iter__(self):
36 | return self.grs.__iter__()
37 |
38 | def __getattr__(self, attr):
39 | """
40 | Test if GraspRectangle has the desired attr as a function and call it.
41 | """
42 |         # Delegate the call to every GraspRectangle in the set.
43 | if hasattr(GraspRectangle, attr) and callable(getattr(GraspRectangle, attr)):
44 | return lambda *args, **kwargs: list(map(lambda gr: getattr(gr, attr)(*args, **kwargs), self.grs))
45 | else:
46 |             raise AttributeError("Couldn't find function %s in GraspRectangles or GraspRectangle" % attr)
47 |
48 | @classmethod
49 | def load_from_array(cls, arr):
50 | """
51 | Load grasp rectangles from numpy array.
52 | :param arr: Nx4x2 array, where each 4x2 array is the 4 corner pixels of a grasp rectangle.
53 | :return: GraspRectangles()
54 | """
55 | grs = []
56 | for i in range(arr.shape[0]):
57 | grp = arr[i, :, :].squeeze()
58 | if grp.max() == 0:
59 | break
60 | else:
61 | grs.append(GraspRectangle(grp))
62 | return cls(grs)
63 |
64 | @classmethod
65 | def load_from_cornell_file(cls, fname):
66 | """
67 | Load grasp rectangles from a Cornell dataset grasp file.
68 | :param fname: Path to text file.
69 | :return: GraspRectangles()
70 | """
71 | grs = []
72 | with open(fname) as f:
73 | while True:
74 | # Load 4 lines at a time, corners of bounding box.
75 | p0 = f.readline()
76 | if not p0:
77 | break # EOF
78 | p1, p2, p3 = f.readline(), f.readline(), f.readline()
79 | try:
80 | gr = np.array([
81 | _gr_text_to_no(p0),
82 | _gr_text_to_no(p1),
83 | _gr_text_to_no(p2),
84 | _gr_text_to_no(p3)
85 | ])
86 |
87 | grs.append(GraspRectangle(gr))
88 |
89 | except ValueError:
90 | # Some files contain weird values.
91 | continue
92 | return cls(grs)
93 |
94 | @classmethod
95 | def load_from_jacquard_file(cls, fname, scale=1.0):
96 | """
97 | Load grasp rectangles from a Jacquard dataset file.
98 | :param fname: Path to file.
99 | :param scale: Scale to apply (e.g. if resizing images)
100 | :return: GraspRectangles()
101 | """
102 | grs = []
103 | with open(fname) as f:
104 | for l in f:
105 | x, y, theta, w, h = [float(v) for v in l[:-1].split(';')]
106 | # index based on row, column (y,x), and the Jacquard dataset's angles are flipped around an axis.
107 |                 grs.append(Grasp(np.array([y, x]), -theta/180.0*np.pi, 1.0, w, h).as_gr)  # 1.0 = placeholder quality; keeps w/h as length/width
108 | grs = cls(grs)
109 | grs.scale(scale)
110 | return grs
111 |
112 | def append(self, gr):
113 | """
114 | Add a grasp rectangle to this GraspRectangles object
115 | :param gr: GraspRectangle
116 | """
117 | self.grs.append(gr)
118 |
119 | def copy(self):
120 | """
121 | :return: A deep copy of this object and all of its GraspRectangles.
122 | """
123 | new_grs = GraspRectangles()
124 | for gr in self.grs:
125 | new_grs.append(gr.copy())
126 | return new_grs
127 |
128 | def show(self, ax=None, shape=None):
129 | """
130 | Draw all GraspRectangles on a matplotlib plot.
131 | :param ax: (optional) existing axis
132 | :param shape: (optional) Plot shape if no existing axis
133 | """
134 | if ax is None:
135 | f = plt.figure()
136 | ax = f.add_subplot(1, 1, 1)
137 | ax.imshow(np.zeros(shape))
138 | ax.axis([0, shape[1], shape[0], 0])
139 | self.plot(ax)
140 | plt.show()
141 | else:
142 | self.plot(ax)
143 |
144 | def draw(self, shape, position=True, angle=True, width=True):
145 | """
146 | Plot all GraspRectangles as solid rectangles in a numpy array, e.g. as network training data.
147 | :param shape: output shape
148 | :param position: If True, Q output will be produced
149 | :param angle: If True, Angle output will be produced
150 | :param width: If True, Width output will be produced
151 | :return: Q, Angle, Width outputs (or None)
152 | """
153 | if position:
154 | pos_out = np.zeros(shape)
155 | else:
156 | pos_out = None
157 | if angle:
158 | ang_out = np.zeros(shape)
159 | else:
160 | ang_out = None
161 | if width:
162 | width_out = np.zeros(shape)
163 | else:
164 | width_out = None
165 |
166 | for gr in self.grs:
167 | rr, cc = gr.compact_polygon_coords(shape)
168 | if position:
169 | pos_out[rr, cc] = 1.0
170 | if angle:
171 | ang_out[rr, cc] = gr.angle
172 | if width:
173 | width_out[rr, cc] = gr.length
174 |
175 | return pos_out, ang_out, width_out
176 |
177 | def to_array(self, pad_to=0):
178 | """
179 | Convert all GraspRectangles to a single array.
180 | :param pad_to: Length to 0-pad the array along the first dimension
181 | :return: Nx4x2 numpy array
182 | """
183 | a = np.stack([gr.points for gr in self.grs])
184 | if pad_to:
185 | if pad_to > len(self.grs):
186 | a = np.concatenate((a, np.zeros((pad_to - len(self.grs), 4, 2))))
187 | return a.astype(np.int)
188 |
189 | @property
190 | def center(self):
191 | """
192 | Compute mean center of all GraspRectangles
193 |         :return: Mean centre (y, x) of all GraspRectangles, as integer pixel coordinates
194 | """
195 | points = [gr.points for gr in self.grs]
196 | return np.mean(np.vstack(points), axis=0).astype(np.int)
197 |
198 |
199 | class GraspRectangle:
200 | """
201 | Representation of a grasp in the common "Grasp Rectangle" format.
202 | """
203 | def __init__(self, points):
204 | self.points = points
205 |
206 | def __str__(self):
207 | return str(self.points)
208 |
209 | @property
210 | def angle(self):
211 | """
212 | :return: Angle of the grasp to the horizontal.
213 | """
214 | dx = self.points[1, 1] - self.points[0, 1]
215 | dy = self.points[1, 0] - self.points[0, 0]
216 | return (np.arctan2(-dy, dx) + np.pi/2) % np.pi - np.pi/2
217 |
218 | @property
219 | def as_grasp(self):
220 | """
221 | :return: GraspRectangle converted to a Grasp
222 | """
223 |         return Grasp(self.center, self.angle, 1.0, self.length, self.width)  # 1.0 = placeholder quality for a ground-truth rectangle
224 |
225 | @property
226 | def center(self):
227 | """
228 | :return: Rectangle center point
229 | """
230 | return self.points.mean(axis=0).astype(np.int)
231 |
232 | @property
233 | def length(self):
234 | """
235 | :return: Rectangle length (i.e. along the axis of the grasp)
236 | """
237 | dx = self.points[1, 1] - self.points[0, 1]
238 | dy = self.points[1, 0] - self.points[0, 0]
239 | return np.sqrt(dx ** 2 + dy ** 2)
240 |
241 | @property
242 | def width(self):
243 | """
244 | :return: Rectangle width (i.e. perpendicular to the axis of the grasp)
245 | """
246 | dy = self.points[2, 1] - self.points[1, 1]
247 | dx = self.points[2, 0] - self.points[1, 0]
248 | return np.sqrt(dx ** 2 + dy ** 2)
249 |
250 | def polygon_coords(self, shape=None):
251 | """
252 | :param shape: Output Shape
253 | :return: Indices of pixels within the grasp rectangle polygon.
254 | """
255 | return polygon(self.points[:, 0], self.points[:, 1], shape)
256 |
257 | def compact_polygon_coords(self, shape=None):
258 | """
259 | :param shape: Output shape
260 |         :return: Indices of pixels within the centre third of the grasp rectangle.
261 | """
262 |         return Grasp(self.center, self.angle, 1.0, self.length/3, self.width).as_gr.polygon_coords(shape)  # 1.0 = placeholder quality
263 |
264 | def iou(self, gr, angle_threshold=np.pi/6):
265 | """
266 | Compute IoU with another grasping rectangle
267 | :param gr: GraspingRectangle to compare
268 | :param angle_threshold: Maximum angle difference between GraspRectangles
269 | :return: IoU between Grasp Rectangles
270 | """
271 | if abs((self.angle - gr.angle + np.pi/2) % np.pi - np.pi/2) > angle_threshold:
272 | return 0
273 |
274 | rr1, cc1 = self.polygon_coords()
275 | rr2, cc2 = polygon(gr.points[:, 0], gr.points[:, 1])
276 |
277 | try:
278 | r_max = max(rr1.max(), rr2.max()) + 1
279 | c_max = max(cc1.max(), cc2.max()) + 1
280 | except:
281 | return 0
282 |
283 | canvas = np.zeros((r_max, c_max))
284 | canvas[rr1, cc1] += 1
285 | canvas[rr2, cc2] += 1
286 | union = np.sum(canvas > 0)
287 | if union == 0:
288 | return 0
289 | intersection = np.sum(canvas == 2)
290 | return intersection/union
291 |
292 | def copy(self):
293 | """
294 | :return: Copy of self.
295 | """
296 | return GraspRectangle(self.points.copy())
297 |
298 | def offset(self, offset):
299 | """
300 | Offset grasp rectangle
301 | :param offset: array [y, x] distance to offset
302 | """
303 | self.points += np.array(offset).reshape((1, 2))
304 |
305 | def rotate(self, angle, center):
306 | """
307 | Rotate grasp rectangle
308 | :param angle: Angle to rotate (in radians)
309 | :param center: Point to rotate around (e.g. image center)
310 | """
311 | R = np.array(
312 | [
313 | [np.cos(-angle), np.sin(-angle)],
314 | [-1 * np.sin(-angle), np.cos(-angle)],
315 | ]
316 | )
317 | c = np.array(center).reshape((1, 2))
318 | self.points = ((np.dot(R, (self.points - c).T)).T + c).astype(np.int)
319 |
320 | def scale(self, factor):
321 | """
322 | :param factor: Scale grasp rectangle by factor
323 | """
324 | if factor == 1.0:
325 | return
326 | self.points *= factor
327 |
328 | def plot(self, ax, color=None):
329 | """
330 | Plot grasping rectangle.
331 | :param ax: Existing matplotlib axis
332 | :param color: matplotlib color code (optional)
333 | """
334 | points = np.vstack((self.points, self.points[0]))
335 | ax.plot(points[:, 1], points[:, 0], color=color)
336 |
337 | def zoom(self, factor, center):
338 | """
339 | Zoom grasp rectangle by given factor.
340 | :param factor: Zoom factor
341 |         :param center: Zoom center (focus point, e.g. image center)
342 | """
343 | T = np.array(
344 | [
345 | [1/factor, 0],
346 | [0, 1/factor]
347 | ]
348 | )
349 | c = np.array(center).reshape((1, 2))
350 | self.points = ((np.dot(T, (self.points - c).T)).T + c).astype(np.int)
351 |
352 |
353 | class Grasp:
354 | """
355 |     A Grasp represented by a center pixel, rotation angle, quality score and gripper width (length)
356 | """
357 | def __init__(self, center, angle, quality, length=60, width=30):
358 | self.center = center
359 | self.angle = angle # Positive angle means rotate anti-clockwise from horizontal.
360 | self.length = length
361 | self.width = width
362 | self.quality = quality
363 |
364 | @property
365 | def as_gr(self):
366 | """
367 | Convert to GraspRectangle
368 | :return: GraspRectangle representation of grasp.
369 | """
370 | xo = np.cos(self.angle)
371 | yo = np.sin(self.angle)
372 |
373 | y1 = self.center[0] + self.length / 2 * yo
374 | x1 = self.center[1] - self.length / 2 * xo
375 | y2 = self.center[0] - self.length / 2 * yo
376 | x2 = self.center[1] + self.length / 2 * xo
377 |
378 | return GraspRectangle(np.array(
379 | [
380 | [y1 - self.width/2 * xo, x1 - self.width/2 * yo],
381 | [y2 - self.width/2 * xo, x2 - self.width/2 * yo],
382 | [y2 + self.width/2 * xo, x2 + self.width/2 * yo],
383 | [y1 + self.width/2 * xo, x1 + self.width/2 * yo],
384 | ]
385 | ).astype(np.float))
386 |
387 | def max_iou(self, grs):
388 | """
389 | Return maximum IoU between self and a list of GraspRectangles
390 | :param grs: List of GraspRectangles
391 | :return: Maximum IoU with any of the GraspRectangles
392 | """
393 | self_gr = self.as_gr
394 | max_iou = 0
395 | for gr in grs:
396 | iou = self_gr.iou(gr)
397 | max_iou = max(max_iou, iou)
398 | return max_iou
399 |
400 | def plot(self, ax, color=None):
401 | """
402 | Plot Grasp
403 | :param ax: Existing matplotlib axis
404 | :param color: (optional) color
405 | """
406 | self.as_gr.plot(ax, color)
407 |
408 | def to_jacquard(self, scale=1):
409 | """
410 | Output grasp in "Jacquard Dataset Format" (https://jacquard.liris.cnrs.fr/database.php)
411 | :param scale: (optional) scale to apply to grasp
412 | :return: string in Jacquard format
413 | """
414 | # Output in jacquard format.
415 | return '%0.2f;%0.2f;%0.2f;%0.2f;%0.2f' % (self.center[1]*scale, self.center[0]*scale, -1*self.angle*180/np.pi, self.length*scale, self.width*scale)
416 |
417 |
418 | def detect_grasps(q_img, ang_img, width_img=None, no_grasps=1):
419 | """
420 | Detect grasps in a GG-CNN output.
421 | :param q_img: Q image network output
422 | :param ang_img: Angle image network output
423 | :param width_img: (optional) Width image network output
424 | :param no_grasps: Max number of grasps to return
425 | :return: list of Grasps
426 | """
427 | local_max = peak_local_max(q_img, min_distance=30, threshold_abs=0.2, num_peaks=no_grasps)
428 |
429 | grasps = []
430 | for grasp_point_array in local_max:
431 | grasp_point = tuple(grasp_point_array)
432 |
433 | grasp_angle = ang_img[grasp_point]
434 | grasp_quality = q_img[grasp_point]
435 |
436 | g = Grasp(grasp_point, grasp_angle, grasp_quality)
437 | if width_img is not None:
438 | g.length = width_img[grasp_point]
439 | g.width = g.length/2
440 |
441 | grasps.append(g)
442 |
443 | return grasps
444 |
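
For completeness, a small sketch of `detect_grasps` on synthetic maps, showing how a returned `Grasp` converts back to a rectangle for drawing:

```python
# Minimal sketch: pick grasp candidates from synthetic 300x300 maps.
import numpy as np
from utils.dataset_processing.grasp import detect_grasps

q_img = np.zeros((300, 300))
q_img[150, 150] = 0.9                       # one strong grasp point
ang_img = np.full((300, 300), np.pi / 4)    # 45 degree grasps everywhere
width_img = np.full((300, 300), 40.0)

for g in detect_grasps(q_img, ang_img, width_img=width_img, no_grasps=2):
    print(g.center, g.angle, g.length, g.quality)
    rect = g.as_gr                          # GraspRectangle: 4x2 corner points in (row, col) order
    print(rect.points)
```
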
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/dataset_processing/image.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | import imageio
5 | from imageio import imread
6 | from skimage.transform import rotate, resize
7 |
8 | import warnings
9 | warnings.filterwarnings("ignore", category=UserWarning)
10 |
11 |
12 | class Image:
13 | """
14 | Wrapper around an image with some convenient functions.
15 | """
16 | def __init__(self, img):
17 | self.img = img
18 |
19 | def __getattr__(self, attr):
20 | # Pass along any other methods to the underlying ndarray
21 | return getattr(self.img, attr)
22 |
23 | @classmethod
24 | def from_file(cls, fname):
25 | return cls(imread(fname))
26 |
27 | @classmethod
28 | def from_array(cls, image_array):
29 | return cls(imageio.core.util.Array(image_array))
30 |
31 | def copy(self):
32 | """
33 | :return: Copy of self.
34 | """
35 | return self.__class__(self.img.copy())
36 |
37 | def crop(self, top_left, bottom_right, resize=None):
38 | """
39 | Crop the image to a bounding box given by top left and bottom right pixels.
40 | :param top_left: tuple, top left pixel.
41 | :param bottom_right: tuple, bottom right pixel
42 | :param resize: If specified, resize the cropped image to this size
43 | """
44 | self.img = self.img[top_left[0]:bottom_right[0], top_left[1]:bottom_right[1]]
45 | if resize is not None:
46 | self.resize(resize)
47 |
48 | def cropped(self, *args, **kwargs):
49 | """
50 | :return: Cropped copy of the image.
51 | """
52 | i = self.copy()
53 | i.crop(*args, **kwargs)
54 | return i
55 |
56 | def normalise(self):
57 | """
58 | Normalise the image by converting to float [0,1] and zero-centering
59 | """
60 | self.img = self.img.astype(np.float32)/255.0
61 | self.img -= self.img.mean()
62 |
63 | def resize(self, shape):
64 | """
65 | Resize image to shape.
66 | :param shape: New shape.
67 | """
68 | if self.img.shape == shape:
69 | return
70 | self.img = resize(self.img, shape, preserve_range=True).astype(self.img.dtype)
71 |
72 | def resized(self, *args, **kwargs):
73 | """
74 | :return: Resized copy of the image.
75 | """
76 | i = self.copy()
77 | i.resize(*args, **kwargs)
78 | return i
79 |
80 | def rotate(self, angle, center=None):
81 | """
82 | Rotate the image.
83 | :param angle: Angle (in radians) to rotate by.
84 | :param center: Center pixel to rotate if specified, otherwise image center is used.
85 | """
86 | if center is not None:
87 | center = (center[1], center[0])
88 | self.img = rotate(self.img, angle/np.pi*180, center=center, mode='symmetric', preserve_range=True).astype(self.img.dtype)
89 |
90 | def rotated(self, *args, **kwargs):
91 | """
92 | :return: Rotated copy of image.
93 | """
94 | i = self.copy()
95 | i.rotate(*args, **kwargs)
96 | return i
97 |
98 | def show(self, ax=None, **kwargs):
99 | """
100 | Plot the image
101 | :param ax: Existing matplotlib axis (optional)
102 | :param kwargs: kwargs to imshow
103 | """
104 | if ax:
105 | ax.imshow(self.img, **kwargs)
106 | else:
107 | plt.imshow(self.img, **kwargs)
108 | plt.show()
109 |
110 | def zoom(self, factor):
111 | """
112 | "Zoom" the image by cropping and resizing.
113 | :param factor: Factor to zoom by. e.g. 0.5 will keep the center 50% of the image.
114 | """
115 | sr = int(self.img.shape[0] * (1 - factor)) // 2
116 | sc = int(self.img.shape[1] * (1 - factor)) // 2
117 | orig_shape = self.img.shape
118 | self.img = self.img[sr:self.img.shape[0] - sr, sc: self.img.shape[1] - sc].copy()
119 | self.img = resize(self.img, orig_shape, mode='symmetric', preserve_range=True).astype(self.img.dtype)
120 |
121 | def zoomed(self, *args, **kwargs):
122 | """
123 | :return: Zoomed copy of the image.
124 | """
125 | i = self.copy()
126 | i.zoom(*args, **kwargs)
127 | return i
128 |
129 |
130 | class DepthImage(Image):
131 | def __init__(self, img):
132 | super().__init__(img)
133 |
134 | @classmethod
135 | def from_pcd(cls, pcd_filename, shape, default_filler=0, index=None):
136 | """
137 | Create a depth image from an unstructured PCD file.
138 | If index isn't specified, use euclidean distance, otherwise choose x/y/z=0/1/2
139 | """
140 | img = np.zeros(shape)
141 | if default_filler != 0:
142 | img += default_filler
143 |
144 | with open(pcd_filename) as f:
145 | for l in f.readlines():
146 | ls = l.split()
147 |
148 | if len(ls) != 5:
149 | # Not a point line in the file.
150 | continue
151 | try:
152 | # Not a number, carry on.
153 | float(ls[0])
154 | except ValueError:
155 | continue
156 |
157 | i = int(ls[4])
158 | r = i // shape[1]
159 | c = i % shape[1]
160 |
161 | if index is None:
162 | x = float(ls[0])
163 | y = float(ls[1])
164 | z = float(ls[2])
165 |
166 | img[r, c] = np.sqrt(x ** 2 + y ** 2 + z ** 2)
167 |
168 | else:
169 | img[r, c] = float(ls[index])
170 |
171 | return cls(img/1000.0)
172 |
173 | @classmethod
174 | def from_tiff(cls, fname):
175 | return cls(imread(fname))
176 |
177 | @classmethod
178 | def from_array(cls, image_array):
179 | return cls(imageio.core.util.Array(image_array))
180 |
181 | def inpaint(self, missing_value=0):
182 | """
183 | Inpaint missing values in depth image.
184 |         :param missing_value: Value to fill in the depth image.
185 | """
186 | # cv2 inpainting doesn't handle the border properly
187 | # https://stackoverflow.com/questions/25974033/inpainting-depth-map-still-a-black-image-border
188 | self.img = cv2.copyMakeBorder(self.img, 1, 1, 1, 1, cv2.BORDER_DEFAULT)
189 | mask = (self.img == missing_value).astype(np.uint8)
190 |
191 | # Scale to keep as float, but has to be in bounds -1:1 to keep opencv happy.
192 | scale = np.abs(self.img).max()
193 | self.img = self.img.astype(np.float32) / scale # Has to be float32, 64 not supported.
194 | self.img = cv2.inpaint(self.img, mask, 1, cv2.INPAINT_NS)
195 |
196 | # Back to original size and value range.
197 | self.img = self.img[1:-1, 1:-1]
198 | self.img = self.img * scale
199 |
200 | def gradients(self):
201 | """
202 |         Compute gradients of the depth image using Sobel filters.
203 |         :return: Gradients in X direction, Gradients in Y direction, Magnitude of XY gradients.
204 | """
205 | grad_x = cv2.Sobel(self.img, cv2.CV_64F, 1, 0, borderType=cv2.BORDER_DEFAULT)
206 | grad_y = cv2.Sobel(self.img, cv2.CV_64F, 0, 1, borderType=cv2.BORDER_DEFAULT)
207 | grad = np.sqrt(grad_x ** 2 + grad_y ** 2)
208 |
209 | return DepthImage(grad_x), DepthImage(grad_y), DepthImage(grad)
210 |
211 | def normalise(self):
212 | """
213 |         Normalise by subtracting the mean and clipping to [-1, 1]
214 | """
215 | self.img = np.clip((self.img - self.img.mean()), -1, 1)
216 |
217 |
218 | class WidthImage(Image):
219 | """
220 | A width image is one that describes the desired gripper width at each pixel.
221 | """
222 | def zoom(self, factor):
223 | """
224 | "Zoom" the image by cropping and resizing. Also scales the width accordingly.
225 | :param factor: Factor to zoom by. e.g. 0.5 will keep the center 50% of the image.
226 | """
227 | super().zoom(factor)
228 | self.img = self.img/factor
229 |
230 | def normalise(self):
231 | """
232 | Normalise by mapping [0, 150] -> [0, 1]
233 | """
234 | self.img = np.clip(self.img, 0, 150.0)/150.0
235 |
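
The depth pre-processing chain used elsewhere in this repo (for example `get_depth` in `vis_ggcnn.py`) is just a sequence of the methods above; a minimal sketch on one of the bundled samples:

```python
# Minimal sketch: prepare a depth image as a 300x300 network input.
from utils.dataset_processing.image import DepthImage

depth = DepthImage.from_tiff('samples/0.tiff')   # bundled sample, relative to planr_grasp_ggcnn/
depth.inpaint()             # fill in missing depth readings
depth.rotate(0.0)           # rotation angle in radians
depth.normalise()           # subtract the mean, clip to [-1, 1]
depth.zoom(0.875)           # keep the centre 87.5% of the frame
depth.resize((300, 300))    # network input size
net_input = depth.img       # numpy array, ready for numpy_to_torch()
```
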
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/timeit.py:
--------------------------------------------------------------------------------
1 | import time
2 |
3 |
4 | class TimeIt:
5 | print_output = True
6 | last_parent = None
7 | level = -1
8 |
9 | def __init__(self, s):
10 | self.s = s
11 | self.t0 = None
12 | self.t1 = None
13 | self.outputs = []
14 | self.parent = None
15 |
16 | def __enter__(self):
17 | self.t0 = time.time()
18 | self.parent = TimeIt.last_parent
19 | TimeIt.last_parent = self
20 | TimeIt.level += 1
21 |
22 | def __exit__(self, t, value, traceback):
23 | self.t1 = time.time()
24 | st = '%s%s: %0.1fms' % (' ' * TimeIt.level, self.s, (self.t1 - self.t0)*1000)
25 | TimeIt.level -= 1
26 |
27 | if self.parent:
28 | self.parent.outputs.append(st)
29 | self.parent.outputs += self.outputs
30 | else:
31 | if TimeIt.print_output:
32 | print(st)
33 | for o in self.outputs:
34 | print(o)
35 | self.outputs = []
36 |
37 | TimeIt.last_parent = self.parent
38 |
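
`TimeIt` is a nestable context manager: inner blocks report to their parent and everything is printed, indented by nesting level, when the outermost block exits. A minimal sketch:

```python
# Minimal sketch: nested timing of two dummy stages.
import time
from utils.timeit import TimeIt

with TimeIt('inference'):
    with TimeIt('preprocess'):
        time.sleep(0.01)
    with TimeIt('forward pass'):
        time.sleep(0.02)

# Example output (timings will vary):
# inference: 30.3ms
#  preprocess: 10.1ms
#  forward pass: 20.1ms
```
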
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/visualisation/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Robotics-and-Computer-Vision-Lab/HandEyeCalib-ROS/581f06bf3f0c51d88c573097f5fd6f58d86bc1a1/planr_grasp_ggcnn/utils/visualisation/__init__.py
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/utils/visualisation/gridshow.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cv2
3 |
4 |
5 | def gridshow(name, imgs, scales, cmaps, width, border=10):
6 | """
7 | Display images in a grid.
8 | :param name: cv2 Window Name to update
9 | :param imgs: List of Images (np.ndarrays)
10 | :param scales: The min/max scale of images to properly scale the colormaps
11 | :param cmaps: List of cv2 Colormaps to apply
12 | :param width: Number of images in a row
13 | :param border: Border (pixels) between images.
14 | """
15 | imgrows = []
16 | imgcols = []
17 |
18 | maxh = 0
19 | for i, (img, cmap, scale) in enumerate(zip(imgs, cmaps, scales)):
20 |
21 | # Scale images into range 0-1
22 | if scale is not None:
23 | img = (np.clip(img, scale[0], scale[1]) - scale[0])/(scale[1]-scale[0])
24 | elif img.dtype == np.float:
25 | img = (img - img.min())/(img.max() - img.min() + 1e-6)
26 |
27 | # Apply colormap (if applicable) and convert to uint8
28 | if cmap is not None:
29 | try:
30 | imgc = cv2.applyColorMap((img * 255).astype(np.uint8), cmap)
31 | except:
32 | imgc = (img*255.0).astype(np.uint8)
33 | else:
34 | imgc = img
35 |
36 | if imgc.shape[0] == 3:
37 | imgc = imgc.transpose((1, 2, 0))
38 | elif imgc.shape[0] == 4:
39 | imgc = imgc[1:, :, :].transpose((1, 2, 0))
40 |
41 | # Arrange row of images.
42 | maxh = max(maxh, imgc.shape[0])
43 | imgcols.append(imgc)
44 | if i > 0 and i % width == (width-1):
45 | imgrows.append(np.hstack([np.pad(c, ((0, maxh - c.shape[0]), (border//2, border//2), (0, 0)), mode='constant') for c in imgcols]))
46 | imgcols = []
47 | maxh = 0
48 |
49 | # Unfinished row
50 | if imgcols:
51 | imgrows.append(np.hstack([np.pad(c, ((0, maxh - c.shape[0]), (border//2, border//2), (0, 0)), mode='constant') for c in imgcols]))
52 |
53 | maxw = max([c.shape[1] for c in imgrows])
54 |
55 | cv2.imshow(name, np.vstack([np.pad(r, ((border//2, border//2), (0, maxw - r.shape[1]), (0, 0)), mode='constant') for r in imgrows]))
56 |
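
A minimal sketch of calling `gridshow` directly; `train_ggcnn.py` uses it the same way when `--vis` is passed:

```python
# Minimal sketch: show two random 300x300 maps side by side with a bone colormap.
import cv2
import numpy as np
from utils.visualisation.gridshow import gridshow

imgs = [np.random.rand(300, 300), np.random.rand(300, 300)]
gridshow('Display', imgs,
         scales=[(0.0, 1.0), (0.0, 1.0)],
         cmaps=[cv2.COLORMAP_BONE] * 2,
         width=2)
cv2.waitKey(0)
```
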
--------------------------------------------------------------------------------
/planr_grasp_ggcnn/vis_ggcnn.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import logging
3 | import pyrealsense2 as rs
4 |
5 | import torch.utils.data
6 |
7 | from models.common import post_process_output
8 | from utils.dataset_processing import evaluation, grasp
9 | from utils.data import get_dataset
10 | from utils.dataset_processing import grasp, image
11 | from matplotlib import pyplot as plt
12 | import numpy as np
13 | import os
14 | import json
15 | import pdb
16 | import cv2
17 | import tifffile
18 | import math
19 | import time
20 |
21 | import rospy
22 | from std_msgs.msg import String, Float64MultiArray
23 |
24 | logging.basicConfig(level=logging.INFO)
25 |
26 |
27 | def parse_args():
28 | parser = argparse.ArgumentParser(description='Evaluate GG-CNN')
29 |
30 | # Network
31 | parser.add_argument('--network', type=str, default='ggcnn_weights_cornell/ggcnn_epoch_23_cornell', help='Path to saved network to evaluate')
32 | parser.add_argument('--num-workers', type=int, default=8, help='Dataset workers')
33 | parser.add_argument('--n-grasps', type=int, default=1, help='Number of grasps to consider per image')
34 |
35 | args = parser.parse_args()
36 |
37 | return args
38 |
39 | def get_depth(depth_file, rot=0, zoom=1.0, output_size=300):
40 | depth_img = image.DepthImage.from_tiff(depth_file)
41 | depth_img.inpaint()
42 | depth_img.rotate(rot)
43 | depth_img.normalise()
44 | depth_img.zoom(zoom)
45 | depth_img.resize((output_size, output_size))
46 | return depth_img.img
47 |
48 |
49 | def get_rgb(rgb_file, rot=0, zoom=1.0, output_size=300, normalise=True):
50 | rgb_img = image.Image.from_file(rgb_file)
51 | rgb_img.rotate(rot)
52 | rgb_img.zoom(zoom)
53 | rgb_img.resize((output_size, output_size))
54 | if normalise:
55 | rgb_img.normalise()
56 | rgb_img.img = rgb_img.img.transpose((2, 0, 1))
57 | return rgb_img.img
58 |
59 | def get_rgb_from_array(rgb_array, rot=0, zoom=1.0, output_size=300, normalise=True):
60 | rgb_img = image.Image.from_array(rgb_array)
61 | rgb_img.rotate(rot)
62 | rgb_img.zoom(zoom)
63 | rgb_img.resize((output_size, output_size))
64 | if normalise:
65 | rgb_img.normalise()
66 | rgb_img.img = rgb_img.img.transpose((2, 0, 1))
67 | return rgb_img.img
68 |
69 | def get_depth_from_array(depth_array, rot=0, zoom=1.0, output_size=300):
70 | depth_img = image.DepthImage.from_array(depth_array)
71 | depth_img.inpaint()
72 | depth_img.rotate(rot)
73 | depth_img.normalise()
74 | depth_img.zoom(zoom)
75 | depth_img.resize((output_size, output_size))
76 | return depth_img.img
77 |
78 | def numpy_to_torch(s):
79 | if len(s.shape) == 2:
80 | return torch.from_numpy(np.expand_dims(s, 0).astype(np.float32))
81 | else:
82 | return torch.from_numpy(s.astype(np.float32))
83 |
84 | def rotate_points(pt, center, th):
85 | """
86 |     Rotate pt about center by -th radians and return the rotated [x, y].
87 | """
88 | x, y = pt
89 | cx, cy = center
90 |
91 | x -= cx
92 | y -= cy
93 |
94 | xx = x
95 | x = x*math.cos(-th) - y*math.sin(-th)
96 | y = xx * math.sin(-th) + y*math.cos(-th)
97 |
98 | x += cx
99 | y += cy
100 |
101 | return [x,y]
102 |
103 |
104 | if __name__ == '__main__':
105 | args = parse_args()
106 |
107 |     net = torch.load(args.network)  # loads the full serialised model (not a state_dict)
108 | device = torch.device("cuda:0")
109 |
110 | # load depth input
111 |     BasePath = '/home/tc/Github_repo/ggcnn/samples'  # local samples directory -- adjust to your own path
112 | # rgb_f = os.path.join(BasePath, '4.png')
113 | # rgb_im = get_rgb(rgb_f, normalise=False)
114 | # depth_f = rgb_f.replace('png', 'tiff')
115 | # depth_im = get_depth(depth_f)
116 | count = len(os.listdir(BasePath))
117 |     img_index = count // 2  # samples/ holds paired .png/.tiff files, so the next free index is half the file count
118 |
119 | # init rosnode and topic publisher
120 | rospy.init_node('ggcnn')
121 | grasp_pose_pub = rospy.Publisher('/ggcnn_grasp_pose', Float64MultiArray, queue_size=10)
122 |
123 |     # configure the RGB and depth streams
124 | pipeline = rs.pipeline()
125 | config = rs.config()
126 | config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
127 | config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
128 |
129 | # Start streaming
130 | profile = pipeline.start(config)
131 |
132 | # Get depth camera's depth scale
133 | depth_sensor = profile.get_device().first_depth_sensor()
134 | depth_scale = depth_sensor.get_depth_scale()
135 | print('Depth Scale is: ' + str(depth_scale))
136 |
137 | # Align depth frame to RGB frame
138 | align_to = rs.stream.color
139 | align = rs.align(align_to)
140 |
141 |     sensor = pipeline.get_active_profile().get_device().query_sensors()[1]  # index 1 is the RGB sensor on the D435i
142 | sensor.set_option(rs.option.exposure, 250)
143 | sensor.set_option(rs.option.auto_exposure_priority, True)
144 |
145 | # grasp drawing params
146 | radius = 1
147 | color = (0, 0, 255)
148 | thickness = 2
149 | isClosed = True
150 |
151 |
152 | while True:
153 |
154 | # Wait for a coherent pair of frames: depth and color
155 | frames = pipeline.wait_for_frames()
156 | # Align the depth frame to color frame
157 | aligned_frames = align.process(frames)
158 | # Get aligned frames
159 | aligned_depth_frame = aligned_frames.get_depth_frame() # aligned_depth_frame is a 640x480 depth image
160 | color_frame = aligned_frames.get_color_frame()
161 |
162 |         # get intrinsic params
163 | # color_profile = color_frame.get_profile()
164 | # cvsprofile = rs.video_stream_profile(color_profile)
165 | # color_intrin = cvsprofile.get_intrinsics()
166 | # color_intrin_part = [color_intrin.ppx, color_intrin.ppy, color_intrin.fx, color_intrin.fy]
167 | # print(color_intrin_part)
168 |
169 | # Validate that both frames are valid
170 | if not aligned_depth_frame or not color_frame:
171 |             continue  # skip incomplete frame pairs instead of aborting the stream
172 |
173 | depth_array = np.asanyarray(aligned_depth_frame.get_data())
174 | depth_array = (depth_array/1000.0).astype(np.float32)
175 | color_array = np.asanyarray(color_frame.get_data()).astype(np.uint8)
176 |
177 | rgb_im = get_rgb_from_array(color_array, normalise=False)
178 | depth_im = get_depth_from_array(depth_array)
179 |
180 | depth_tensor = numpy_to_torch(depth_im).unsqueeze(0).to(device)
181 | grasp_mask = os.path.join(BasePath, 'output/grasp_mask.npy')
182 | rel_file = os.path.join(BasePath, 'output/point_rel.json')
183 |
184 | # test on depth input
185 |         TOG_FLAG = False  # task-oriented grasping (needs grasp_mask/rel_file); False predicts task-free grasps
186 | with torch.no_grad():
187 | preds = net.compute_grasp(depth_tensor)
188 | q_img, ang_img, width_img = post_process_output(preds['pos'], preds['cos'], preds['sin'], preds['width'])
189 | if TOG_FLAG:
190 | out_grasps = evaluation.plot_output(rgb_im, depth_im, q_img,
191 | ang_img, no_grasps=args.n_grasps, grasp_width_img=width_img, grasp_mask=grasp_mask, rel_file=rel_file)
192 | else:
193 | # out_grasps = evaluation.plot_output(rgb_im, depth_im, q_img,
194 | # ang_img, no_grasps=args.n_grasps, grasp_width_img=width_img)
195 | out_grasps = evaluation.compute_grasps(rgb_im, depth_im, q_img,
196 | ang_img, no_grasps=args.n_grasps, grasp_width_img=width_img)
197 |
198 |
199 | # output predicted grasp parameters
200 | TOG = 'task-oriented' if TOG_FLAG else 'task-free'
201 | print("Predicting %s grasps:" %TOG)
202 | for grasp in out_grasps:
203 | print("center: " + str(grasp.center) + ", quality: %f, depth: %f, width: %f, length: %f, angle: %f" % (grasp.quality, depth_array[grasp.center[0], grasp.center[1]], grasp.width, grasp.length, grasp.angle))
204 |
205 | grasp_topic = Float64MultiArray(data=[np.float64(grasp.center[0]), np.float64(grasp.center[1]), depth_array[grasp.center[0], grasp.center[1]], grasp.angle, grasp.length, grasp.quality])
206 | grasp_pose_pub.publish(grasp_topic)
207 |
208 |             # rotate grasp points
209 |             pts = np.array([
210 |                 rotate_points((grasp.center[1] - grasp.length//2, grasp.center[0] - grasp.width//2),
211 |                               (grasp.center[1], grasp.center[0]), grasp.angle),
212 |                 rotate_points((grasp.center[1] + grasp.length//2, grasp.center[0] - grasp.width//2),
213 |                               (grasp.center[1], grasp.center[0]), grasp.angle),
214 |                 rotate_points((grasp.center[1] + grasp.length//2, grasp.center[0] + grasp.width//2),
215 |                               (grasp.center[1], grasp.center[0]), grasp.angle),
216 |                 rotate_points((grasp.center[1] - grasp.length//2, grasp.center[0] + grasp.width//2),
217 |                               (grasp.center[1], grasp.center[0]), grasp.angle),
218 |             ], np.int32)
219 | 
220 | 
221 | 
222 |
223 | pts = pts.reshape((-1, 1, 2))
224 |
225 |
226 | # draw computed grasps in real time
227 | cv2.polylines(color_array, [pts],
228 | isClosed, color, thickness)
229 | cv2.circle(color_array, (grasp.center[1], grasp.center[0]), radius, color, thickness)
230 | cv2.imshow('detected grasps', color_array)
231 |
232 |
233 | key = cv2.waitKey(1) & 0xFF
234 | if key == 27:
235 | cv2.destroyAllWindows()
236 | break
237 | elif key == ord('s'):
238 | depth_name = str(img_index)+'.tiff'
239 | color_name = str(img_index) + '.png'
240 | cv2.imwrite(os.path.join(BasePath, color_name), color_array)
241 |             tifffile.imwrite(os.path.join(BasePath, depth_name), depth_array)  # imwrite supersedes the deprecated imsave
242 | print("RGB and depth images %d saved" % img_index)
243 | img_index += 1
244 |
245 |
--------------------------------------------------------------------------------
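`vis_ggcnn.py` publishes each predicted grasp on the `/ggcnn_grasp_pose` topic as a `Float64MultiArray` laid out as `[row, col, depth (m), angle (rad), length (px), quality]`. As a hypothetical illustration of how a downstream node (such as the pick-and-place script in this repo) might consume it, the sketch below back-projects the grasp centre into a camera-frame 3D point with the pinhole model. The node name and the intrinsics values `PPX, PPY, FX, FY` are placeholders; in practice they come from the RealSense colour stream, as in the commented-out intrinsics block in `vis_ggcnn.py`:

```python
#!/usr/bin/env python
# Hypothetical consumer of the grasp parameters published by vis_ggcnn.py.
# Message layout (matching the publisher): [row, col, depth_m, angle_rad, length_px, quality].
import rospy
from std_msgs.msg import Float64MultiArray

# Placeholder pinhole intrinsics for the 640x480 colour stream -- replace with the
# values reported by the RealSense SDK (color_intrin.ppx, .ppy, .fx, .fy).
PPX, PPY, FX, FY = 320.0, 240.0, 615.0, 615.0


def grasp_callback(msg):
    row, col, depth, angle, length, quality = msg.data

    # Pinhole back-projection of the grasp centre into the camera frame (metres),
    # treating (row, col) as colour-frame pixel coordinates, as the publisher does
    # when it samples depth_array:
    #   X = (u - ppx) * Z / fx,  Y = (v - ppy) * Z / fy,  Z = depth
    x = (col - PPX) * depth / FX
    y = (row - PPY) * depth / FY
    z = depth

    rospy.loginfo("grasp at (%.3f, %.3f, %.3f) m, angle %.2f rad, quality %.2f",
                  x, y, z, angle, quality)
    # A pick-and-place node would then transform this camera-frame point into the
    # robot base frame using the hand-eye calibration result and plan with MoveIt.


if __name__ == '__main__':
    rospy.init_node('ggcnn_grasp_listener')
    rospy.Subscriber('/ggcnn_grasp_pose', Float64MultiArray, grasp_callback)
    rospy.spin()
```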