├── .gitignore
├── LICENSE
├── README.md
├── cad
│   ├── kinova_mount.stl
│   ├── realsense_mount.stl
│   └── realsense_mount_2.stl
├── ggcnn_kinova_grasping
│   ├── CMakeLists.txt
│   ├── launch
│   │   └── wrist_camera.launch
│   ├── package.xml
│   └── scripts
│       ├── .gitignore
│       ├── helpers
│       │   ├── __init__.py
│       │   ├── covariance.py
│       │   ├── gripper_action_client.py
│       │   ├── position_action_client.py
│       │   └── transforms.py
│       ├── kinova_closed_loop_grasp.py
│       ├── kinova_open_loop_grasp.py
│       └── run_ggcnn.py
└── requirements.txt
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | .idea
3 | .python-version
4 | *.pyc
5 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2018, Douglas Morrison, ARC Centre of Excellence for Robotic Vision (ACRV), Queensland University of Technology
2 | All rights reserved.
3 |
4 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
5 |
6 | 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
7 |
8 | 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
9 |
10 | 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
11 |
12 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Generative Grasping CNN (GG-CNN) Grasping with Kinova Mico
2 |
3 | This repository contains a ROS package for running the GG-CNN grasping pipeline on a Kinova Mico arm. For the GG-CNN implementation and training, please see [https://github.com/dougsm/ggcnn](https://github.com/dougsm/ggcnn).
4 |
5 | The GG-CNN is a lightweight, fully-convolutional network which predicts the quality and pose of antipodal grasps at every pixel in an input depth image. The lightweight and single-pass generative nature of GG-CNN allows for fast execution and closed-loop control, enabling accurate grasping in dynamic environments where objects are moved during the grasp attempt.
6 |
7 | ## Paper
8 |
9 | **Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach**
10 |
11 | *[Douglas Morrison](http://dougsm.com), [Peter Corke](http://petercorke.com), [Jürgen Leitner](http://juxi.net)*
12 |
13 | Robotics: Science and Systems (RSS) 2018
14 |
15 | [arXiv](https://arxiv.org/abs/1804.05172) | [Video](https://www.youtube.com/watch?v=7nOoxuGEcxA)
16 |
17 | If you use this work, please cite:
18 |
19 | ```text
20 | @inproceedings{morrison2018closing,
21 | title={Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach},
22 | author={Morrison, Douglas and Corke, Peter and Leitner, Jürgen},
23 | booktitle={Robotics: Science and Systems (RSS)},
24 | year={2018}
25 | }
26 | ```
27 |
28 |
29 | ## Installation
30 |
31 | This code was developed with Python 2.7 on Ubuntu 16.04 with ROS Kinetic. Python requirements can be found in `requirements.txt`.
32 |
33 | You will also require the [Kinova ROS Packages](https://github.com/Kinovarobotics/kinova-ros) and [Realsense Camera Packages](http://wiki.ros.org/realsense_camera).
34 |
35 | A 3D printed mount for the Intel Realsense SR300 on the Kinova Mico arm can be found in the `cad` folder.
36 |
37 | ## GG-CNN Model
38 |
39 | See [https://github.com/dougsm/ggcnn](https://github.com/dougsm/ggcnn) for instructions on downloading or training the GG-CNN model.
40 |
41 | ## Running
42 |
43 | This implementation is specific to a Kinova Mico robot and an Intel Realsense SR300 camera.
44 |
45 | Once the ROS package is compiled and sourced:
46 |
47 | 1. Launch the robot `roslaunch kinova_bringup kinova_robot.launch kinova_robotType:=m1n6s200`
48 | 2. Start the camera `roslaunch ggcnn_kinova_grasping wrist_camera.launch`
49 | 3. Run the GG-CNN node `rosrun ggcnn_kinova_grasping run_ggcnn.py`
50 | 4. To perform open-loop grasping, run `rosrun ggcnn_kinova_grasping kinova_open_loop_grasp.py`, or to perform closed-loop grasping, run `rosrun ggcnn_kinova_grasping kinova_closed_loop_grasp.py`.
51 |
52 | **Contact**
53 |
54 | For any questions or comments, contact [Doug Morrison](mailto:doug.morrison@roboticvision.org).
55 |
--------------------------------------------------------------------------------
/cad/kinova_mount.stl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dougsm/ggcnn_kinova_grasping/61f8970f2a190e44c9732d6064cdc72c7a49adaf/cad/kinova_mount.stl
--------------------------------------------------------------------------------
/cad/realsense_mount.stl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dougsm/ggcnn_kinova_grasping/61f8970f2a190e44c9732d6064cdc72c7a49adaf/cad/realsense_mount.stl
--------------------------------------------------------------------------------
/cad/realsense_mount_2.stl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dougsm/ggcnn_kinova_grasping/61f8970f2a190e44c9732d6064cdc72c7a49adaf/cad/realsense_mount_2.stl
--------------------------------------------------------------------------------
/ggcnn_kinova_grasping/CMakeLists.txt:
--------------------------------------------------------------------------------
1 | cmake_minimum_required(VERSION 2.8.3)
2 | project(ggcnn_kinova_grasping)
3 |
4 | ## Compile as C++11, supported in ROS Kinetic and newer
5 | # add_compile_options(-std=c++11)
6 |
7 | ## Find catkin macros and libraries
8 | ## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
9 | ## is used, also find other catkin packages
10 | find_package(catkin REQUIRED COMPONENTS
11 | geometry_msgs
12 | roscpp
13 | rospy
14 | std_msgs
15 | )
16 |
17 | ## System dependencies are found with CMake's conventions
18 | # find_package(Boost REQUIRED COMPONENTS system)
19 |
20 |
21 | ## Uncomment this if the package has a setup.py. This macro ensures
22 | ## modules and global scripts declared therein get installed
23 | ## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
24 | # catkin_python_setup()
25 |
26 | ################################################
27 | ## Declare ROS messages, services and actions ##
28 | ################################################
29 |
30 | ## To declare and build messages, services or actions from within this
31 | ## package, follow these steps:
32 | ## * Let MSG_DEP_SET be the set of packages whose message types you use in
33 | ## your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
34 | ## * In the file package.xml:
35 | ## * add a build_depend tag for "message_generation"
36 | ## * add a build_depend and a run_depend tag for each package in MSG_DEP_SET
37 | ## * If MSG_DEP_SET isn't empty the following dependency has been pulled in
38 | ## but can be declared for certainty nonetheless:
39 | ## * add a run_depend tag for "message_runtime"
40 | ## * In this file (CMakeLists.txt):
41 | ## * add "message_generation" and every package in MSG_DEP_SET to
42 | ## find_package(catkin REQUIRED COMPONENTS ...)
43 | ## * add "message_runtime" and every package in MSG_DEP_SET to
44 | ## catkin_package(CATKIN_DEPENDS ...)
45 | ## * uncomment the add_*_files sections below as needed
46 | ## and list every .msg/.srv/.action file to be processed
47 | ## * uncomment the generate_messages entry below
48 | ## * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)
49 |
50 | ## Generate messages in the 'msg' folder
51 | # add_message_files(
52 | # FILES
53 | # Message1.msg
54 | # Message2.msg
55 | # )
56 |
57 | ## Generate services in the 'srv' folder
58 | # add_service_files(
59 | # FILES
60 | # Service1.srv
61 | # Service2.srv
62 | # )
63 |
64 | ## Generate actions in the 'action' folder
65 | # add_action_files(
66 | # FILES
67 | # Action1.action
68 | # Action2.action
69 | # )
70 |
71 | ## Generate added messages and services with any dependencies listed here
72 | # generate_messages(
73 | # DEPENDENCIES
74 | # geometry_msgs# std_msgs
75 | # )
76 |
77 | ################################################
78 | ## Declare ROS dynamic reconfigure parameters ##
79 | ################################################
80 |
81 | ## To declare and build dynamic reconfigure parameters within this
82 | ## package, follow these steps:
83 | ## * In the file package.xml:
84 | ## * add a build_depend and a run_depend tag for "dynamic_reconfigure"
85 | ## * In this file (CMakeLists.txt):
86 | ## * add "dynamic_reconfigure" to
87 | ## find_package(catkin REQUIRED COMPONENTS ...)
88 | ## * uncomment the "generate_dynamic_reconfigure_options" section below
89 | ## and list every .cfg file to be processed
90 |
91 | ## Generate dynamic reconfigure parameters in the 'cfg' folder
92 | # generate_dynamic_reconfigure_options(
93 | # cfg/DynReconf1.cfg
94 | # cfg/DynReconf2.cfg
95 | # )
96 |
97 | ###################################
98 | ## catkin specific configuration ##
99 | ###################################
100 | ## The catkin_package macro generates cmake config files for your package
101 | ## Declare things to be passed to dependent projects
102 | ## INCLUDE_DIRS: uncomment this if your package contains header files
103 | ## LIBRARIES: libraries you create in this project that dependent projects also need
104 | ## CATKIN_DEPENDS: catkin_packages dependent projects also need
105 | ## DEPENDS: system dependencies of this project that dependent projects also need
106 | catkin_package(
107 | # INCLUDE_DIRS include
108 | # LIBRARIES ggcnn_kinova_grasping
109 | # CATKIN_DEPENDS geometry_msgs roscpp rospy std_msgs
110 | # DEPENDS system_lib
111 | )
112 |
113 | ###########
114 | ## Build ##
115 | ###########
116 |
117 | ## Specify additional locations of header files
118 | ## Your package locations should be listed before other locations
119 | include_directories(
120 | # include
121 | ${catkin_INCLUDE_DIRS}
122 | )
123 |
124 | ## Declare a C++ library
125 | # add_library(${PROJECT_NAME}
126 | # src/${PROJECT_NAME}/ggcnn_kinova_grasping.cpp
127 | # )
128 |
129 | ## Add cmake target dependencies of the library
130 | ## as an example, code may need to be generated before libraries
131 | ## either from message generation or dynamic reconfigure
132 | # add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
133 |
134 | ## Declare a C++ executable
135 | ## With catkin_make all packages are built within a single CMake context
136 | ## The recommended prefix ensures that target names across packages don't collide
137 | # add_executable(${PROJECT_NAME}_node src/ggcnn_kinova_grasping_node.cpp)
138 |
139 | ## Rename C++ executable without prefix
140 | ## The above recommended prefix causes long target names, the following renames the
141 | ## target back to the shorter version for ease of user use
142 | ## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node"
143 | # set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "")
144 |
145 | ## Add cmake target dependencies of the executable
146 | ## same as for the library above
147 | # add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
148 |
149 | ## Specify libraries to link a library or executable target against
150 | # target_link_libraries(${PROJECT_NAME}_node
151 | # ${catkin_LIBRARIES}
152 | # )
153 |
154 | #############
155 | ## Install ##
156 | #############
157 |
158 | # all install targets should use catkin DESTINATION variables
159 | # See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html
160 |
161 | ## Mark executable scripts (Python etc.) for installation
162 | ## in contrast to setup.py, you can choose the destination
163 | # install(PROGRAMS
164 | # scripts/my_python_script
165 | # DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
166 | # )
167 |
168 | ## Mark executables and/or libraries for installation
169 | # install(TARGETS ${PROJECT_NAME} ${PROJECT_NAME}_node
170 | # ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
171 | # LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
172 | # RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
173 | # )
174 |
175 | ## Mark cpp header files for installation
176 | # install(DIRECTORY include/${PROJECT_NAME}/
177 | # DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
178 | # FILES_MATCHING PATTERN "*.h"
179 | # PATTERN ".svn" EXCLUDE
180 | # )
181 |
182 | ## Mark other files for installation (e.g. launch and bag files, etc.)
183 | # install(FILES
184 | # # myfile1
185 | # # myfile2
186 | # DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
187 | # )
188 |
189 | #############
190 | ## Testing ##
191 | #############
192 |
193 | ## Add gtest based cpp test target and link libraries
194 | # catkin_add_gtest(${PROJECT_NAME}-test test/test_ggcnn_kinova_grasping.cpp)
195 | # if(TARGET ${PROJECT_NAME}-test)
196 | # target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})
197 | # endif()
198 |
199 | ## Add folders to be run by python nosetests
200 | # catkin_add_nosetests(test)
201 |
--------------------------------------------------------------------------------
/ggcnn_kinova_grasping/launch/wrist_camera.launch:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/ggcnn_kinova_grasping/package.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <package format="2">
3 |   <name>ggcnn_kinova_grasping</name>
4 |   <version>0.0.1</version>
5 |   <description>The ggcnn_kinova_grasping package</description>
6 |
7 |
8 |   <maintainer email="doug.morrison@roboticvision.org">Doug Morrison</maintainer>
9 |
10 |
11 |
12 |   <license>BSD</license>
13 |
14 |   <buildtool_depend>catkin</buildtool_depend>
15 |   <build_depend>geometry_msgs</build_depend>
16 |   <build_depend>roscpp</build_depend>
17 |   <build_depend>rospy</build_depend>
18 |   <build_depend>std_msgs</build_depend>
19 |   <build_export_depend>geometry_msgs</build_export_depend>
20 |   <build_export_depend>roscpp</build_export_depend>
21 |   <build_export_depend>rospy</build_export_depend>
22 |   <build_export_depend>std_msgs</build_export_depend>
23 |   <exec_depend>geometry_msgs</exec_depend>
24 |   <exec_depend>roscpp</exec_depend>
25 |   <exec_depend>rospy</exec_depend>
26 |   <exec_depend>std_msgs</exec_depend>
27 |
28 |
29 |
30 |   <export>
31 |
32 |   </export>
33 | </package>
--------------------------------------------------------------------------------
/ggcnn_kinova_grasping/scripts/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 |
--------------------------------------------------------------------------------
/ggcnn_kinova_grasping/scripts/helpers/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dougsm/ggcnn_kinova_grasping/61f8970f2a190e44c9732d6064cdc72c7a49adaf/ggcnn_kinova_grasping/scripts/helpers/__init__.py
--------------------------------------------------------------------------------
/ggcnn_kinova_grasping/scripts/helpers/covariance.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from numpy.random import normal as nd
3 |
4 | def generate_cartesian_covariance(sd):
5 | if sd == 0:
6 | return np.identity(3)
7 | return np.array([
8 | [1 + nd(0, sd), nd(0, sd), nd(0, sd)],
9 | [nd(0, sd), 1 + nd(0, sd), nd(0, sd)],
10 | [nd(0, sd), nd(0, sd), 1 + nd(0, sd)]
11 | ])
12 |
--------------------------------------------------------------------------------
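
The helper above returns the 3x3 identity when `sd == 0`, and a randomly perturbed near-identity matrix otherwise. The grasping scripts right-multiply Cartesian velocities by this matrix (`np.dot(v, VELO_COV)`) to inject a small control nonlinearity. A minimal usage sketch, assuming it is run from the `scripts` directory; the `sd = 0.1` and the velocity vector are illustrative values, not ones taken from the scripts:

```python
import numpy as np
from helpers.covariance import generate_cartesian_covariance

# sd = 0 gives the identity (no perturbation); a small sd such as 0.1
# gives a slightly randomised mixing matrix.
velo_cov = generate_cartesian_covariance(0.1)

# A Cartesian velocity [vx, vy, vz], as computed in the grasping scripts.
v = np.array([0.05, -0.02, -0.08])

# The scripts apply the matrix with a right-multiplication.
vc = np.dot(v, velo_cov)
print(vc)  # Roughly v, with small random cross-coupling between axes.
```
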
/ggcnn_kinova_grasping/scripts/helpers/gripper_action_client.py:
--------------------------------------------------------------------------------
1 | # Borrowed and modified from the kinova-ros examples.
2 |
3 | import rospy
4 | import actionlib
5 | import kinova_msgs.msg
6 |
7 | finger_maxTurn = 7400
8 |
9 | def set_finger_positions(finger_positions):
10 | """Send a gripper goal to the action server."""
11 | global gripper_client
12 | global finger_maxTurn
13 |
14 | finger_positions[0] = min(finger_maxTurn, finger_positions[0])
15 | finger_positions[1] = min(finger_maxTurn, finger_positions[1])
16 |
17 | goal = kinova_msgs.msg.SetFingersPositionGoal()
18 | goal.fingers.finger1 = float(finger_positions[0])
19 | goal.fingers.finger2 = float(finger_positions[1])
20 | # The MICO arm has only two fingers, but the same action definition is used
21 | if len(finger_positions) < 3:
22 | goal.fingers.finger3 = 0.0
23 | else:
24 | goal.fingers.finger3 = float(finger_positions[2])
25 | gripper_client.send_goal(goal)
26 | if gripper_client.wait_for_result(rospy.Duration(5.0)):
27 | return gripper_client.get_result()
28 | else:
29 | gripper_client.cancel_all_goals()
30 |         rospy.logwarn('The gripper action timed out.')
31 | return None
32 |
33 | action_address = '/m1n6s200_driver/fingers_action/finger_positions'
34 | gripper_client = actionlib.SimpleActionClient(action_address, kinova_msgs.msg.SetFingersPositionAction)
35 |
--------------------------------------------------------------------------------
/ggcnn_kinova_grasping/scripts/helpers/position_action_client.py:
--------------------------------------------------------------------------------
1 | # Borrowed and modified from the kinova-ros examples.
2 |
3 | import rospy
4 | import actionlib
5 | import kinova_msgs.msg
6 | import geometry_msgs.msg
7 | import std_msgs.msg
8 |
9 |
10 | def move_to_position(position, orientation):
11 | """Send a cartesian goal to the action server."""
12 | global position_client
13 | if position_client is None:
14 | init()
15 |
16 | goal = kinova_msgs.msg.ArmPoseGoal()
17 | goal.pose.header = std_msgs.msg.Header(frame_id=('m1n6s200_link_base'))
18 | goal.pose.pose.position = geometry_msgs.msg.Point(
19 | x=position[0], y=position[1], z=position[2])
20 | goal.pose.pose.orientation = geometry_msgs.msg.Quaternion(
21 | x=orientation[0], y=orientation[1], z=orientation[2], w=orientation[3])
22 |
23 | position_client.send_goal(goal)
24 |
25 | if position_client.wait_for_result(rospy.Duration(10.0)):
26 | return position_client.get_result()
27 | else:
28 | position_client.cancel_all_goals()
29 | print(' the cartesian action timed-out')
30 | return None
31 |
32 | action_address = '/m1n6s200_driver/pose_action/tool_pose'
33 | position_client = None
34 |
35 | def init():
36 | global position_client
37 | position_client = actionlib.SimpleActionClient(action_address, kinova_msgs.msg.ArmPoseAction)
38 | position_client.wait_for_server()
39 |
--------------------------------------------------------------------------------
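
`move_to_position` lazily constructs the action client on first use (via `init()`, which blocks on `wait_for_server`), so it only needs a running ROS node and the `m1n6s200` driver to be up. A minimal sketch that sends the same home pose the closed-loop script uses; the node name is an arbitrary example:

```python
import numpy as np
import rospy

from helpers.position_action_client import move_to_position

rospy.init_node('move_to_home_example')  # Example node name.

# Home pose used by the closed-loop script: position (0, -0.38, 0.35) m in
# the base frame, with the fixed grasping orientation quaternion it uses.
move_to_position([0, -0.38, 0.35], [0.99, 0, 0, np.sqrt(1 - 0.99**2)])
```
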
/ggcnn_kinova_grasping/scripts/helpers/transforms.py:
--------------------------------------------------------------------------------
1 | import rospy
2 | import geometry_msgs.msg as gmsg
3 | import tf2_ros
4 | import tf2_geometry_msgs
5 |
6 | # Lazy create on use (convert_pose) to avoid errors.
7 | tfBuffer = None
8 | listener = None
9 |
10 |
11 | def _init_tf():
12 | # Create buffer and listener
13 | # Something has changed in tf that means this must happen after init_node
14 | global tfBuffer, listener
15 | tfBuffer = tf2_ros.Buffer()
16 | listener = tf2_ros.TransformListener(tfBuffer)
17 |
18 |
19 | def quaternion_to_list(quaternion):
20 | return [quaternion.x, quaternion.y, quaternion.z, quaternion.w]
21 |
22 |
23 | def list_to_quaternion(l):
24 | q = gmsg.Quaternion()
25 | q.x = l[0]
26 | q.y = l[1]
27 | q.z = l[2]
28 | q.w = l[3]
29 | return q
30 |
31 |
32 | def convert_pose(pose, from_frame, to_frame):
33 | """
34 | Convert a pose or transform between frames using tf.
35 |     pose -> A geometry_msgs.msg/Pose that defines the robot's position and orientation in a reference_frame
36 | from_frame -> A string that defines the original reference_frame of the robot
37 | to_frame -> A string that defines the desired reference_frame of the robot to convert to
38 | """
39 | global tfBuffer, listener
40 |
41 | if tfBuffer is None or listener is None:
42 | _init_tf()
43 |
44 | try:
45 | trans = tfBuffer.lookup_transform(to_frame, from_frame, rospy.Time(0), rospy.Duration(1.0))
46 | except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException), e:
47 | print(e)
48 | rospy.logerr('FAILED TO GET TRANSFORM FROM %s to %s' % (to_frame, from_frame))
49 | return None
50 |
51 | spose = gmsg.PoseStamped()
52 | spose.pose = pose
53 |     spose.header.stamp = rospy.Time.now()
54 | spose.header.frame_id = from_frame
55 |
56 | p2 = tf2_geometry_msgs.do_transform_pose(spose, trans)
57 |
58 | return p2.pose
59 |
60 |
61 | def current_robot_pose(reference_frame, base_frame):
62 | """
63 | Get the current pose of the robot in the given reference frame
64 |     reference_frame -> A string that defines the reference_frame that the robot's current pose will be defined in
65 | """
66 | # Create Pose
67 | p = gmsg.Pose()
68 | p.orientation.w = 1.0
69 |
70 |     # Transform the robot's current pose into the given reference frame
71 | return convert_pose(p, base_frame, reference_frame)
72 |
73 |
74 | def publish_stamped_transform(stamped_transform, seconds=1):
75 | """
76 | Publish a stamped transform for debugging purposes.
77 | stamped_transform -> A geometry_msgs/TransformStamped to be published
78 | seconds -> An int32 that defines the duration the transform will be broadcast for
79 | """
80 | # Create broadcast node
81 | br = tf2_ros.TransformBroadcaster()
82 |
83 | # Publish once first.
84 | stamped_transform.header.stamp = rospy.Time.now()
85 | br.sendTransform(stamped_transform)
86 |
87 | # Publish transform for set time.
88 | i = 0
89 | iterations = seconds/0.05
90 | while not rospy.is_shutdown() and i < iterations:
91 | stamped_transform.header.stamp = rospy.Time.now()
92 | br.sendTransform(stamped_transform)
93 | rospy.sleep(0.05)
94 | i += 1
95 |
96 |
97 | def publish_transform(transform, reference_frame, name, seconds=1):
98 | """
99 | Publish a Transform for debugging purposes.
100 | transform -> A geometry_msgs/Transform to be published
101 | reference_frame -> A string defining the reference frame of the transform
102 | seconds -> An int32 that defines the duration the transform will be broadcast for
103 | """
104 | # Create a stamped_transform and store the transform in it
105 | st = gmsg.TransformStamped()
106 | st.transform = transform
107 | st.header.frame_id = reference_frame
108 | st.child_frame_id = name
109 |
110 | # Call the publish_stamped_transform function
111 | publish_stamped_transform(st, seconds)
112 |
113 |
114 | def publish_pose_as_transform(pose, reference_frame, name, seconds=1):
115 | """
116 | Publish a pose as a transform so that it is visualised in rviz.
117 | pose -> A geometry_msgs.msg/Pose to be converted into a transform and published
118 | reference_frame -> A string defining the reference_frame of the pose
119 | name -> A string defining the child frame of the transform
120 | seconds -> An int32 that defines the duration the transform will be broadcast for
121 | """
122 |
123 | # Create a broadcast node and a stamped transform to broadcast
124 | t = gmsg.TransformStamped()
125 |
126 | # Prepare broadcast message
127 | t.header.frame_id = reference_frame
128 | t.child_frame_id = name
129 |
130 | # Copy in pose values to transform
131 | t.transform.translation = pose.position
132 | t.transform.rotation = pose.orientation
133 |
134 | # Call the publish_stamped_transform function
135 | publish_stamped_transform(t, seconds)
136 |
137 |
138 | def publish_tf_quaterion_as_transform(translation, quaternion, reference_frame, name, seconds=1):
139 | qm = gmsg.Transform()
140 | qm.translation.x = translation[0]
141 | qm.translation.y = translation[1]
142 | qm.translation.z = translation[2]
143 | qm.rotation.x = quaternion[0]
144 | qm.rotation.y = quaternion[1]
145 | qm.rotation.z = quaternion[2]
146 | qm.rotation.w = quaternion[3]
147 | publish_transform(qm, reference_frame, name, seconds)
148 |
149 |
150 | def align_pose_orientation_to_frame(from_pose, from_reference_frame, to_reference_frame):
151 | """
152 | Update the orientation of from_pose so that it matches the orientation of to_reference_frame
153 | Useful for aligning a desired position pose with a gripper, for example.
154 |     from_pose -> A geometry_msgs.msg/Pose to align
155 |     from_reference_frame -> A string defining the reference_frame of the pose
156 |     to_reference_frame -> A string defining the reference_frame to align to
157 | """
158 | # Create transform
159 | p = gmsg.Pose()
160 | p.orientation.w = 1.0
161 |
162 | # Convert reference frame orientation from -> to
163 | pose_orientation = convert_pose(p, to_reference_frame, from_reference_frame)
164 |
165 | # Copy orientation to pose.
166 | from_pose.orientation = pose_orientation.orientation
167 |
168 | return from_pose
169 |
--------------------------------------------------------------------------------
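
`convert_pose` looks up the tf2 transform between two frames (creating the buffer and listener on first use) and applies it to a pose. A short sketch using the frame names from the grasping scripts; it assumes the robot driver and camera TFs are being published, and the pose values are illustrative only:

```python
import rospy
import geometry_msgs.msg as gmsg

from helpers.transforms import convert_pose, current_robot_pose

rospy.init_node('transforms_example')  # The tf2 listener needs an initialised node.

# An illustrative grasp pose, 0.4 m in front of the camera.
gp = gmsg.Pose()
gp.position.z = 0.40
gp.orientation.w = 1.0

# Express the same pose in the robot base frame, as the grasping scripts do.
gp_base = convert_pose(gp, 'camera_depth_optical_frame', 'm1n6s200_link_base')

# Pose of the end effector expressed in the base frame.
ee_pose = current_robot_pose('m1n6s200_link_base', 'm1n6s200_end_effector')
```
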
/ggcnn_kinova_grasping/scripts/kinova_closed_loop_grasp.py:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env python
2 |
3 | import rospy
4 |
5 | import numpy as np
6 |
7 | import kinova_msgs.srv
8 | import std_msgs.msg
9 | import geometry_msgs.msg
10 |
11 | import tf.transformations as tft
12 |
13 | from helpers.transforms import current_robot_pose, publish_tf_quaterion_as_transform, convert_pose, publish_pose_as_transform
14 | from helpers.covariance import generate_cartesian_covariance
15 |
16 | from helpers.gripper_action_client import set_finger_positions
17 | from helpers.position_action_client import move_to_position
18 |
19 | MAX_VELO_X = 0.25
20 | MAX_VELO_Y = 0.15
21 | MAX_VELO_Z = 0.085
22 | MAX_ROTATION = 1.5
23 | CURRENT_VELOCITY = [0, 0, 0, 0, 0, 0]
24 | CURRENT_FINGER_VELOCITY = [0, 0, 0]
25 |
26 | MIN_Z = 0.01
27 | CURR_Z = 0.35
28 | CURR_FORCE = 0.0
29 | GOAL_Z = 0.0
30 |
31 | VELO_COV = generate_cartesian_covariance(0)
32 |
33 | GRIP_WIDTH_MM = 70
34 | CURR_DEPTH = 350 # Depth measured from camera.
35 |
36 | SERVO = False
37 |
38 |
39 | class Averager():
40 | def __init__(self, inputs, time_steps):
41 | self.buffer = np.zeros((time_steps, inputs))
42 | self.steps = time_steps
43 | self.curr = 0
44 | self.been_reset = True
45 |
46 | def update(self, v):
47 | if self.steps == 1:
48 | self.buffer = v
49 | return v
50 | self.buffer[self.curr, :] = v
51 | self.curr += 1
52 | if self.been_reset:
53 | self.been_reset = False
54 | while self.curr != 0:
55 | self.update(v)
56 | if self.curr >= self.steps:
57 | self.curr = 0
58 | return self.buffer.mean(axis=0)
59 |
60 | def evaluate(self):
61 | if self.steps == 1:
62 | return self.buffer
63 | return self.buffer.mean(axis=0)
64 |
65 | def reset(self):
66 | self.buffer *= 0
67 | self.curr = 0
68 | self.been_reset = True
69 |
70 |
71 | pose_averager = Averager(4, 3)
72 |
73 |
74 | def command_callback(msg):
75 | global SERVO
76 | global CURR_Z, MIN_Z
77 | global CURR_DEPTH
78 | global pose_averager
79 | global GOAL_Z
80 | global GRIP_WIDTH_MM
81 | global VELO_COV
82 |
83 | CURR_DEPTH = msg.data[5]
84 |
85 | if SERVO:
86 |
87 | d = list(msg.data)
88 |
89 | # PBVS Method.
90 |
91 | if d[2] > 0.150: # Min effective range of the realsense.
92 |
93 | # Convert width in pixels to mm.
94 | # 0.07 is distance from end effector (CURR_Z) to camera.
95 | # 0.1 is approx degrees per pixel for the realsense.
96 | if d[2] > 0.25:
97 | GRIP_WIDTH_PX = msg.data[4]
98 | GRIP_WIDTH_MM = 2 * ((CURR_Z + 0.07)) * np.tan(0.1 * GRIP_WIDTH_PX / 2.0 / 180.0 * np.pi) * 1000
99 |
100 | # Construct the Pose in the frame of the camera.
101 | gp = geometry_msgs.msg.Pose()
102 | gp.position.x = d[0]
103 | gp.position.y = d[1]
104 | gp.position.z = d[2]
105 | q = tft.quaternion_from_euler(0, 0, -1 * d[3])
106 | gp.orientation.x = q[0]
107 | gp.orientation.y = q[1]
108 | gp.orientation.z = q[2]
109 | gp.orientation.w = q[3]
110 |
111 | # Calculate Pose of Grasp in Robot Base Link Frame
112 | # Average over a few predicted poses to help combat noise.
113 | gp_base = convert_pose(gp, 'camera_depth_optical_frame', 'm1n6s200_link_base')
114 | gpbo = gp_base.orientation
115 | e = tft.euler_from_quaternion([gpbo.x, gpbo.y, gpbo.z, gpbo.w])
116 | # Only really care about rotation about z (e[2]).
117 | av = pose_averager.update(np.array([gp_base.position.x, gp_base.position.y, gp_base.position.z, e[2]]))
118 |
119 | else:
120 | gp_base = geometry_msgs.msg.Pose()
121 | av = pose_averager.evaluate()
122 |
123 | # Average pose in base frame.
124 | gp_base.position.x = av[0]
125 | gp_base.position.y = av[1]
126 | gp_base.position.z = av[2]
127 | GOAL_Z = av[2]
128 | ang = av[3] - np.pi/2 # We don't want to align, we want to grip.
129 | q = tft.quaternion_from_euler(np.pi, 0, ang)
130 | gp_base.orientation.x = q[0]
131 | gp_base.orientation.y = q[1]
132 | gp_base.orientation.z = q[2]
133 | gp_base.orientation.w = q[3]
134 |
135 |         # Get the Position of the End Effector in the frame of the Robot Base Link
136 | g_pose = geometry_msgs.msg.Pose()
137 | g_pose.position.z = 0.03 # Offset from the end_effector frame to the actual position of the fingers.
138 | g_pose.orientation.w = 1
139 | p_gripper = convert_pose(g_pose, 'm1n6s200_end_effector', 'm1n6s200_link_base')
140 |
141 | publish_pose_as_transform(gp_base, 'm1n6s200_link_base', 'G', 0.0)
142 |
143 | # Calculate Position Error.
144 | dx = (gp_base.position.x - p_gripper.position.x)
145 | dy = (gp_base.position.y - p_gripper.position.y)
146 | dz = (gp_base.position.z - p_gripper.position.z)
147 |
148 | # Orientation velocity control is done in the frame of the gripper,
149 | # so figure out the rotation offset in the end effector frame.
150 | gp_gripper = convert_pose(gp_base, 'm1n6s200_link_base', 'm1n6s200_end_effector')
151 | pgo = gp_gripper.orientation
152 | q1 = [pgo.x, pgo.y, pgo.z, pgo.w]
153 | e = tft.euler_from_quaternion(q1)
154 | dr = 1 * e[0]
155 | dp = 1 * e[1]
156 | dyaw = 1 * e[2]
157 |
158 | vx = max(min(dx * 2.5, MAX_VELO_X), -1.0*MAX_VELO_X)
159 | vy = max(min(dy * 2.5, MAX_VELO_Y), -1.0*MAX_VELO_Y)
160 | vz = max(min(dz - 0.04, MAX_VELO_Z), -1.0*MAX_VELO_Z)
161 |
162 | # Apply a nonlinearity to the velocity
163 | v = np.array([vx, vy, vz])
164 | vc = np.dot(v, VELO_COV)
165 |
166 | CURRENT_VELOCITY[0] = vc[0]
167 | CURRENT_VELOCITY[1] = vc[1]
168 | CURRENT_VELOCITY[2] = vc[2]
169 |
170 | CURRENT_VELOCITY[3] = -1 * dp
171 | CURRENT_VELOCITY[4] = 1 * dr
172 | CURRENT_VELOCITY[5] = max(min(1 * dyaw, MAX_ROTATION), -1 * MAX_ROTATION)
173 |
174 |
175 | def robot_wrench_callback(msg):
176 | # Monitor force on the end effector, with some smoothing.
177 | global CURR_FORCE
178 | CURR_FORCE = 0.5 * msg.wrench.force.z + 0.5 * CURR_FORCE
179 |
180 |
181 | def finger_position_callback(msg):
182 | global SERVO
183 | global CURRENT_FINGER_VELOCITY
184 | global CURR_DEPTH
185 | global CURR_Z
186 | global GRIP_WIDTH_MM
187 |
188 | # Only move the fingers when we're 200mm from the table and servoing.
189 | if CURR_Z < 0.200 and CURR_DEPTH > 80 and SERVO:
190 | # 4000 ~= 70mm
191 | g = min((1 - (min(GRIP_WIDTH_MM, 70)/70)) * (6800-4000) + 4000, 5500)
192 |
193 | # Move fast from fully open.
194 | gain = 2
195 | if CURR_Z > 0.12:
196 | gain = 5
197 |
198 | err = gain * (g - msg.finger1)
199 | CURRENT_FINGER_VELOCITY = [err, err, 0]
200 |
201 | else:
202 | CURRENT_FINGER_VELOCITY = [0, 0, 0]
203 |
204 |
205 | def robot_position_callback(msg):
206 | global SERVO
207 | global CURR_Z
208 | global CURR_DEPTH
209 | global CURR_FORCE
210 | global VELO_COV
211 | global pose_averager
212 | global start_record_srv
213 | global stop_record_srv
214 |
215 | CURR_Z = msg.pose.position.z
216 |
217 | # Stop Conditions.
218 | if CURR_Z < MIN_Z or (CURR_Z - 0.01) < GOAL_Z or CURR_FORCE < -5.0:
219 | if SERVO:
220 | SERVO = False
221 |
222 | # Grip.
223 | rospy.sleep(0.1)
224 | set_finger_positions([8000, 8000])
225 | rospy.sleep(0.5)
226 |
227 | # Move Home.
228 | move_to_position([0, -0.38, 0.35], [0.99, 0, 0, np.sqrt(1-0.99**2)])
229 | rospy.sleep(0.25)
230 |
231 | # stop_record_srv(std_srvs.srv.TriggerRequest())
232 |
233 | raw_input('Press Enter to Complete')
234 |
235 | # Generate a control nonlinearity for this run.
236 | VELO_COV = generate_cartesian_covariance(0.0)
237 |
238 | # Open Fingers
239 | set_finger_positions([0, 0])
240 | rospy.sleep(1.0)
241 |
242 | pose_averager.reset()
243 |
244 | raw_input('Press Enter to Start')
245 |
246 | # start_record_srv(std_srvs.srv.TriggerRequest())
247 | rospy.sleep(0.5)
248 | SERVO = True
249 |
250 |
251 | if __name__ == '__main__':
252 | rospy.init_node('kinova_velocity_control')
253 |
254 | position_sub = rospy.Subscriber('/m1n6s200_driver/out/tool_pose', geometry_msgs.msg.PoseStamped, robot_position_callback, queue_size=1)
255 | finger_sub = rospy.Subscriber('/m1n6s200_driver/out/finger_position', kinova_msgs.msg.FingerPosition, finger_position_callback, queue_size=1)
256 | wrench_sub = rospy.Subscriber('/m1n6s200_driver/out/tool_wrench', geometry_msgs.msg.WrenchStamped, robot_wrench_callback, queue_size=1)
257 | command_sub = rospy.Subscriber('/ggcnn/out/command', std_msgs.msg.Float32MultiArray, command_callback, queue_size=1)
258 |
259 | # https://github.com/dougsm/rosbag_recording_services
260 | # start_record_srv = rospy.ServiceProxy('/data_recording/start_recording', std_srvs.srv.Trigger)
261 | # stop_record_srv = rospy.ServiceProxy('/data_recording/stop_recording', std_srvs.srv.Trigger)
262 |
263 | start_force_srv = rospy.ServiceProxy('/m1n6s200_driver/in/start_force_control', kinova_msgs.srv.Start)
264 | start_force_srv.call(kinova_msgs.srv.StartRequest())
265 |
266 | # Publish velocity at 100Hz.
267 | velo_pub = rospy.Publisher('/m1n6s200_driver/in/cartesian_velocity', kinova_msgs.msg.PoseVelocity, queue_size=1)
268 | finger_pub = rospy.Publisher('/m1n6s200_driver/in/finger_velocity', kinova_msgs.msg.FingerPosition, queue_size=1)
269 | r = rospy.Rate(100)
270 |
271 | move_to_position([0, -0.38, 0.35], [0.99, 0, 0, np.sqrt(1-0.99**2)])
272 | rospy.sleep(0.5)
273 | set_finger_positions([0, 0])
274 | rospy.sleep(0.5)
275 |
276 | SERVO = True
277 |
278 | while not rospy.is_shutdown():
279 | if SERVO:
280 | finger_pub.publish(kinova_msgs.msg.FingerPosition(*CURRENT_FINGER_VELOCITY))
281 | velo_pub.publish(kinova_msgs.msg.PoseVelocity(*CURRENT_VELOCITY))
282 | r.sleep()
283 |
--------------------------------------------------------------------------------
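
In `command_callback` above, the grasp width predicted by GG-CNN (in pixels) is converted to millimetres using two commented constants: roughly 0.1 degrees of field of view per pixel for the RealSense, and a 0.07 m offset from the end effector height (`CURR_Z`) to the camera. A worked sketch of that same expression with illustrative numbers (60 px width, end effector 0.30 m above the base):

```python
import numpy as np

# Illustrative inputs, not values from a real run.
grip_width_px = 60.0
curr_z = 0.30

# Same expression as in command_callback(): half-angle subtended by the width
# (0.1 deg/px), converted to a physical width at the camera distance, in mm.
grip_width_mm = 2 * (curr_z + 0.07) * np.tan(0.1 * grip_width_px / 2.0 / 180.0 * np.pi) * 1000
print(round(grip_width_mm, 1))  # ~38.8 mm for these example values.
```
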
/ggcnn_kinova_grasping/scripts/kinova_open_loop_grasp.py:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env python
2 |
3 | import rospy
4 | import tf.transformations as tft
5 |
6 | import numpy as np
7 |
8 | import kinova_msgs.msg
9 | import kinova_msgs.srv
10 | import std_msgs.msg
11 | import std_srvs.srv
12 | import geometry_msgs.msg
13 |
14 | from helpers.gripper_action_client import set_finger_positions
15 | from helpers.position_action_client import position_client, move_to_position
16 | from helpers.transforms import current_robot_pose, publish_tf_quaterion_as_transform, convert_pose, publish_pose_as_transform
17 | from helpers.covariance import generate_cartesian_covariance
18 |
19 | MOVING = False # Flag whether the robot is moving under velocity control.
20 | CURR_Z = 0 # Current end-effector z height.
21 |
22 |
23 | def robot_wrench_callback(msg):
24 | # Monitor wrench to cancel movement on collision.
25 | global MOVING
26 | if MOVING and msg.wrench.force.z < -2.0:
27 | MOVING = False
28 | rospy.logerr('Force Detected. Stopping.')
29 |
30 |
31 | def robot_position_callback(msg):
32 | # Monitor robot position.
33 | global CURR_Z
34 | CURR_Z = msg.pose.position.z
35 |
36 |
37 | def move_to_pose(pose):
38 | # Wrapper for move to position.
39 | p = pose.position
40 | o = pose.orientation
41 | move_to_position([p.x, p.y, p.z], [o.x, o.y, o.z, o.w])
42 |
43 |
44 | def execute_grasp():
45 | # Execute a grasp.
46 | global MOVING
47 | global CURR_Z
48 | global start_force_srv
49 | global stop_force_srv
50 |
51 | # Get the positions.
52 | msg = rospy.wait_for_message('/ggcnn/out/command', std_msgs.msg.Float32MultiArray)
53 | d = list(msg.data)
54 |
55 | # Calculate the gripper width.
56 | grip_width = d[4]
57 | # Convert width in pixels to mm.
58 | # 0.07 is distance from end effector (CURR_Z) to camera.
59 | # 0.1 is approx degrees per pixel for the realsense.
60 | g_width = 2 * ((CURR_Z + 0.07)) * np.tan(0.1 * grip_width / 2.0 / 180.0 * np.pi) * 1000
61 | # Convert into motor positions.
62 | g = min((1 - (min(g_width, 70)/70)) * (6800-4000) + 4000, 5500)
63 | set_finger_positions([g, g])
64 |
65 | rospy.sleep(0.5)
66 |
67 | # Pose of the grasp (position only) in the camera frame.
68 | gp = geometry_msgs.msg.Pose()
69 | gp.position.x = d[0]
70 | gp.position.y = d[1]
71 | gp.position.z = d[2]
72 | gp.orientation.w = 1
73 |
74 | # Convert to base frame, add the angle in (ensures planar grasp, camera isn't guaranteed to be perpendicular).
75 | gp_base = convert_pose(gp, 'camera_depth_optical_frame', 'm1n6s200_link_base')
76 |
77 | q = tft.quaternion_from_euler(np.pi, 0, d[3])
78 | gp_base.orientation.x = q[0]
79 | gp_base.orientation.y = q[1]
80 | gp_base.orientation.z = q[2]
81 | gp_base.orientation.w = q[3]
82 |
83 | publish_pose_as_transform(gp_base, 'm1n6s200_link_base', 'G', 0.5)
84 |
85 | # Offset for initial pose.
86 | initial_offset = 0.20
87 | gp_base.position.z += initial_offset
88 |
89 | # Disable force control, makes the robot more accurate.
90 | stop_force_srv.call(kinova_msgs.srv.StopRequest())
91 |
92 | move_to_pose(gp_base)
93 | rospy.sleep(0.1)
94 |
95 | # Start force control, helps prevent bad collisions.
96 | start_force_srv.call(kinova_msgs.srv.StartRequest())
97 |
98 | rospy.sleep(0.25)
99 |
100 | # Reset the position
101 | gp_base.position.z -= initial_offset
102 |
103 | # Flag to check for collisions.
104 | MOVING = True
105 |
106 | # Generate a nonlinearity for the controller.
107 | cart_cov = generate_cartesian_covariance(0)
108 |
109 | # Move straight down under velocity control.
110 | velo_pub = rospy.Publisher('/m1n6s200_driver/in/cartesian_velocity', kinova_msgs.msg.PoseVelocity, queue_size=1)
111 | while MOVING and CURR_Z - 0.02 > gp_base.position.z:
112 | dz = gp_base.position.z - CURR_Z - 0.03 # Offset by a few cm for the fingertips.
113 | MAX_VELO_Z = 0.08
114 | dz = max(min(dz, MAX_VELO_Z), -1.0*MAX_VELO_Z)
115 |
116 | v = np.array([0, 0, dz])
117 | vc = list(np.dot(v, cart_cov)) + [0, 0, 0]
118 | velo_pub.publish(kinova_msgs.msg.PoseVelocity(*vc))
119 | rospy.sleep(1/100.0)
120 |
121 | MOVING = False
122 |
123 | # close the fingers.
124 | rospy.sleep(0.1)
125 | set_finger_positions([8000, 8000])
126 | rospy.sleep(0.5)
127 |
128 | # Move back up to initial position.
129 | gp_base.position.z += initial_offset
130 | gp_base.orientation.x = 1
131 | gp_base.orientation.y = 0
132 | gp_base.orientation.z = 0
133 | gp_base.orientation.w = 0
134 | move_to_pose(gp_base)
135 |
136 | stop_force_srv.call(kinova_msgs.srv.StopRequest())
137 |
138 | return
139 |
140 |
141 | if __name__ == '__main__':
142 | rospy.init_node('ggcnn_open_loop_grasp')
143 |
144 | # Robot Monitors.
145 | wrench_sub = rospy.Subscriber('/m1n6s200_driver/out/tool_wrench', geometry_msgs.msg.WrenchStamped, robot_wrench_callback, queue_size=1)
146 | position_sub = rospy.Subscriber('/m1n6s200_driver/out/tool_pose', geometry_msgs.msg.PoseStamped, robot_position_callback, queue_size=1)
147 |
148 | # https://github.com/dougsm/rosbag_recording_services
149 | # start_record_srv = rospy.ServiceProxy('/data_recording/start_recording', std_srvs.srv.Trigger)
150 | # stop_record_srv = rospy.ServiceProxy('/data_recording/stop_recording', std_srvs.srv.Trigger)
151 |
152 | # Enable/disable force control.
153 | start_force_srv = rospy.ServiceProxy('/m1n6s200_driver/in/start_force_control', kinova_msgs.srv.Start)
154 | stop_force_srv = rospy.ServiceProxy('/m1n6s200_driver/in/stop_force_control', kinova_msgs.srv.Stop)
155 |
156 | # Home position.
157 | move_to_position([0, -0.38, 0.25], [0.99, 0, 0, np.sqrt(1-0.99**2)])
158 |
159 | while not rospy.is_shutdown():
160 |
161 | rospy.sleep(0.5)
162 | set_finger_positions([0, 0])
163 | rospy.sleep(0.5)
164 |
165 | raw_input('Press Enter to Start.')
166 |
167 | # start_record_srv(std_srvs.srv.TriggerRequest())
168 | rospy.sleep(0.5)
169 | execute_grasp()
170 | move_to_position([0, -0.38, 0.25], [0.99, 0, 0, np.sqrt(1-0.99**2)])
171 | rospy.sleep(0.5)
172 | # stop_record_srv(std_srvs.srv.TriggerRequest())
173 |
174 | raw_input('Press Enter to Complete')
175 |
--------------------------------------------------------------------------------
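
`execute_grasp` pre-positions the fingers before descending by mapping the estimated width in millimetres onto Kinova finger motor counts, using the commented rule of thumb that roughly 4000 counts corresponds to a 70 mm opening, and clamping the result at 5500. A small worked sketch of that mapping; the helper name is for illustration only:

```python
# Same expression as in execute_grasp(): map a width in mm (capped at 70 mm)
# onto the 4000 (open, ~70 mm) to 6800 (closed) motor-count range, then clamp
# the result at 5500.
def width_mm_to_finger_counts(g_width):
    return min((1 - (min(g_width, 70) / 70.0)) * (6800 - 4000) + 4000, 5500)

print(width_mm_to_finger_counts(70))  # 4000.0 -> fully open pre-grasp.
print(width_mm_to_finger_counts(35))  # 5400.0
print(width_mm_to_finger_counts(10))  # 5500 (clamped at the maximum).
```
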
/ggcnn_kinova_grasping/scripts/run_ggcnn.py:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env python
2 |
3 | import time
4 |
5 | import numpy as np
6 | import tensorflow as tf
7 | from keras.models import load_model
8 |
9 | import cv2
10 | import scipy.ndimage as ndimage
11 | from skimage.draw import circle
12 | from skimage.feature import peak_local_max
13 |
14 | import rospy
15 | from cv_bridge import CvBridge
16 | from geometry_msgs.msg import PoseStamped
17 | from sensor_msgs.msg import Image, CameraInfo
18 | from std_msgs.msg import Float32MultiArray
19 |
20 | bridge = CvBridge()
21 |
22 | # Load the Network.
23 | MODEL_FILE = 'PATH/TO/model.hdf5'
24 | model = load_model(MODEL_FILE)
25 |
26 | rospy.init_node('ggcnn_detection')
27 |
28 | # Output publishers.
29 | grasp_pub = rospy.Publisher('ggcnn/img/grasp', Image, queue_size=1)
30 | grasp_plain_pub = rospy.Publisher('ggcnn/img/grasp_plain', Image, queue_size=1)
31 | depth_pub = rospy.Publisher('ggcnn/img/depth', Image, queue_size=1)
32 | ang_pub = rospy.Publisher('ggcnn/img/ang', Image, queue_size=1)
33 | cmd_pub = rospy.Publisher('ggcnn/out/command', Float32MultiArray, queue_size=1)
34 |
35 | # Initialise some globals.
36 | prev_mp = np.array([150, 150])
37 | ROBOT_Z = 0
38 |
39 | # Tensorflow graph to allow use in callback.
40 | graph = tf.get_default_graph()
41 |
42 | # Get the camera parameters
43 | camera_info_msg = rospy.wait_for_message('/camera/depth/camera_info', CameraInfo)
44 | K = camera_info_msg.K
45 | fx = K[0]
46 | cx = K[2]
47 | fy = K[4]
48 | cy = K[5]
49 |
50 |
51 | # Execution Timing
52 | class TimeIt:
53 | def __init__(self, s):
54 | self.s = s
55 | self.t0 = None
56 | self.t1 = None
57 | self.print_output = False
58 |
59 | def __enter__(self):
60 | self.t0 = time.time()
61 |
62 | def __exit__(self, t, value, traceback):
63 | self.t1 = time.time()
64 | if self.print_output:
65 | print('%s: %s' % (self.s, self.t1 - self.t0))
66 |
67 |
68 | def robot_pos_callback(data):
69 | global ROBOT_Z
70 | ROBOT_Z = data.pose.position.z
71 |
72 |
73 | def depth_callback(depth_message):
74 | global model
75 | global graph
76 | global prev_mp
77 | global ROBOT_Z
78 | global fx, cx, fy, cy
79 |
80 | with TimeIt('Crop'):
81 | depth = bridge.imgmsg_to_cv2(depth_message)
82 |
83 | # Crop a square out of the middle of the depth and resize it to 300*300
84 | crop_size = 400
85 | depth_crop = cv2.resize(depth[(480-crop_size)//2:(480-crop_size)//2+crop_size, (640-crop_size)//2:(640-crop_size)//2+crop_size], (300, 300))
86 |
87 | # Replace nan with 0 for inpainting.
88 | depth_crop = depth_crop.copy()
89 | depth_nan = np.isnan(depth_crop).copy()
90 | depth_crop[depth_nan] = 0
91 |
92 | with TimeIt('Inpaint'):
93 | # open cv inpainting does weird things at the border.
94 | depth_crop = cv2.copyMakeBorder(depth_crop, 1, 1, 1, 1, cv2.BORDER_DEFAULT)
95 |
96 | mask = (depth_crop == 0).astype(np.uint8)
97 | # Scale to keep as float, but has to be in bounds -1:1 to keep opencv happy.
98 | depth_scale = np.abs(depth_crop).max()
99 | depth_crop = depth_crop.astype(np.float32)/depth_scale # Has to be float32, 64 not supported.
100 |
101 | depth_crop = cv2.inpaint(depth_crop, mask, 1, cv2.INPAINT_NS)
102 |
103 | # Back to original size and value range.
104 | depth_crop = depth_crop[1:-1, 1:-1]
105 | depth_crop = depth_crop * depth_scale
106 |
107 | with TimeIt('Calculate Depth'):
108 | # Figure out roughly the depth in mm of the part between the grippers for collision avoidance.
109 | depth_center = depth_crop[100:141, 130:171].flatten()
110 | depth_center.sort()
111 | depth_center = depth_center[:10].mean() * 1000.0
112 |
113 | with TimeIt('Inference'):
114 | # Run it through the network.
115 | depth_crop = np.clip((depth_crop - depth_crop.mean()), -1, 1)
116 | with graph.as_default():
117 | pred_out = model.predict(depth_crop.reshape((1, 300, 300, 1)))
118 |
119 | points_out = pred_out[0].squeeze()
120 | points_out[depth_nan] = 0
121 |
122 | with TimeIt('Trig'):
123 | # Calculate the angle map.
124 | cos_out = pred_out[1].squeeze()
125 | sin_out = pred_out[2].squeeze()
126 | ang_out = np.arctan2(sin_out, cos_out)/2.0
127 |
128 | width_out = pred_out[3].squeeze() * 150.0 # Scaled 0-150:0-1
129 |
130 | with TimeIt('Filter'):
131 | # Filter the outputs.
132 | points_out = ndimage.filters.gaussian_filter(points_out, 5.0) # 3.0
133 | ang_out = ndimage.filters.gaussian_filter(ang_out, 2.0)
134 |
135 | with TimeIt('Control'):
136 | # Calculate the best pose from the camera intrinsics.
137 | maxes = None
138 |
139 | ALWAYS_MAX = False # Use ALWAYS_MAX = True for the open-loop solution.
140 |
141 | if ROBOT_Z > 0.34 or ALWAYS_MAX: # > 0.34 initialises the max tracking when the robot is reset.
142 | # Track the global max.
143 | max_pixel = np.array(np.unravel_index(np.argmax(points_out), points_out.shape))
144 | prev_mp = max_pixel.astype(np.int)
145 | else:
146 |             # Calculate a set of local maxes. Choose the one that is closest to the previous one.
147 | maxes = peak_local_max(points_out, min_distance=10, threshold_abs=0.1, num_peaks=3)
148 | if maxes.shape[0] == 0:
149 | return
150 | max_pixel = maxes[np.argmin(np.linalg.norm(maxes - prev_mp, axis=1))]
151 |
152 | # Keep a global copy for next iteration.
153 | prev_mp = (max_pixel * 0.25 + prev_mp * 0.75).astype(np.int)
154 |
155 | ang = ang_out[max_pixel[0], max_pixel[1]]
156 | width = width_out[max_pixel[0], max_pixel[1]]
157 |
158 | # Convert max_pixel back to uncropped/resized image coordinates in order to do the camera transform.
159 | max_pixel = ((np.array(max_pixel) / 300.0 * crop_size) + np.array([(480 - crop_size)//2, (640 - crop_size) // 2]))
160 | max_pixel = np.round(max_pixel).astype(np.int)
161 |
162 | point_depth = depth[max_pixel[0], max_pixel[1]]
163 |
164 | # These magic numbers are my camera intrinsic parameters.
165 | x = (max_pixel[1] - cx)/(fx) * point_depth
166 | y = (max_pixel[0] - cy)/(fy) * point_depth
167 | z = point_depth
168 |
169 | if np.isnan(z):
170 | return
171 |
172 | with TimeIt('Draw'):
173 | # Draw grasp markers on the points_out and publish it. (for visualisation)
174 | grasp_img = np.zeros((300, 300, 3), dtype=np.uint8)
175 | grasp_img[:,:,2] = (points_out * 255.0)
176 |
177 | grasp_img_plain = grasp_img.copy()
178 |
179 | rr, cc = circle(prev_mp[0], prev_mp[1], 5)
180 | grasp_img[rr, cc, 0] = 0
181 | grasp_img[rr, cc, 1] = 255
182 | grasp_img[rr, cc, 2] = 0
183 |
184 | with TimeIt('Publish'):
185 | # Publish the output images (not used for control, only visualisation)
186 | grasp_img = bridge.cv2_to_imgmsg(grasp_img, 'bgr8')
187 | grasp_img.header = depth_message.header
188 | grasp_pub.publish(grasp_img)
189 |
190 | grasp_img_plain = bridge.cv2_to_imgmsg(grasp_img_plain, 'bgr8')
191 | grasp_img_plain.header = depth_message.header
192 | grasp_plain_pub.publish(grasp_img_plain)
193 |
194 | depth_pub.publish(bridge.cv2_to_imgmsg(depth_crop))
195 |
196 | ang_pub.publish(bridge.cv2_to_imgmsg(ang_out))
197 |
198 | # Output the best grasp pose relative to camera.
199 | cmd_msg = Float32MultiArray()
200 | cmd_msg.data = [x, y, z, ang, width, depth_center]
201 | cmd_pub.publish(cmd_msg)
202 |
203 |
204 | depth_sub = rospy.Subscriber('/camera/depth/image_meters', Image, depth_callback, queue_size=1)
205 | robot_pos_sub = rospy.Subscriber('/m1n6s200_driver/out/tool_pose', PoseStamped, robot_pos_callback, queue_size=1)
206 |
207 | while not rospy.is_shutdown():
208 | rospy.spin()
209 |
--------------------------------------------------------------------------------
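
The node above publishes the selected grasp on `ggcnn/out/command` as a six-element `Float32MultiArray`: the position (x, y, z) of the grasp point in the camera frame, the grasp angle, the grasp width in pixels, and the depth in millimetres of the region between the fingers (`depth_center`). A minimal sketch of a consumer that unpacks the message the same way the grasping scripts do; the node name is an arbitrary example:

```python
#! /usr/bin/env python
import rospy
from std_msgs.msg import Float32MultiArray


def command_callback(msg):
    # Layout published by run_ggcnn.py: [x, y, z, ang, width, depth_center].
    x, y, z, ang, width_px, depth_centre_mm = msg.data
    rospy.loginfo('Grasp at (%.3f, %.3f, %.3f) m, angle %.2f rad, width %.1f px'
                  % (x, y, z, ang, width_px))


rospy.init_node('ggcnn_command_listener')
rospy.Subscriber('/ggcnn/out/command', Float32MultiArray, command_callback, queue_size=1)
rospy.spin()
```
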
/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy
2 | scipy
3 | Pillow
4 | scikit-image
5 | tensorflow-gpu
6 | keras
7 | matplotlib
8 | opencv-python
9 | h5py
--------------------------------------------------------------------------------