├── EE 106A Final Project Proposal.pdf
├── README.md
├── ball_tracker
│   ├── CMakeLists.txt
│   ├── package.xml
│   └── src
│       ├── ball_tracker.py
│       ├── ball_trajectory_regress.py
│       └── range_detector.py
├── ball_tracking
│   ├── ball_tracking.py
│   ├── init_test_yellow_ball.mp4
│   ├── init_test_yellow_ball_output.mp4
│   └── trajectory_planning.py
├── docs
│   └── index.md
└── photos
    ├── 106a_rqt_tf_tree.png
    ├── 20171206_202720.jpg
    ├── 20171206_202836.jpg
    ├── 20171206_202846.jpg
    ├── Ball_Arc.jpg
    ├── Team_Photo.JPG
    ├── coding.jpg
    ├── control_diagram.png
    ├── cv_graphic.png
    ├── design.jpg
    ├── detect.jpg
    ├── end_effector.jpg
    ├── kinect.jpg
    ├── kinect2.jpg
    ├── link.jpg
    ├── robot_trajectory.gif
    ├── tracker.jpg
    ├── velcro.jpg
    └── velcro_ball.jpg

/EE 106A Final Project Proposal.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/EE 106A Final Project Proposal.pdf
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Ball Tracker Vision for Kinect2 and Robot Arm
A ball-tracking vision ROS package that uses a Kinect2 sensor to track a ball in 3D space, applies a Kalman filter to predict the ball's trajectory, and then actuates a robot arm with model predictive control to catch the ball.


### Dependencies
#### Hardware
* kinect2 sensor

#### Software
* ROS
* OpenKinect/freenect2
* iai_kinect/kinect2_bridge

---

### Run
(In separate terminal windows)

* roslaunch kinect2_bridge kinect2_bridge.launch
* rosrun tf static_transform_publisher 0.0 0.0 0.0 0.0 0.0 0.0 1.0 world kinect2_rgb_optical_frame 100
* rosrun ball_tracker ball_tracker.py
* rosrun ball_tracker ball_kalman_filter.py

#### Check Successful Tracking
* rosrun rqt_tf_tree rqt_tf_tree
* rosrun rviz rviz
* add tf (ball)
* add pointcloud2 (kinect2_rgb(or ir)_optical_frame)

---

#### Features to note for Similar Projects:
* Read from a PointCloud2 message via the pc2.read_points() function
* Use message_filters.TimeSynchronizer() to synchronize messages arriving on different topics at the same frequency (see the sketch below)
* Broadcast a TF frame and create a TF listener
* Bitmask frames and analyze subwindow contours via OpenCV
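A minimal sketch of the synchronized-subscriber pattern from the bullets above, mirroring what ball_tracker.py does (topic names and queue size are copied from that script; the callback body is only a placeholder):

    #!/usr/bin/env python
    import rospy
    import message_filters
    from sensor_msgs.msg import Image, PointCloud2

    def callback(rgb_img, cloud):
        # Both messages share the same header.stamp, so a pixel (u, v)
        # found in rgb_img can be looked up directly in the matching cloud.
        rospy.loginfo("synced pair at t=%s", rgb_img.header.stamp)

    rospy.init_node("sync_example")
    rgb_sub = message_filters.Subscriber("/kinect2/qhd/image_color_rect", Image)
    cloud_sub = message_filters.Subscriber("/kinect2/qhd/points", PointCloud2)
    # TimeSynchronizer fires the callback only when messages on all topics
    # carry an identical timestamp; 8 is the queue size ball_tracker.py uses.
    sync = message_filters.TimeSynchronizer([rgb_sub, cloud_sub], 8)
    sync.registerCallback(callback)
    rospy.spin()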
--------------------------------------------------------------------------------
/ball_tracker/CMakeLists.txt:
--------------------------------------------------------------------------------
cmake_minimum_required(VERSION 2.8.3)
project(ball_tracker)

## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)

## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
  geometry_msgs
  roscpp
  rospy
  sensor_msgs
  std_msgs
)

## System dependencies are found with CMake's conventions
# find_package(Boost REQUIRED COMPONENTS system)


## Uncomment this if the package has a setup.py. This macro ensures
## modules and global scripts declared therein get installed
## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
# catkin_python_setup()

################################################
## Declare ROS messages, services and actions ##
################################################

## To declare and build messages, services or actions from within this
## package, follow these steps:
## * Let MSG_DEP_SET be the set of packages whose message types you use in
##   your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
## * In the file package.xml:
##   * add a build_depend tag for "message_generation"
##   * add a build_depend and a run_depend tag for each package in MSG_DEP_SET
##   * If MSG_DEP_SET isn't empty the following dependency has been pulled in
##     but can be declared for certainty nonetheless:
##     * add a run_depend tag for "message_runtime"
## * In this file (CMakeLists.txt):
##   * add "message_generation" and every package in MSG_DEP_SET to
##     find_package(catkin REQUIRED COMPONENTS ...)
##   * add "message_runtime" and every package in MSG_DEP_SET to
##     catkin_package(CATKIN_DEPENDS ...)
##   * uncomment the add_*_files sections below as needed
##     and list every .msg/.srv/.action file to be processed
##   * uncomment the generate_messages entry below
##   * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)

## Generate messages in the 'msg' folder
# add_message_files(
#   FILES
#   Message1.msg
#   Message2.msg
# )

## Generate services in the 'srv' folder
# add_service_files(
#   FILES
#   Service1.srv
#   Service2.srv
# )

## Generate actions in the 'action' folder
# add_action_files(
#   FILES
#   Action1.action
#   Action2.action
# )

## Generate added messages and services with any dependencies listed here
# generate_messages(
#   DEPENDENCIES
#   geometry_msgs#   sensor_msgs#   std_msgs
# )

################################################
## Declare ROS dynamic reconfigure parameters ##
################################################

## To declare and build dynamic reconfigure parameters within this
## package, follow these steps:
## * In the file package.xml:
##   * add a build_depend and a run_depend tag for "dynamic_reconfigure"
## * In this file (CMakeLists.txt):
##   * add "dynamic_reconfigure" to
##     find_package(catkin REQUIRED COMPONENTS ...)
##   * uncomment the "generate_dynamic_reconfigure_options" section below
##     and list every .cfg file to be processed

## Generate dynamic reconfigure parameters in the 'cfg' folder
# generate_dynamic_reconfigure_options(
#   cfg/DynReconf1.cfg
#   cfg/DynReconf2.cfg
# )

###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
#  INCLUDE_DIRS include
#  LIBRARIES ball_tracker
#  CATKIN_DEPENDS geometry_msgs roscpp rospy sensor_msgs std_msgs
#  DEPENDS system_lib
)

###########
## Build ##
###########

## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
# include
  ${catkin_INCLUDE_DIRS}
)

## Declare a C++ library
# add_library(${PROJECT_NAME}
#   src/${PROJECT_NAME}/ball_tracker.cpp
# )

## Add cmake target dependencies of the library
## as an example, code may need to be generated before libraries
## either from message generation or dynamic reconfigure
# add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})

## Declare a C++ executable
## With catkin_make all packages are built within a single CMake context
## The recommended prefix ensures that target names across packages don't collide
# add_executable(${PROJECT_NAME}_node src/ball_tracker_node.cpp)

## Rename C++ executable without prefix
## The above recommended prefix causes long target names, the following renames the
## target back to the shorter version for ease of user use
## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node"
# set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "")

## Add cmake target dependencies of the executable
## same as for the library above
# add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})

## Specify libraries to link a library or executable target against
# target_link_libraries(${PROJECT_NAME}_node
#   ${catkin_LIBRARIES}
# )

#############
## Install ##
#############

# all install targets should use catkin DESTINATION variables
# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html

## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination
# install(PROGRAMS
#   scripts/my_python_script
#   DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )

## Mark executables and/or libraries for installation
# install(TARGETS ${PROJECT_NAME} ${PROJECT_NAME}_node
#   ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
#   LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
#   RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )

## Mark cpp header files for installation
# install(DIRECTORY include/${PROJECT_NAME}/
#   DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
#   FILES_MATCHING PATTERN "*.h"
#   PATTERN ".svn" EXCLUDE
# )

## Mark other files for installation (e.g. launch and bag files, etc.)
# install(FILES
#   # myfile1
#   # myfile2
#   DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
# )

#############
## Testing ##
#############

## Add gtest based cpp test target and link libraries
# catkin_add_gtest(${PROJECT_NAME}-test test/test_ball_tracker.cpp)
# if(TARGET ${PROJECT_NAME}-test)
#   target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})
# endif()

## Add folders to be run by python nosetests
# catkin_add_nosetests(test)
--------------------------------------------------------------------------------
/ball_tracker/package.xml:
--------------------------------------------------------------------------------
<?xml version="1.0"?>
<package>
  <name>ball_tracker</name>
  <version>0.0.0</version>
  <description>The ball_tracker package</description>

  <maintainer>lxd20</maintainer>

  <license>TODO</license>

  <buildtool_depend>catkin</buildtool_depend>
  <build_depend>geometry_msgs</build_depend>
  <build_depend>roscpp</build_depend>
  <build_depend>rospy</build_depend>
  <build_depend>sensor_msgs</build_depend>
  <build_depend>std_msgs</build_depend>
  <run_depend>geometry_msgs</run_depend>
  <run_depend>roscpp</run_depend>
  <run_depend>rospy</run_depend>
  <run_depend>sensor_msgs</run_depend>
  <run_depend>std_msgs</run_depend>

  <export>
  </export>
</package>
--------------------------------------------------------------------------------
/ball_tracker/src/ball_tracker.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python

# import the necessary packages
import rospy
import message_filters

from sensor_msgs.msg import Image, PointCloud2, CameraInfo
import sensor_msgs.point_cloud2 as pc2
from cv_bridge import CvBridge, CvBridgeError
import tf

from collections import deque
import numpy as np
import argparse
import cv2
import matplotlib.pyplot as plt
from colorsys import rgb_to_hsv


class MaskBallTracker(object):

    def __init__(self):
        ### Initialization Parameters for color range for the ball
        # (HSV)
        self.GREEN_MIN = (45, 47, 47)
        self.GREEN_MAX = (90, 255, 255)

        self.PINK_MIN = (140, 98, 85)
        self.PINK_MAX = (165, 255, 255)
        # (HSV)
        self.YELLOW_MIN = (8, 50, 50)
        self.YELLOW_MAX = (45, 255, 255)
        # HSV for Tennis ball
        self.TENNIS_MIN = (30, 255, 135)
        self.TENNIS_MAX = (40, 255, 185)

        self.start_path_tracking = True
        self.cv_bridge = CvBridge()
        self.br = tf.TransformBroadcaster()
        self.transform_rotation = tf.transformations.quaternion_from_euler(0, 0, 0)


    def track_ball_rgb_uv(self, image):

        # print("Got image")
        frame = self.cv_bridge.imgmsg_to_cv2(image, "bgr8")
        # rospy.loginfo("Converted image to CV2")
        # self.show_cv2_image(frame)

        # blur cv2 frame, and convert it to the HSV
        # color space
        blurred = cv2.GaussianBlur(frame, (11, 11), 0)
        hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
        # self.show_cv2_image(hsv)

        # construct a mask for the desired color, then perform
        # a series of dilations and erosions to remove any small
        # blobs left in the mask
        green_mask = cv2.inRange(hsv, self.GREEN_MIN, self.GREEN_MAX)
        green_mask = cv2.erode(green_mask, None, iterations=1)
        green_mask = cv2.dilate(green_mask, None, iterations=1)

        pink_mask = cv2.inRange(hsv, self.PINK_MIN, self.PINK_MAX)
        pink_mask = cv2.erode(pink_mask, None, iterations=1)
        pink_mask = cv2.dilate(pink_mask, None, iterations=1)

        # self.show_cv2_image(mask)
        # self.show_cv2_image(pink_mask)
        mask = cv2.bitwise_or(green_mask, pink_mask)  # Combine the two color masks for the given ball
        # mask = green_mask
        # find contours in the mask and initialize the current
        # (x, y) center of the ball
        cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
            cv2.CHAIN_APPROX_SIMPLE)[-2]
        center = None

        # only proceed if at least one contour was found
        if len(cnts) > 0:
            # find the largest contour in the mask, then use
            # it to compute the minimum enclosing circle and
            # centroid
            c = max(cnts, key=cv2.contourArea)
            ((x, y), radius) = cv2.minEnclosingCircle(c)
            M = cv2.moments(c)
            center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

            # only proceed if the radius meets a minimum size
            if radius > 9:
                # draw the circle and centroid on the frame,
                # then update the list of tracked points
                # cv2.circle(frame, (int(x), int(y)), int(radius),
                #     (0, 255, 255), 2)
                # cv2.circle(frame, center, 5, (0, 0, 255), -1)

                # if not self.start_path_tracking:
                #     if raw_input("Initialize Path Tracking?: (y/n)") == "y":
                #         self.start_path_tracking = True

                if self.start_path_tracking:
                    u, v = center[0], center[1]

                    # self.show_cv2_image(frame)
                    return u, v

        # cleanup the camera and close any open windows
        cv2.destroyAllWindows()

        return None, None

    def track_ball_cam_coord_point_cloud(self, rgb_img, point_cloud):
        # Convert from rgb+rectified_image u,v to synchronized ir_point cloud X, Y, Z in the camera frame

        u, v = self.track_ball_rgb_uv(rgb_img)
        # print(u, v)

        # If the ball (u, v) pixel is found
        if u is not None:
            # Read point cloud for Camera Frame X, Y, Z for the pixel (u, v) found
            cam_x, cam_y, cam_z = self.get_cam_coord_point_cloud(u, v, point_cloud)
            if cam_x:
                # Broadcast X, Y, Z, timestamp as a TF
                self.tf_broadcast(cam_x, cam_y, cam_z, rgb_img.header.stamp)

    def get_cam_coord_point_cloud(self, u, v, point_cloud):
        # Read point cloud for Camera Frame X, Y, Z for the pixel (u, v) found
        data_out = pc2.read_points(point_cloud, field_names=("x","y","z"), skip_nans=True, uvs=[[u, v]])
        if data_out:
            try:
                cam_x, cam_y, cam_z = next(data_out)
            except Exception:
                return None, None, None
            return cam_x, cam_y, cam_z

    def publish_ball(self, x, y, z, ts):
        # May be used to just publish X, Y, Z, timestamp geometry msg to a topic
        pass

    def tf_broadcast(self, x, y, z, ts):
        self.br.sendTransform((x, y, z), self.transform_rotation, ts, "ball", "kinect2_rgb_optical_frame")

    @staticmethod
    def show_cv2_image(img):
        while True:
            cv2.imshow("Image window", img)
            if cv2.waitKey(33) == ord("a"):
                break


# Main function used to track ball (ROS Node)
def track_ball():

    BallTracker = MaskBallTracker()
    rospy.init_node("ball_tracker", anonymous=True)

    ### Time Synchronized Messages ###
    rgb_image_sub = message_filters.Subscriber("/kinect2/qhd/image_color_rect", Image)
    point_cloud_sub = message_filters.Subscriber("/kinect2/qhd/points", PointCloud2)
    synch_sub = message_filters.TimeSynchronizer([rgb_image_sub, point_cloud_sub], 8)
    synch_sub.registerCallback(BallTracker.track_ball_cam_coord_point_cloud)

    rospy.spin()

if __name__ == "__main__":
    track_ball()
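The README's Run section also launches a ball_kalman_filter.py node that is not present in this tree. A minimal sketch of what such a node could look like — a TF listener consuming the "ball" frame broadcast above and smoothing it with a constant-velocity Kalman filter — is below. The 6-element state layout, the noise matrices, and the 30 Hz rate are assumptions for illustration, not the original implementation:

    #!/usr/bin/env python
    import rospy
    import tf
    import numpy as np

    rospy.init_node("ball_kalman_filter_sketch")
    listener = tf.TransformListener()
    rate = rospy.Rate(30.0)

    dt = 1.0 / 30.0
    # State: [x, y, z, vx, vy, vz]; a constant-velocity model (gravity could
    # be added as a known acceleration on the vertical axis).
    F = np.eye(6)
    F[0:3, 3:6] = dt * np.eye(3)
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is measured
    Q = 1e-3 * np.eye(6)   # process noise (assumed)
    R = 5e-3 * np.eye(3)   # measurement noise (assumed)
    x = np.zeros((6, 1))
    P = np.eye(6)

    while not rospy.is_shutdown():
        try:
            trans, _ = listener.lookupTransform(
                "kinect2_rgb_optical_frame", "ball", rospy.Time(0))
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            rate.sleep()
            continue
        # Predict step
        x = F.dot(x)
        P = F.dot(P).dot(F.T) + Q
        # Update step with the measured ball position
        z = np.array(trans).reshape(3, 1)
        y = z - H.dot(x)
        S = H.dot(P).dot(H.T) + R
        K = P.dot(H.T).dot(np.linalg.inv(S))
        x = x + K.dot(y)
        P = (np.eye(6) - K.dot(H)).dot(P)
        rospy.loginfo("filtered ball position: %s", x[0:3].ravel())
        rate.sleep()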
--------------------------------------------------------------------------------
/ball_tracker/src/ball_trajectory_regress.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
#The line above tells Linux that this file is a Python script,
#and that the OS should use the Python interpreter in /usr/bin/env
#to run it. Don't forget to use "chmod +x [filename]" to make
#this script executable.

#Import the dependencies as described in example_pub.py
# import rospy
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
# from std_msgs.msg import Header
# from geometry_msgs.msg import Pose, Point, Quaternion, Accel, Vector3, AccelStamped
# from geometry_msgs.msg import PointStamped, PoseStamped
# from geometry_msgs.msg import TransformStamped
# import tf


class BallTrajectoryPlanner(object):

    def __init__(self):
        self.initialize_path_tracking = False
        self.seen_point_queue = None

    def reset_point_queue(self):
        self.seen_point_queue = None

    def add_seen_point(self, xyz):
        if self.seen_point_queue is None:
            self.seen_point_queue = np.array(xyz)
        else:
            self.seen_point_queue = np.append(self.seen_point_queue, xyz, axis=0)

    def lin_reg_xz(self):
        t = np.linspace(0, len(self.seen_point_queue), len(self.seen_point_queue))  # timesteps
        A = np.vstack([t, np.ones(len(t))]).T
        seen_xs = self.seen_point_queue[:,0]
        xm, xc = np.linalg.lstsq(A, seen_xs)[0]
        # print(xm, xc)

        seen_zs = self.seen_point_queue[:,2]
        zm, zc = np.linalg.lstsq(A, seen_zs)[0]
        # print(zm, zc)

        return xm, xc, zm, zc

    def lin_reg_y(self):
        t = np.linspace(0.0, len(self.seen_point_queue), len(self.seen_point_queue))  # timesteps
        A = np.vstack([np.power(t, 2), t, np.ones(len(t))]).T
        seen_ys = self.seen_point_queue[:,1]
        a, m, c = np.linalg.lstsq(A, seen_ys)[0]
        # print(a, m, c)
        return a, m, c


    def solve_sphere_intersection(self, center, radius, init_t):
        x2, x1, z2, z1 = self.lin_reg_xz()
        y3, y2, y1 = self.lin_reg_y()
        c1, c2, c3 = center
        r = radius

        func = lambda t: (x1-c1 + x2*t)**2 + (y1-c2 + y2*t + y3*(t**2))**2 + (z1-c3 + z2*t)**2 - r**2  # residual that equals zero at the intersection time
        t_sol = fsolve(func, init_t)
        # print(t_sol)

        x_sol = x2*t_sol + x1
        y_sol = (t_sol**2)*y3 + y2*t_sol + y1
        z_sol = z2*t_sol + z1
        return x_sol, y_sol, z_sol

t = np.linspace(0, 50, 50)  # timesteps
g = -9.81  # m/s^2 acceleration due to gravity (only used in the intersection time solver)
# Parameterized by t (time in seconds)
xs = 0.3 * t  #+ 4.0
ys = g*pow(t, 2) + 4.0*t + 5.0
zs = 0.2 * t  #+ 2.0


center = [0, 0, 0]
radius = 48

# Create a sphere
r = radius
pi = np.pi
cos = np.cos
sin = np.sin
phi, theta = np.mgrid[0.0:pi:100j, 0.0:2.0*pi:100j]
x = r*sin(phi)*cos(theta) + center[0]
y = r*sin(phi)*sin(theta) + center[1]
z = r*cos(phi) + center[2]


btp = BallTrajectoryPlanner()
btp.add_seen_point([[xs[0], ys[0], zs[0]]])
pred_xs, pred_ys, pred_zs = [0], [0], [0]

for i in range(1, len(t)):
    btp.add_seen_point([[xs[i], ys[i], zs[i]]])
    # if i < 10:
    #     xm, xc, zm, zc = btp.lin_reg_xz()
    #     ya, ym, yc = btp.lin_reg_y()

    x_sol, y_sol, z_sol = btp.solve_sphere_intersection(center, radius, 45)
    pred_xs.extend(x_sol)
    pred_ys.extend([y_sol[0]])
    pred_zs.extend(z_sol)


from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(x, y, z, rstride=1, cstride=1, color='c', alpha=0.6, linewidth=0)
ax.plot(xs, zs, ys, label='parametric curve')
ax.legend()
# plt.show()
# from mpl_toolkits.mplot3d import Axes3D

# fig = plt.figure()
# ax1 = fig.add_subplot(111, projection='3d')
ax.plot(pred_xs, pred_zs, zs=pred_ys, color='r', label='parametric curve')
ax.legend()
plt.show()


# def listener():
#     from time import sleep
#     sleep(1)

#     rospy.init_node('ball_trajectory_planner', anonymous=True)

#     tf_listener = tf.TransformListener()
#     rate = rospy.Rate(30.0)
#     sleep(3)
#     frame_id = "kinect2_ir_optical_frame"
#     # br = tf.TransformBroadcaster()
#     # br.sendTransform((0, 0, 0), tf.transformations.quaternion_from_euler(0, 0, 0),
#     #                  rospy.Time(), frame_id, 'kinect_link')

#     base_trans, base_rot = tf_listener.lookupTransform('kinect_link','base_link', rospy.Time())
#     pub = rospy.Publisher('final_pose', PoseStamped, queue_size=10)
#     btp = BallTrajectoryPlanner()

#     while not rospy.is_shutdown():
#         try:
#             tf_time = tf_listener.getLatestCommonTime(frame_id, "ball")
#             ball_trans, ball_rot = tf_listener.lookupTransform(frame_id, "ball", rospy.Time())
#         except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
#             continue
#         btp.predict_location(ball_trans, tf_time, frame_id)
#         rate.sleep()  # Check if this is necessary

#     #Wait for messages to arrive on the subscribed topics, and exit the node
#     #when it is killed with Ctrl+C
#     # rospy.spin()


# #Python's syntax for a main() method
# if __name__ == '__main__':
#     listener()
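solve_sphere_intersection above leans on fsolve, which returns only the root nearest init_t; the expanded residual is really a quartic in t with up to four real crossings of the reach sphere. A standalone sketch of the same intersection solved with numpy's polynomial roots, which yields every candidate at once so the earliest future crossing can be picked explicitly (the coefficients here are synthetic, chosen to match the demo script above, not regression output):

    import numpy as np

    # Same trajectory model as BallTrajectoryPlanner:
    #   x(t) = x2*t + x1,  y(t) = y3*t**2 + y2*t + y1,  z(t) = z2*t + z1
    x2, x1 = 0.3, 0.0
    y3, y2, y1 = -9.81, 4.0, 5.0
    z2, z1 = 0.2, 0.0
    c1, c2, c3 = 0.0, 0.0, 0.0
    r = 48.0

    # Expand |p(t) - c|**2 - r**2 into one quartic in t
    # (coefficient arrays are highest-power-first, as numpy's poly helpers expect).
    px = np.array([x2, x1 - c1])
    py = np.array([y3, y2, y1 - c2])
    pz = np.array([z2, z1 - c3])
    quartic = np.polymul(py, py)        # degree-4 part from the parabola
    quartic[-3:] += np.polymul(px, px)  # degree-2 parts from the linear axes
    quartic[-3:] += np.polymul(pz, pz)
    quartic[-1] -= r**2

    # np.roots returns all four candidates; keep the earliest real, future
    # (t > 0) crossing instead of depending on fsolve's initial guess.
    roots = np.roots(quartic)
    t_hit = min(t.real for t in roots if abs(t.imag) < 1e-9 and t.real > 0)
    print(t_hit,
          np.polyval([x2, x1], t_hit),
          np.polyval([y3, y2, y1], t_hit),
          np.polyval([z2, z1], t_hit))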
--------------------------------------------------------------------------------
/ball_tracker/src/range_detector.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# USAGE: You need to specify a filter and "only one" image source
#
# (python) range-detector --filter RGB --image /path/to/image.png
# or
# (python) range-detector --filter HSV --webcam

import cv2
import argparse
from operator import xor


def callback(value):
    pass


def setup_trackbars(range_filter):
    cv2.namedWindow("Trackbars", 0)

    for i in ["MIN", "MAX"]:
        v = 0 if i == "MIN" else 255

        for j in range_filter:
            cv2.createTrackbar("%s_%s" % (j, i), "Trackbars", v, 255, callback)


def get_arguments():
    ap = argparse.ArgumentParser()
    ap.add_argument('-f', '--filter', required=True,
                    help='Range filter. RGB or HSV')
    ap.add_argument('-i', '--image', required=False,
                    help='Path to the image')
    ap.add_argument('-w', '--webcam', required=False,
                    help='Use webcam', action='store_true')
    ap.add_argument('-p', '--preview', required=False,
                    help='Show a preview of the image after applying the mask',
                    action='store_true')
    args = vars(ap.parse_args())

    if not xor(bool(args['image']), bool(args['webcam'])):
        ap.error("Please specify only one image source")

    if not args['filter'].upper() in ['RGB', 'HSV']:
        ap.error("Please specify a correct filter.")

    return args


def get_trackbar_values(range_filter):
    values = []

    for i in ["MIN", "MAX"]:
        for j in range_filter:
            v = cv2.getTrackbarPos("%s_%s" % (j, i), "Trackbars")
            values.append(v)

    return values


def main():
    args = get_arguments()

    range_filter = args['filter'].upper()

    if args['image']:
        image = cv2.imread(args['image'])

        if range_filter == 'RGB':
            frame_to_thresh = image.copy()
        else:
            frame_to_thresh = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    else:
        camera = cv2.VideoCapture(0)

    setup_trackbars(range_filter)

    while True:
        if args['webcam']:
            ret, image = camera.read()

            if not ret:
                break

            if range_filter == 'RGB':
                frame_to_thresh = image.copy()
            else:
                frame_to_thresh = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

        v1_min, v2_min, v3_min, v1_max, v2_max, v3_max = get_trackbar_values(range_filter)

        thresh = cv2.inRange(frame_to_thresh, (v1_min, v2_min, v3_min), (v1_max, v2_max, v3_max))

        if args['preview']:
            preview = cv2.bitwise_and(image, image, mask=thresh)
            cv2.imshow("Preview", preview)
        else:
            cv2.imshow("Original", image)
            cv2.imshow("Thresh", thresh)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break


if __name__ == '__main__':
    main()
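The HSV constants hard-coded in ball_tracker.py and ball_tracking.py (GREEN_MIN/GREEN_MAX, YELLOW_MIN/YELLOW_MAX, and so on) correspond directly to the six trackbar values this tool displays: each (MIN, MAX) triple is what gets passed to cv2.inRange. Typical invocations (frame.png is a placeholder path, not a file in this repo):

* python range_detector.py --filter HSV --webcam --preview
* python range_detector.py --filter HSV --image frame.png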
--------------------------------------------------------------------------------
/ball_tracking/ball_tracking.py:
--------------------------------------------------------------------------------
# import the necessary packages
from collections import deque
import numpy as np
import argparse
import imutils
import cv2
import matplotlib.pyplot as plt
from colorsys import rgb_to_hsv


class MaskBallTracker(object):

    def __init__(self):
        self.path = []

        ### Initialization Parameters for color range for the ball
        # (HSV)
        greenLower = (29, 86, 6)
        greenUpper = (64, 255, 255)
        # (HSV)
        YELLOW_MIN = (8, 50, 50)
        YELLOW_MAX = (45, 255, 255)
        # Tennis Ball (HSV)
        TENNIS_NEON_MIN = (30, 255, 135)
        TENNIS_NEON_MAX = (40, 255, 185)


        # initialize the known distance from the camera to the object, which
        # in this case is 8 inches
        KNOWN_DISTANCE = 8.0
        # initialize the known object size; KNOWN_WIDTH is the ball radius
        # (in inches) used for calibration
        KNOWN_WIDTH = 3.0


        ap = argparse.ArgumentParser()
        ap.add_argument("-v", "--video",
            help="path to the (optional) video file")
        ap.add_argument("-b", "--buffer", type=int, default=128,
            help="max buffer size")
        args = vars(ap.parse_args())

        # define the lower and upper boundaries of the "green"
        # ball in the HSV color space, then initialize the
        # list of tracked points
        pts = deque(maxlen=args["buffer"])
        distances = deque(maxlen=args["buffer"])

        # if a video path was not supplied, grab the reference
        # to the webcam
        if not args.get("video", False):
            camera = cv2.VideoCapture(0)

        # otherwise, grab a reference to the video file
        else:
            camera = cv2.VideoCapture(args["video"])


        ### Calculate Depth using initial calibration for ball size and distance
        def distance_to_camera(knownWidth, focalLength, perWidth):
            # compute and return the distance from the marker to the camera
            return (knownWidth * focalLength) / perWidth
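        # Worked example of the similar-triangles model used above and in the
        # calibration step below (the pixel radii are illustrative, not from a
        # real calibration run): with KNOWN_DISTANCE = 8.0 and KNOWN_WIDTH = 3.0,
        # a calibration frame where the ball appears with a 60 px radius gives
        #     focalLength = (60 * 8.0) / 3.0 = 160.0
        # and a later frame where the ball shrinks to a 40 px radius gives
        #     Z = distance_to_camera(3.0, 160.0, 40) = (3.0 * 160.0) / 40 = 12.0
        # i.e. the ball is estimated to be 12 inches from the camera.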

        start_path_track = False
        focalLength = None  # Our initial calibration finds the focal length
        # keep looping
        while True:
            # grab the current frame
            (grabbed, frame) = camera.read()
            original_image = frame

            # if we are viewing a video and we did not grab a frame,
            # then we have reached the end of the video
            if args.get("video") and not grabbed:
                break

            # resize the frame, blur it, and convert it to the HSV
            # color space
            frame = imutils.resize(frame, width=600)
            # plt.imshow(frame)
            # plt.show()
            # blurred = cv2.GaussianBlur(frame, (11, 11), 0)
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

            # construct a mask for the color, then perform
            # a series of dilations and erosions to remove any small
            # blobs left in the mask
            mask = cv2.inRange(hsv, YELLOW_MIN, YELLOW_MAX)
            mask = cv2.erode(mask, None, iterations=2)
            mask = cv2.dilate(mask, None, iterations=2)
            # plt.imshow(mask)
            # plt.show()
            # find contours in the mask and initialize the current
            # (x, y) center of the ball
            cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                cv2.CHAIN_APPROX_SIMPLE)[-2]
            center = None
            dist = None

            # only proceed if at least one contour was found
            if len(cnts) > 0:
                # find the largest contour in the mask, then use
                # it to compute the minimum enclosing circle and
                # centroid
                c = max(cnts, key=cv2.contourArea)
                ((x, y), radius) = cv2.minEnclosingCircle(c)
                M = cv2.moments(c)
                center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

                # only proceed if the radius meets a minimum size
                if radius > 10:
                    # draw the circle and centroid on the frame,
                    # then update the list of tracked points
                    cv2.circle(frame, (int(x), int(y)), int(radius),
                        (0, 255, 255), 2)
                    # cv2.circle(frame, center, 5, (0, 0, 255), -1)

                    # Find the distance from the initial calibration
                    # (15 frames past in test example video)
                    # if init_ball_not_found == frame_init_wait:
                    if not focalLength:
                        init_focal_length = input("Initialize Focal Length Calibration? (y or n): ")
                        if init_focal_length == "y":
                            # KNOWN_DISTANCE = float(input("Please enter Ball's Distance from Camera: "))
                            # KNOWN_WIDTH = float(input("Please enter Ball radius: "))
                            focalLength = (radius * KNOWN_DISTANCE) / KNOWN_WIDTH
                            # print(focalLength)
                            # plt.imshow(frame)
                            # plt.show()
                    if not start_path_track:
                        init_path_track = input("Initialize Path Tracking? (y or n)")
                        if init_path_track == "y":
                            start_path_track = True

                    if focalLength and start_path_track:
                        Z = distance_to_camera(KNOWN_WIDTH, focalLength, radius)
                        distances.appendleft(int(Z))

                        # http://www.cse.psu.edu/~rtc12/CSE486/lecture12.pdf
                        cam_image_x = int(frame.shape[1] - center[0])  # width - x
                        cam_image_y = int(frame.shape[0] - center[1])  # height - y

                        X = (cam_image_x * KNOWN_DISTANCE) / float(focalLength)
                        Y = (cam_image_y * KNOWN_DISTANCE) / float(focalLength)

                        cv2.putText(frame, "({}, {}, {})".format(int(X), int(Y), int(Z)), (int(x), int(y)),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 1)

                        # update the points queue
                        pts.appendleft(center)
                        # update the trajectory array to be used for planning estimation testing
                        # Cast the height for the y coordinate to be in the positive x-y frame
                        if X and Y and Z:
                            cam_frame_coord = (X, Y, Z)
                            self.path.append(cam_frame_coord)
                        # print(center)  # For debugging

            # loop over the set of tracked points
            for i in range(1, len(pts)):
                # if either of the tracked points are None, ignore
                # them
                if pts[i - 1] is None or pts[i] is None:
                    continue
                # otherwise, compute the thickness of the line and
                # draw the connecting lines
                # thickness = int(np.sqrt(args["buffer"] / float(i + 1)) * 2.5)
                # cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)

            # show the frame to our screen
            cv2.imshow("Frame", frame)
            key = cv2.waitKey(1) & 0xFF

            # if the 'q' key is pressed, stop the loop
            if key == ord("q"):
                break

        # cleanup the camera and close any open windows
        camera.release()
        cv2.destroyAllWindows()

    def get_path(self):
        return self.path

tracker = MaskBallTracker()
--------------------------------------------------------------------------------
/ball_tracking/init_test_yellow_ball.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/ball_tracking/init_test_yellow_ball.mp4
--------------------------------------------------------------------------------
/ball_tracking/init_test_yellow_ball_output.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/ball_tracking/init_test_yellow_ball_output.mp4
--------------------------------------------------------------------------------
/ball_tracking/trajectory_planning.py:
--------------------------------------------------------------------------------
from ball_tracking import MaskBallTracker
from math import sqrt

tracker = MaskBallTracker()

path = tracker.get_path()  # Would normally be the path so far when we subscribe to the ROS topic
# Append to the trajectory array and predict again for updated time [t - 1] for the final position

# https://people.physics.tamu.edu/mahapatra/teaching/ch3.pdf


"""
Our ROS node will create a Parabolic Trajectory Planner object.
"""


class ParabolicTrajectoryPlanner(object):

    def __init__(self):
        self.trajectory = []  # List of (x, y, z) tuples for the current locations

    def add_to_trajectory(self, current_location):
        self.trajectory.append(current_location)

    def predict(self, timesteps, trajectory):  # predict the location of the object after t timesteps, based on the trajectory so far

        if len(trajectory) > 1:
            for i in range(1, len(trajectory)):

                xp, yp, zp = trajectory[i - 1]
                x, y, z = trajectory[i]

                delta_z = zp - z  # Distance changed in the time step for the frame
                delta_x = xp - x

                delta_h = sqrt(pow(delta_x, 2) + pow(delta_z, 2))  # Projected horizontal movement to be used for the final trajectory plan
                # This should be a constant change, so our horizontal_final should be timesteps * delta_h
                # (* some degradation factor to take into account the full path including earlier timesteps)


        """
        Then we find the final horizontal displacement after timesteps, break this displacement down into
        x and z components, and adjust x and z in our coordinate frame accordingly.

        We can find y with a couple of methods:
        1. Plugging the y coords with the equally displaced horizontals into numpy as a degree-2 parabola and taking the y value on the parabola at h = timesteps
        2. Using physics equations and gravity's acceleration to find the y position after timesteps
        3. Measuring the angle from the most recent point(s) and using that to get a current position vector... then I'm not sure what comes next.

        For all of these methods, we must determine how to use all the x, y, z values and not just the most recent one.
        """
        # return predicted_location

# ParabolicTrajectoryPlanner(path)
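predict() above stops short of an implementation. Method 1 from its docstring — fit a degree-2 parabola of height against accumulated horizontal displacement, then evaluate it at the projected displacement — can be sketched with numpy as follows (the sample path here is made up, standing in for tracker.get_path() output):

    import numpy as np

    # Made-up (x, y, z) path samples.
    trajectory = [(0.0, 5.0, 0.0), (0.3, 5.8, 0.2), (0.6, 6.2, 0.4),
                  (0.9, 6.2, 0.6), (1.2, 5.8, 0.8)]

    xs, ys, zs = (np.array(c) for c in zip(*trajectory))
    # Accumulated horizontal displacement h_i between consecutive frames.
    dh = np.hypot(np.diff(xs), np.diff(zs))
    h = np.concatenate([[0.0], np.cumsum(dh)])

    # Degree-2 parabola of y against h, as the docstring describes.
    a, b, c = np.polyfit(h, ys, 2)

    # Assume roughly constant horizontal speed per frame, project 10 frames
    # ahead, and read the predicted height off the parabola.
    h_final = h[-1] + 10 * dh.mean()
    y_pred = np.polyval([a, b, c], h_final)
    # Split h_final back into x and z along the mean horizontal direction.
    ux, uz = (xs[-1] - xs[0]) / h[-1], (zs[-1] - zs[0]) / h[-1]
    x_pred, z_pred = xs[0] + ux * h_final, zs[0] + uz * h_final
    print(x_pred, y_pred, z_pred)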
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
### Introduction
(a) Describe the end goal of your project.
(b) Why is this an interesting project? What interesting problems do you need to solve to make your solution work?
(c) In what real-world robotics applications could the work from your project be useful?
### Design
(a) What design criteria must your project meet? What is the desired functionality?
(b) Describe the design you chose.
(c) What design choices did you make when you formulated your design? What trade-offs did you have to make?
(d) How do these design choices impact how well the project meets design criteria that would be encountered in a real engineering application, such as robustness, durability, and efficiency?
### Implementation
(a) Describe any hardware you used or built. Illustrate with pictures and diagrams.
(b) What parts did you use to build your solution?
(c) Describe any software you wrote in detail. Illustrate with diagrams, flow charts, and/or other appropriate visuals. This includes launch files, URDFs, etc.
(d) How does your complete system work? Describe each step.
### Results
(a) How well did your project work? What tasks did it perform? (b) Illustrate with pictures and at least one video.
Take lots of pictures and videos at every stage of your project, and keep other records of your progress.
### Conclusion
(a) Discuss your results. How well did your finished solution meet your design criteria?
(b) Did you encounter any particular difficulties?
(c) Does your solution have any flaws or hacks? What improvements would you make if you had additional time?
### Team
(a) Include names and short bios of each member of your project group.
(b) Describe the major contributions of each team member.
### Additional materials
(a) code, URDFs, and launch files you wrote (b) CAD models for any hardware you designed
(c) data sheets for components used in your system
(d) any additional videos, images, or data from your finished solution
(e) links to other public sites (e.g., GitHub), if that is where your files are stored
--------------------------------------------------------------------------------
/photos/106a_rqt_tf_tree.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/106a_rqt_tf_tree.png
--------------------------------------------------------------------------------
/photos/20171206_202720.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/20171206_202720.jpg
--------------------------------------------------------------------------------
/photos/20171206_202836.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/20171206_202836.jpg
--------------------------------------------------------------------------------
/photos/20171206_202846.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/20171206_202846.jpg
--------------------------------------------------------------------------------
/photos/Ball_Arc.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/Ball_Arc.jpg
--------------------------------------------------------------------------------
/photos/Team_Photo.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/Team_Photo.JPG
--------------------------------------------------------------------------------
/photos/coding.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/coding.jpg
--------------------------------------------------------------------------------
/photos/control_diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/control_diagram.png
--------------------------------------------------------------------------------
/photos/cv_graphic.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/cv_graphic.png
--------------------------------------------------------------------------------
/photos/design.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/design.jpg
--------------------------------------------------------------------------------
/photos/detect.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/detect.jpg
--------------------------------------------------------------------------------
/photos/end_effector.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/end_effector.jpg
--------------------------------------------------------------------------------
/photos/kinect.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/kinect.jpg
--------------------------------------------------------------------------------
/photos/kinect2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/kinect2.jpg
--------------------------------------------------------------------------------
/photos/link.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/link.jpg
--------------------------------------------------------------------------------
/photos/robot_trajectory.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/robot_trajectory.gif
--------------------------------------------------------------------------------
/photos/tracker.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/tracker.jpg
--------------------------------------------------------------------------------
/photos/velcro.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/velcro.jpg
--------------------------------------------------------------------------------
/photos/velcro_ball.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kagrawal2/Robot-Arm-Vision-Tracking/8e5521494335a165751a20ff6995b07095fbdcf8/photos/velcro_ball.jpg
--------------------------------------------------------------------------------