├── .gitignore
├── README.md
├── getInvKin.py
├── images
│   ├── 2_R.png
│   ├── PosW.png
│   ├── eq_w.png
│   ├── firstJoint.png
│   ├── matrixT.png
│   ├── parts.png
│   ├── phi.png
│   ├── theta_2.png
│   ├── theta_3.png
│   ├── tray.png
│   └── wsplot.png
├── invKin.py
├── invKinPhantom.m
├── pickplace.py
├── triangle_2R.png
├── workspace.asv
└── workspace.m

/.gitignore:
--------------------------------------------------------------------------------
__pycache__/getInvKin.cpython-38.pyc

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Lab 3: Inverse Kinematics with Phantom X

This repository contains MATLAB and Python scripts that develop an inverse kinematics model for the Phantom X robot. This requires defining the robot's workspace and implementing a function that calculates the inverse kinematics for a specific position and orientation of the end effector. To verify that the inverse kinematics works properly, a pick-and-place application and a motion application in the robot's workspace were carried out.

## Authors

* Maria Alejandra Arias Frontanilla
* Camilo Andres Vera Ruiz

## Inverse Kinematics of Phantom X

In previous labs we have already seen how a certain configuration _q_, which gives us the corresponding values of the joints, produces a certain position and orientation of the end effector. That is what we call forward kinematics. That was pretty good and interesting, but now it gets even better and more sophisticated, because we are going to do the inverse process. Instead of choosing the values of the joints, we will now choose a certain position and orientation for the end effector, and then we will figure out which joint values, i.e. which configuration, we must have to achieve it.

As you can imagine, this process is a little more complicated than forward kinematics, but don't worry, we will understand it step by step.

### First Joint (The easiest one)

Ok! First things first. Let's begin with the first joint. In order to calculate the value of each one of the joints, it's important to know what information we already have. In our case, we know the position and orientation of the end effector, which can be given in an MTH (homogeneous transformation matrix) of the tool like the following one:

![Matrix tool](images/matrixT.png)

Having the position in terms of _xc, yc_ and _zc_, it's easy to find the value of the first joint.

![im1](images/firstJoint.png)

As we can see in the image above, the value of θ1 for the first joint can be calculated as follows:

θ1 = atan2(yc, xc)

Great! We already have the first joint calculated. Let's go for the other 3.
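To make this step concrete, here is a minimal Python sketch (the pose `T` and its numbers are invented for illustration; only the last column of the MTH is used here):

```python
import numpy as np

# Hypothetical tool pose: the position (xc, yc, zc) lives in the last column of T
T = np.array([[1.0, 0.0, 0.0,  8.0],
              [0.0, 1.0, 0.0,  8.0],
              [0.0, 0.0, 1.0, 20.0],
              [0.0, 0.0, 0.0,  1.0]])

xc, yc = T[0, 3], T[1, 3]
theta_1 = np.arctan2(yc, xc)   # first joint angle in radians
print(np.degrees(theta_1))     # 45.0 for this example pose
```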
### Kinematic Decoupling

To calculate the second and third joints, we will use something called kinematic decoupling at the wrist of our Phantom X. Wait a moment... Does the Phantom have a wrist? Yes! It does. And it has an elbow too.

The wrist is located at the fourth joint, and we will have to specify which position corresponds to it.

With the MTH of the tool we have access to the values of the vector _|a|_, which in the previous image corresponds to the z-axis of the tool tip. Our wrist is located close to the end point (_xc, yc, zc_), but with a negative displacement of value _L4_ (the length of the fourth link) in the same direction as the vector _|a|_. And that's exactly the way we will calculate it.

![im2](images/PosW.png)

The position of W, Phantom X's wrist, is then calculated as follows:

![im3](images/eq_w.png)

At this point you may be wondering why we calculated _W_. Well, like everything in life, this also has a purpose: it allows us to find the values of the second and third joints by seeing the arm as a 2R mechanism.

### Finding the Second and Third Joints by seeing it as a 2R

Don't you see it yet? No problem. The image below will help you identify with ease what we mean by a 2R mechanism.

![im4](images/2_R.png)

If we take just the second and third joints, with the wrist acting as the end effector, we now have a much simpler problem, which can be solved using trigonometry.

With the triangle that is formed, using the cosine law and getting a little help from our dear old friend Pythagoras, we get the following values for θ2 and θ3, corresponding to the second and third joints, respectively.

![im6](triangle_2R.png)

The values for θ2 and θ3 are shown below.

![im7](images/theta_2.png)

![im8](images/theta_3.png)

### Fourth Joint (we are almost done)

Now we are only missing the fourth joint, the last one. In this case we will use our vector _|a|_ again. We will call φ the angle between _|a|_ and the plane formed by the _x_ and _y_ axes. If we look closely, we can see that φ is equal to the sum of the angles θ2, θ3 and θ4. In this way we can find θ4 as follows:

θ4 = φ - θ2 - θ3

Now, we just have to figure out how to measure that φ.

Since we have all the components of our vector _|a|_, we can calculate φ with atan2, using Pythagoras for the horizontal component of _|a|_.

And we are finally done :D

This whole process of inverse kinematics is implemented in MATLAB in the file _invKinPhantom.m_ and in Python in _getInvKin.py_.

__Note__: The values of θ2 and θ4 are not directly the values that correspond to _q2_ and _q4_, since the DH parameters include an offset that we have to take into account.
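To tie the derivation together, here is a compact Python sketch of the θ2, θ3 and θ4 computation described above (elbow-down branch only, geometric angles without the DH offsets mentioned in the note; the full two-branch implementation lives in _getInvKin.py_):

```python
import numpy as np

l = [14.5, 10.63, 10.65, 8.97]   # link lengths in cm, as used in the repo scripts

def theta_2_3_4(T, l):
    # Kinematic decoupling: step back L4 along the approach vector a (third column of T)
    w = T[0:3, 3] - l[3] * T[0:3, 2]
    r = np.hypot(w[0], w[1])     # horizontal distance from the joint-1 axis to the wrist
    h = w[2] - l[0]              # wrist height measured from joint 2
    # Cosine law on the 2R triangle (elbow-down branch)
    cos_t3 = (r**2 + h**2 - l[1]**2 - l[2]**2) / (2 * l[1] * l[2])
    t3 = np.arccos(np.clip(cos_t3, -1.0, 1.0))
    t2 = np.arctan2(h, r) - np.arctan2(l[2] * np.sin(t3), l[1] + l[2] * np.cos(t3))
    # phi is the angle of a with respect to the x-y plane; theta4 closes the chain
    phi = np.arctan2(T[2, 2], np.hypot(T[0, 2], T[1, 2]))
    t4 = phi - t2 - t3
    return t2, t3, t4
```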
## Workspace of Phantom X and MATLAB inverse kinematics

A robot's workspace is defined as the space reachable by the robot's tool. Calculating it analytically can be very difficult, so a numerical method is used below, built with MATLAB and the Peter Corke toolbox.

A Monte Carlo approach is used for the calculation: the main idea is to get the tool position for random joint values using forward kinematics with the Peter Corke toolbox. The code shown below creates the robot instance with the DH parameters and the tool MTH following the NOA convention.

```matlab
% Robot definition
l = [14.5, 10.63, 10.65, 8.97]; % Length of links
L(1) = Link('revolute','alpha',pi/2,'a',0, 'd',l(1),'offset',0, 'qlim',[-3*pi/4 3*pi/4]);
L(2) = Link('revolute','alpha',0, 'a',l(2),'d',0, 'offset',pi/2,'qlim',[-3*pi/4 3*pi/4]);
L(3) = Link('revolute','alpha',0, 'a',l(3),'d',0, 'offset',0, 'qlim',[-3*pi/4 3*pi/4]);
L(4) = Link('revolute','alpha',0, 'a',0, 'd',0, 'offset',0, 'qlim',[-3*pi/4 3*pi/4]);
PhantomX = SerialLink(L,'name','Px');

% Tool orientation following the NOA convention
PhantomX.tool = [0 0 1 l(4); -1 0 0 0; 0 -1 0 0; 0 0 0 1];
```

The next step is to generate 4 vectors with random position values for each joint (within the joint limits), calculate the final pose for each value set with forward kinematics, and save the x, y, z coordinates from the MTH into vectors.

Finally, MATLAB's boundary function is used to get the boundary points that contain all the rest, and to plot them in a 3D representation. The plot and the code used to produce it are shown below.

```matlab
%% 3d workspace
rng('default')
samples = 1000000;
q1 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q2 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q3 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q4 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));

x_pos = zeros(samples,1);
y_pos = zeros(samples,1);
z_pos = zeros(samples,1);
disp(x_pos(1))

for i = 1:1:samples
    MTH = PhantomX.fkine([q1(i) q2(i) q3(i) q4(i)]);
    x_pos(i) = MTH(1,4);
    y_pos(i) = MTH(2,4);
    z_pos(i) = MTH(3,4);
end

%% 3d boundary
bound=boundary(x_pos,y_pos,z_pos);
trisurf(bound,x_pos,y_pos,z_pos)
```

![im9](images/wsplot.png)

As expected, the plot shows a kind of sphere, with some deformation at the bottom because each joint range is limited to less than 360 degrees.
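The same Monte Carlo sampling can be sketched in Python with the toolbox used later in this lab; the model below mirrors the commented-out `DHRobot` definition in _pickplace.py_ (an illustrative sketch, not part of the lab deliverables):

```python
import numpy as np
from spatialmath import SE3
from roboticstoolbox import DHRobot, RevoluteDH

l = [14.5, 10.63, 10.65, 8.97]
phantomX = DHRobot(
    [RevoluteDH(alpha=np.pi/2, a=0,    d=l[0], offset=0),
     RevoluteDH(alpha=0,       a=l[1], d=0,    offset=np.pi/2),
     RevoluteDH(alpha=0,       a=l[2], d=0,    offset=0),
     RevoluteDH(alpha=0,       a=0,    d=0,    offset=0)],
    name="PhantomX",
    tool=SE3(np.array([[ 0,  0, 1, l[3]],
                       [-1,  0, 0, 0],
                       [ 0, -1, 0, 0],
                       [ 0,  0, 0, 1.0]])))

samples = 10000                                          # fewer samples than in MATLAB, for speed
q = np.random.uniform(-3*np.pi/4, 3*np.pi/4, (samples, 4))
points = np.array([phantomX.fkine(qi).t for qi in q])    # x, y, z of every sampled pose
```

From there, something like `scipy.spatial.ConvexHull` can approximate the role of MATLAB's `boundary` function for plotting.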
Another way to do inverse kinematics is with the Peter Corke toolbox; the [documentation](https://www.petercorke.com/RTB/r9/html/SerialLink.html) explains the methods listed below.

1. `ikine6s`: inverse kinematics for a 6-axis revolute robot with a spherical wrist
2. `ikine`: inverse kinematics using an iterative numerical method
3. `ikunc`: inverse kinematics using optimization
4. `ikcon`: inverse kinematics using optimization with joint limits
5. `ikine_sym`: analytic inverse kinematics obtained symbolically

Most of these methods rely on a numerical approach instead of an analytical one, which makes them applicable to different robots without having to analyze every robot by hand to do its inverse kinematics.

## Analysis questions

_Knowing that the Phantom X robot has 4 DOF, of which 3 correspond to position, the remaining DOF provides an independent measurement of one orientation angle (assume fixed-angle orientation). Which orientation angle is it?_

The Phantom X robot can only control the _pitch_ orientation angle, because its links are always aligned and share the same vertical plane in every configuration. This can be verified with the Python program in the section `Motion in robot's workspace`.

_How many possible solutions are there for the inverse kinematics of the Phantom X manipulator?_

The inverse kinematics analysis shown above identifies two possible solutions, corresponding to the elbow-up and elbow-down configurations.

_Look up what the dexterous workspace of a manipulator is._

The dexterous workspace is the subregion of the robot's workspace that can be reached with every orientation controllable by the robot: for a 6-DOF anthropomorphic robot that means all orientations, and for the Phantom X, the pitch orientation.

## Pick and Place Application

To put inverse kinematics into practice, a pick and place application was developed. The idea is to move the 3D-printed parts shown below. The two parts start at different positions; one is moved to a defined position, and then the other one is inserted into the first one.

![im20](images/parts.png)

First we prepare all the required libraries. In this case we use the Peter Corke toolbox for Python to generate the MTH at each position, plus ROS to communicate with the robot and NumPy to do all the math. A custom library developed by us, called _getInvKin_, is used to calculate the inverse kinematics.

```python
#!/usr/bin/env python3

import numpy as np
import time

# ROS libraries
import rospy
from std_msgs.msg import String
from dynamixel_workbench_msgs.srv import DynamixelCommand


# Peter Corke toolbox for pose generation and visualization
from roboticstoolbox import *
from spatialmath.pose3d import *

# Inverse kinematics library
from getInvKin import *
```

The two functions below, already used in the previous labs, are used here too. `deg2raw` converts an array of joint positions in degrees to 10-bit values from 0 to 1023, which are used to control the motors. `jointCommand` sends data to the robot through a ROS service.

```python
def deg2raw(input_list: list = [0,0,0,0], min_deg: int = -150, max_deg: int = 150)->list:
    """
    Convert degrees array to 10 bit motor control value.
    """

    out_list = [0,0,0,0]
    for i in range(len(input_list)):
        out_list[i] = int( ((input_list[i] - min_deg)*1023)/(max_deg-min_deg) )
    return out_list



def jointCommand(command, id_num, addr_name, value, time):

    """
    Make a request to the "dynamixel_command" ROS service to modify a
    parameter, such as position or torque, of the motor specified by the
    "id_num" parameter.
    """
    rospy.wait_for_service('dynamixel_workbench/dynamixel_command')
    try:
        dynamixel_command = rospy.ServiceProxy('/dynamixel_workbench/dynamixel_command', DynamixelCommand)
        result = dynamixel_command(command,id_num,addr_name,value)
        rospy.sleep(time)
        return result.comm_result
    except rospy.ServiceException as exc:
        print(str(exc))
```
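As a quick sanity check of the mapping (a hypothetical call; ±150° are the limits used as the function's defaults):

```python
>>> deg2raw([0, 90, -150, 150])
[511, 818, 0, 1023]
```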
Three new functions were created. The first one, `getQ`, takes an x, y, z position and a pitch orientation, generates an MTH with the Peter Corke toolbox, and finally uses the inverse kinematics library to get a vector of joint angles in the elbow-up configuration.

```python

def getQ(x:float=4, y:float=4, z:float=5, phi:float=-np.pi)->np.array:
    """
    Generates the MTH matrix to execute inverse kinematics and returns
    an array with the joint positions.
    """

    MTH = SE3(x,y,z)*SE3.Ry(phi)
    print(MTH)
    q = getInvKin(MTH.data[0], phantomX_l)
    return q[1]
```

`setFullPostion` sends a full array of joint positions to the robot, first converting them to 10-bit values and then using the `jointCommand` function to send all the data.

```python
def setFullPostion(q:np.array = np.array([0, 0, 0, 0]) )->None:
    """
    Set all the joints of the robot to the position specified by the input array
    """
    #phantomX.plot(q)
    q_raw=deg2raw(np.degrees(q))
    print(q_raw)
    jointCommand('', motors_ids[0], 'Goal_Position', q_raw[0], 0)
    jointCommand('', motors_ids[1], 'Goal_Position', q_raw[1], 0)
    jointCommand('', motors_ids[2], 'Goal_Position', q_raw[2], 0)
    jointCommand('', motors_ids[3], 'Goal_Position', q_raw[3], 0)
```

`setGripperPosition` sends a specific 10-bit value to the gripper so it can grab the parts.

```python
def setGripperPosition(servoValue:int=0)->None:
    """
    Set the gripper position to the specified 0 to 1023 value
    """

    jointCommand('', motors_ids[4], 'Goal_Position', servoValue, 0)
```
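Putting the three helpers together, a single hypothetical motion step looks like this (the default `phi = -np.pi` orients the tool downward, as in the application below):

```python
q = getQ(x=10, y=0, z=12)        # joint angles (radians) for a pose above the base plane
setFullPostion(q)                # command all four joints
setGripperPosition(512)          # e.g. open the gripper
```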
To do the movement, we first need to define a trajectory. The image below describes the main idea: the robot first moves to the base part's location and places that part at the position reached at the end of the fourth movement; then it picks the circular part from its position at the end of the sixth movement, and finally inserts it over the base part.

![im21](images/tray.png)

With the robotics knowledge we have at this point, the only way to do that is to define some intermediate points and split the trajectory into steps, taking into account that it is not possible to guarantee the position of the robot between the points, which makes adjusting the number of points critical.

Returning to the code, the main function defines some parameters to manually adjust the trajectory, meeting precision requirements by increasing the resolution of the movement, plus the proper gripper values to be able to grab the parts.

```python

def main():

    # Movement parameters
    home_x_position = 10
    home_y_position = 0

    home_to_base_x_postion = 10
    home_to_base_y_postion = -12

    base_x_position = 0
    base_y_position =-12

    home_to_load_x_postion = 10
    home_to_load_y_postion = 12

    load_x_position = 0
    load_y_position = 12

    free_movement_safe_height = 12
    base_gripper_height = 4
    load_gripper_height = 4.5
    load_over_base_height = 7

    # Gripper values
    base_gripper_closed_value = 348
    load_gripper_closed_value = 348
    gripper_open_value = 512


    # Number of steps of each movement phase
    home_to_base_x_steps = 10
    home_to_base_y_steps = 10
    home_to_load_x_steps = 10
    home_to_load_y_steps = 10

    pick_up_base_down_steps = 20
    pick_up_base_up_steps = 20
    place_base_down_steps = 20
    home_up_steps = 20
    pick_up_load_down_steps = 20
    pick_up_load_up_steps = 20
    place_load_down_steps = 20

```

Here comes the long, complex part. Several parameters are needed to properly adjust the movement, and since Python is an interpreted programming language that can be slow at runtime, every step of the movement is precalculated, to avoid doing calculations during the movement and making the robot slow (this is caution rather than a proven fact).

All the calculations for each step of the movement are shown in the code below; all of them are linear movements interpolated over the number of steps defined in the parameter list.
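Every phase in the listing repeats the same linear-interpolation pattern, so it helps to see it once in isolation; a small hypothetical helper equivalent to each loop would be:

```python
def interp_steps(start: float, end: float, steps: int) -> list:
    """Intermediate values from start to end in `steps` equal increments (start excluded)."""
    return [start + (end - start) * (i + 1) / steps for i in range(steps)]

# e.g. descending from the safe height (12 cm) to the base grip height (4 cm) in 20 steps
heights = interp_steps(12, 4, 20)   # [11.6, 11.2, ..., 4.0]
```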
```python
    # Trajectory generation:


    q_vector1 = np.zeros((1 + pick_up_base_down_steps+home_to_base_y_steps+home_to_base_x_steps,4))
    q_vector2 = np.zeros((pick_up_base_up_steps + place_base_down_steps+home_to_base_x_steps+home_to_base_y_steps,4))
    q_vector3 = np.zeros((home_up_steps + pick_up_load_down_steps+home_to_load_y_steps+home_to_load_x_steps,4))
    q_vector4 = np.zeros((pick_up_load_up_steps +home_to_load_x_steps+home_to_load_y_steps+ place_load_down_steps,4))

    # 1-homing
    q_vector1[0] = getQ(x = home_x_position,
                        y = home_y_position,
                        z = free_movement_safe_height
                        )

    # 1-home->base_position

    for i in range(home_to_base_y_steps):
        q_vector1[i+1] = getQ(x=home_x_position,
                y=home_y_position+
                (((home_to_base_y_postion-home_y_position)
                *(i+1))/home_to_base_y_steps),
                z = free_movement_safe_height
                )
    for i in range(home_to_base_x_steps):
        q_vector1[i+home_to_base_y_steps+1] = getQ(x=home_to_base_x_postion+
                (((base_x_position-home_to_base_x_postion)
                *(i+1))/home_to_base_x_steps),
                y = home_to_base_y_postion,
                z = free_movement_safe_height
                )
    # 2 pick-up-base_down

    for i in range(pick_up_base_down_steps):
        q_vector1[i+home_to_base_y_steps + home_to_base_x_steps+1] = getQ(x=base_x_position,
                y=base_y_position,
                z=free_movement_safe_height-
                (((free_movement_safe_height-base_gripper_height)
                *(i+1))/pick_up_base_down_steps)
                )
    # 3 pick-up-base_up

    for i in range(pick_up_base_up_steps):
        q_vector2[i] = getQ(x=base_x_position,
                y=base_y_position,
                z=base_gripper_height+
                (((free_movement_safe_height-base_gripper_height)
                *(i+1))/pick_up_base_up_steps)
                )

    # 4-base_position->home

    for i in range(home_to_base_x_steps):
        q_vector2[i+pick_up_base_up_steps] = getQ(x=base_x_position+
                (((home_to_base_x_postion-base_x_position)
                *(i+1))/home_to_base_x_steps),
                y = base_y_position,
                z = free_movement_safe_height
                )
    for i in range(home_to_base_y_steps):
        q_vector2[pick_up_base_up_steps+home_to_base_x_steps+i] = getQ(x=home_to_base_x_postion,
                y=home_to_base_y_postion+
                (((home_y_position-home_to_base_y_postion)
                *(i+1))/home_to_base_y_steps),
                z = free_movement_safe_height
                )
    # 5-place-base_down

    for i in range(place_base_down_steps):
        q_vector2[i+pick_up_base_up_steps+home_to_base_x_steps+home_to_base_y_steps] = getQ(x=home_x_position,
                y=home_y_position,
                z=free_movement_safe_height-
                (((free_movement_safe_height-base_gripper_height)
                *(i+1))/place_base_down_steps)
                )
    # 6-home-up

    for i in range(home_up_steps):
        q_vector3[i] = getQ(x=home_x_position,
                y=home_y_position,
                z=base_gripper_height+
                (((free_movement_safe_height-base_gripper_height)
                *(i+1))/home_up_steps)
                )

    # 7-home->load

    for i in range(home_to_load_y_steps):
        q_vector3[home_up_steps+i] = getQ(x=home_x_position,
                y=home_y_position+
                (((home_to_load_y_postion-home_y_position)
                *(i+1))/home_to_load_y_steps),
                z = free_movement_safe_height
                )
    for i in range(home_to_load_x_steps):
        q_vector3[i+home_to_load_y_steps+home_up_steps] = getQ(x=home_to_load_x_postion+
                (((load_x_position-home_to_load_x_postion)
                *(i+1))/home_to_load_x_steps),
                y = home_to_load_y_postion,
                z = free_movement_safe_height
                )

    # 8-pick-up-load_down
    for i in range(pick_up_load_down_steps):
        q_vector3[i+home_to_load_y_steps+home_to_load_x_steps+home_up_steps] = getQ(x=load_x_position,
                y=load_y_position,
                z=free_movement_safe_height-
                (((free_movement_safe_height-load_gripper_height)
                *(i+1))/pick_up_load_down_steps)
                )

    #9-pick-up-load_up
    for i in range(pick_up_load_up_steps):
        q_vector4[i] = getQ(x=load_x_position,
                y=load_y_position,
                z=load_gripper_height+
                (((free_movement_safe_height-load_gripper_height)
                *(i+1))/pick_up_load_up_steps)
                )

    #10 load->home

    for i in range(home_to_load_x_steps):
        q_vector4[i+pick_up_load_up_steps] = getQ(x=load_x_position+
                (((home_to_load_x_postion-load_x_position)
                *(i+1))/home_to_load_x_steps),
                y = load_y_position,
                z = free_movement_safe_height
                )
    for i in range(home_to_load_y_steps):
        q_vector4[pick_up_load_up_steps+home_to_load_x_steps+i] = getQ(x=home_to_load_x_postion,
                y=home_to_load_y_postion+
                (((home_y_position-home_to_load_y_postion)
                *(i+1))/home_to_load_y_steps),
                z = free_movement_safe_height
                )


    #11 place-load-down

    for i in range(place_load_down_steps):
        q_vector4[i+pick_up_load_up_steps+home_to_load_x_steps+home_to_load_y_steps] = getQ(x=home_x_position,
                y=home_y_position,
                z=free_movement_safe_height-
                (((free_movement_safe_height-load_over_base_height)
                *(i+1))/place_load_down_steps)
                )

```

Finally, all the calculated points are sent to the robot to execute the movement, adding a short pause between phases to make the robot more stable against the vibration generated by assembly
looseness.

```python

    # Motors configuration

    jointCommand('', motors_ids[0], 'Torque_Limit', 300, 0)
    jointCommand('', motors_ids[1], 'Torque_Limit', 500, 0)
    jointCommand('', motors_ids[2], 'Torque_Limit', 300, 0)
    jointCommand('', motors_ids[3], 'Torque_Limit', 300, 0)
    jointCommand('', motors_ids[4], 'Torque_Limit', 300, 0)

    jointCommand('', motors_ids[0], 'Torque_Enable', 1, 0)
    jointCommand('', motors_ids[1], 'Torque_Enable', 1, 0)
    jointCommand('', motors_ids[2], 'Torque_Enable', 1, 0)
    jointCommand('', motors_ids[3], 'Torque_Enable', 1, 0)
    jointCommand('', motors_ids[4], 'Torque_Enable', 1, 0)

    # Initial gripper open position
    setGripperPosition(gripper_open_value)

    # Movement
    time.sleep(1)
    for i in q_vector1:
        setFullPostion(i)
    setGripperPosition(base_gripper_closed_value)

    time.sleep(0.8) # Stabilization delay
    for i in q_vector2:
        setFullPostion(i)
    setGripperPosition(gripper_open_value)
    time.sleep(0.8)
    for i in q_vector3:
        setFullPostion(i)
    setGripperPosition(load_gripper_closed_value)
    time.sleep(0.8)
    for i in q_vector4:
        setFullPostion(i)
    setGripperPosition(gripper_open_value)
    time.sleep(0.5)
```

The result can be found [here](https://www.youtube.com/watch?v=NGFBQhBNVEc).

## Motion in robot's workspace

The last application we developed for this lab covers the whole robot workspace: using keyboard inputs, we control the position of the end effector.

To create the movements that allow us to control the PhantomX, the class _Movement_ was created in the file _invKin.py_.

```python
class Movement:
    def __init__(self, name , step):
        self.name = name
        self.step = step
```

In this class we define the name of the movement and the step, which refers to the change of value that each key press will produce.

For our application we have 4 different movements:

* TRAX = x-axis translation with a step of 1 cm.
* TRAY = y-axis translation with a step of 1 cm.
* TRAZ = z-axis translation with a step of 1 cm.
* ROT = Rotation with a step of 10 degrees.

All of these movements are created and added to a list called movements.

```python
TRAX = Movement("trax", step[0])
TRAY = Movement("tray", step[1])
TRAZ = Movement("traz", step[2])
ROT = Movement("rot", step[3])
movements = [TRAX, TRAY, TRAZ, ROT]
```

The creation of this list lets us iterate through the movements depending on the keyboard input. If we get a _w_, we move to the next movement; if we get an _s_, we move to the previous one. In both cases we print the name of the movement we are in. The exciting part comes when we get a _d_ or an _a_: in those cases, we move a positive or a negative step, respectively, in the movement we are in.
```python
while(True):
    print("T now:\n", T)
    print("Enter key: ")
    key = getKey()

    if(key == "w"):
        i = (i+1)%4
        print("Current movement is: ", movements[i].name, "\n")

    elif(key == "s"):
        i = (i-1)%4
        print("Current movement is: ", movements[i].name, "\n")

    elif(key == "d"):
        print("Movement: ", movements[i].name, " of: ", movements[i].step)
        T = moveRobot(T, l, movements[i], 1)
    elif(key == "a"):
        print("Movement: ", movements[i].name, " of: ", -1*movements[i].step)
        T = moveRobot(T, l, movements[i], -1)
```

As you can see from the loop above, all the magic occurs in the function _moveRobot_. This function receives as parameters the MTH of the end effector _T_, the lengths of the links saved in the list _l_, the corresponding movement, and a 1 if the step is positive or a -1 if it is negative.

### How do we move our Phantom X?

Well, the interesting part comes when we try to understand how the function _moveRobot_ works.

```python
def moveRobot(T, l, movement, direction):
    codo = 1 #0: down, 1: up
    previous_T = T.copy()
    T = changeMatrix(T, movement, direction)
    try:
        q = np.degrees(getInvKin(T, l))
        goal_position_raw = deg2raw(q[codo,:])
        moveJoints(goal_position_raw, q[codo, :])
    except Exception:
        print("Exception...\n")
        T = previous_T.copy()
    return T
```

The first thing we can see is the choice of the elbow: we can have elbow up or down for the 2R mechanism explained in the inverse kinematics section. After this, we copy the MTH of the end effector into the variable _previous_T_, just in case we reach a position that is outside the robot's workspace. Then we call the function _changeMatrix_ (we will see it later, don't worry), which updates the MTH. Having the MTH, we can now call the inverse kinematics function _getInvKin_, transform the answer from radians to degrees, and save it in the variable _q_. This variable contains the joint values for both configurations, elbow up and elbow down, which is why we have to choose which one we want to use.

With the value of _q_ we can now call our mapping function, which converts degrees to the raw data directly received by the joints of our Phantom X (-150 to 150 degrees becomes 0 to 1023). With this information we can call our function _moveJoints_ and see the magic happen. But before doing that, let's explain the functions we are missing.

### Change matrix function

With this function we modify the values of the MTH that we receive. If we have the rotation movement, we multiply the rotation part of the MTH by a rotation about the y-axis by an angle equal to the step, taking into account whether it's positive or negative. (Do you remember the 1 and -1? Yeah, it's exactly here where we use it.)

On the other hand, if we get a translation movement, we change the corresponding component of the position vector, depending on the axis we are interested in, and that's it :)

```python
def changeMatrix(T, movement, direction):
    if movement.name == "rot":
        print("Rotation")
        angle = math.radians(movement.step)*direction
        c = math.cos(angle)
        s = math.sin(angle)
        rot_y = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        T[0:3, 0:3] = np.dot(T[0:3, 0:3],rot_y)

    else:
        print("Translation")
        if(movement.name == "trax"): row = 0
        elif(movement.name == "tray"): row = 1
        elif(movement.name == "traz"): row = 2

        T[row, 3] = T[row, 3] + movement.step * direction
    return T
```
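For instance, one positive ROT step multiplies the current orientation by Ry(10°); numerically (values rounded):

```python
import math
import numpy as np

angle = math.radians(10)          # one ROT step, direction = +1
c, s = math.cos(angle), math.sin(angle)
rot_y = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
print(np.round(rot_y, 3))
# [[ 0.985  0.     0.174]
#  [ 0.     1.     0.   ]
#  [-0.174  0.     0.985]]
```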
### Moving the joints

Finally, with this function we call the dynamixel command service and pass the corresponding parameters for each joint, using the raw data that we calculated.

```python
def moveJoints(goal_position_raw, q):
    motors_ids = [1,2,3,4]
    for i in reversed(range(len(motors_ids))):
        jointCommand('', motors_ids[i], 'Goal_Position', goal_position_raw[i], 0.5)
        print("Moving ID:", motors_ids[i], " Angle: ", q[i], " Raw: ", goal_position_raw[i])
```

Now we have all the ingredients ready for this great recipe. Let's see how it tastes in the following link: [Video - Motion in robot's workspace](https://www.youtube.com/watch?v=eJQ3HbZY1dU).


## Conclusions

* In the pick and place application with the PhantomX robot, some orientation problems were identified that were initially attributed to the code. A more detailed analysis revealed physical problems with the robot instead, such as joint backlash due to looseness in the assembly and possibly wear in the servomotor gearboxes, which severely affected the robot's orientation under the weight of the links, especially when the center of gravity was far from the base. We highly recommend that the next students using these robots take these problems into account, since they can lead to wrong conclusions about the code.

* Learning to use tools may require a little time, but after a while you figure out how they work. We, as engineering students, shouldn't worry too much about this. What really matters, and what can make a big difference, is understanding the core concepts on which the whole system is based. In this case, a good understanding of inverse kinematics allowed us to write our own implementation, which was fundamental to the solution of both applications.

--------------------------------------------------------------------------------
/getInvKin.py:
--------------------------------------------------------------------------------
from numpy import array
import numpy as np
import math

def getInvKin(T, l):
    """
    Inverse kinematics for the Phantom X robot
    By: Maria Alejandra Arias Frontanilla
    Inputs:
        T: 4x4 matrix with the position and orientation of the tool
        l: Link lengths
    Output:
        q: 2x4 matrix with the values of the 4 joints, in radians.
           Row 1: elbow down
           Row 2: elbow up
           [q1d q2d q3d q4d]
           [q1u q2u q3u q4u]
    """
    try:
        q = np.double([[0, 0, 0, 0], [0, 0, 0, 0]])

        # First joint, in radians
        q[0,0] = math.atan2(T[1,3], T[0,3]) # Elbow down
        q[1,0] = q[0,0] # Elbow up


        # Wrist decoupling:
        # Position of the wrist W
        Pos_w = T[0:3, 3] - l[3]*T[0:3, 2]
        #print("Pos_w: ", Pos_w)
        # 2R mechanism solution for q2 and q3
        h = round(Pos_w[2] - l[0],3)
        r = round(math.sqrt(Pos_w[0]**2 + Pos_w[1]**2),3)
        #print("h: ", h)
        #print("r: ", r)
        # Elbow down:

        # Third joint, in radians

        q[0,2] = math.acos(round((r**2+h**2-l[1]**2-l[2]**2)/(2*l[1]*l[2]), 2))
        # Second joint in radians, without offset
        q[0,1] = math.atan2(h,r) - math.atan2(l[2]*math.sin(q[0,2]), l[1]+l[2]*math.cos(q[0,2]))
        # Taking into account the offset of the second joint
        q[0,1] = q[0,1] - math.pi/2

        # Elbow up:

        # Third joint, in radians: negative of the elbow-down q3
        q[1,2]= -q[0,2]
        # Second joint in radians, without offset
        q[1,1] = math.atan2(h,r) + math.atan2(l[2]*math.sin(q[0,2]), l[1]+l[2]*math.cos(q[0,2]))
        # Taking into account the offset of the second joint
        q[1,1] = q[1,1] - math.pi/2

        # Fourth joint, using the |a| vector of T
        phi = math.atan2(T[2,2], math.sqrt(T[0,2]**2 +T[1,2]**2)) - math.pi/2
        q[0,3] = phi - q[0,1] -q[0,2]
        q[1,3] = phi - q[1,1] -q[1,2]

        # print("Elbow down: ")
        # print("1: ", q[0,0])
        # print("2: ", q[0,1])
        # print("3: ", q[0,2])
        # print("4: ", q[0,3])

        # print("Elbow up: ")
        # print("1: ", q[1,0])
        # print("2: ", q[1,1])
        # print("3: ", q[1,2])
        # print("4: ", q[1,3])
        return q

    except ValueError:
        print("That position can't be reached with this configuration. Try another movement or configuration")
--------------------------------------------------------------------------------
/images/2_R.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/2_R.png

--------------------------------------------------------------------------------
/images/PosW.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/PosW.png

--------------------------------------------------------------------------------
/images/eq_w.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/eq_w.png

--------------------------------------------------------------------------------
/images/firstJoint.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/firstJoint.png

--------------------------------------------------------------------------------
/images/matrixT.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/matrixT.png

--------------------------------------------------------------------------------
/images/parts.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/parts.png

--------------------------------------------------------------------------------
/images/phi.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/phi.png

--------------------------------------------------------------------------------
/images/theta_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/theta_2.png

--------------------------------------------------------------------------------
/images/theta_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/theta_3.png

--------------------------------------------------------------------------------
/images/tray.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/tray.png

--------------------------------------------------------------------------------
/images/wsplot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/images/wsplot.png

--------------------------------------------------------------------------------
/invKin.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python3


"""
Allows using the dynamixel_command service
"""
from numpy import array
import rospy
import time
from std_msgs.msg import String
from dynamixel_workbench_msgs.srv import DynamixelCommand


# keyboard
import termios, sys, os

# matrix
import numpy as np

# inverse kinematics
from getInvKin import *

TERMIOS = termios

def getKey(): # Captures the key pressed on the keyboard
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    new = termios.tcgetattr(fd)
    new[3] = new[3] & ~TERMIOS.ICANON & ~TERMIOS.ECHO
    new[6][TERMIOS.VMIN] = 1
    new[6][TERMIOS.VTIME] = 0
    termios.tcsetattr(fd, TERMIOS.TCSANOW, new)
    c = None
    try:
        c = os.read(fd, 1)
    finally:
        termios.tcsetattr(fd, TERMIOS.TCSAFLUSH, old)
    c = str(c).replace('b', "").replace('\'', "")
    return c

def jointCommand(command, id_num, addr_name, value, time):
    #rospy.init_node('joint_node', anonymous=False)
    rospy.wait_for_service('dynamixel_workbench/dynamixel_command')
    try:
        dynamixel_command = rospy.ServiceProxy('/dynamixel_workbench/dynamixel_command', DynamixelCommand)
        result = dynamixel_command(command,id_num,addr_name,value)
        rospy.sleep(time)
        return result.comm_result
    except rospy.ServiceException as exc:
        print(str(exc))

def deg2raw(input_list: list = [0,0,0,0], min_deg: int = -150, max_deg: int = 150)->list:
    out_list = [0,0,0,0]
    for i in range(len(input_list)):
        out_list[i] = int( ((input_list[i] - min_deg)*1023)/(max_deg-min_deg) )
    return out_list


class Movement:
    def __init__(self, name , step):
        self.name = name
        self.step = step

def moveRobot(T, l, movement, direction):
    codo = 1 #0: down, 1: up
    previous_T = T.copy()
    T = changeMatrix(T, movement, direction)
    try:
        q = np.degrees(getInvKin(T, l))
        goal_position_raw = deg2raw(q[codo,:])
        moveJoints(goal_position_raw, q[codo, :])
    except Exception:
        print("Exception...\n")
        T = previous_T.copy()
    return T




def changeMatrix(T, movement, direction):
    if movement.name == "rot":
        print("Rotation")
        angle = math.radians(movement.step)*direction
        c = math.cos(angle)
        s = math.sin(angle)
        rot_y = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        T[0:3, 0:3] = np.dot(T[0:3, 0:3],rot_y)

    else:
        print("Translation")
        if(movement.name == "trax"): row = 0
        elif(movement.name == "tray"): row = 1
        elif(movement.name == "traz"): row = 2

        T[row, 3] = T[row, 3] + movement.step * direction
    return T



def moveJoints(goal_position_raw, q):
    motors_ids = [1,2,3,4]
    for i in reversed(range(len(motors_ids))):
        jointCommand('', motors_ids[i], 'Goal_Position', goal_position_raw[i], 0.5)
        print("Moving ID:", motors_ids[i], " Angle: ", q[i], " Raw: ", goal_position_raw[i])

def adjustTorque():
    motors_ids = [1,2,3,4]
    torque = [500, 800, 500, 500]
    for i in range(len(motors_ids)):
        jointCommand('', motors_ids[i], 'Torque_Limit', torque[i], 0)

def enableTorque():
    motors_ids = [1,2,3,4]
    for i in range(len(motors_ids)):
        jointCommand('', motors_ids[i], 'Torque_Enable', 1, 0)
def main(step: list = [1, 1, 1, 10]):
    codo = 1
    l = [14.5, 10.7, 10.7, 9]

    T = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 44.9],
                  [0.0, 0.0, 0.0, 1.0]])

    TRAX = Movement("trax", step[0])
    TRAY = Movement("tray", step[1])
    TRAZ = Movement("traz", step[2])
    ROT = Movement("rot", step[3])
    movements = [TRAX, TRAY, TRAZ, ROT]

    print("--- Inverse Kinematics with Python and ROS ---")

    # Enable torque
    enableTorque()
    # Torque adjustment
    adjustTorque()

    # For the initial position
    print("Initial Position: ", T[:,3])
    q = np.degrees(getInvKin(T, l))
    goal_position_raw = deg2raw(q[codo,:])
    moveJoints(goal_position_raw, q[codo, :])

    i = 0

    while(True):
        print("T now:\n", T)
        print("Enter key: ")
        key = getKey()

        if(key == "w"):
            i = (i+1)%4
            print("Current movement is: ", movements[i].name, "\n")

        elif(key == "s"):
            i = (i-1)%4
            print("Current movement is: ", movements[i].name, "\n")

        elif(key == "d"):
            print("Movement: ", movements[i].name, " of: ", movements[i].step)
            T = moveRobot(T, l, movements[i], 1)
        elif(key == "a"):
            print("Movement: ", movements[i].name, " of: ", -1*movements[i].step)
            T = moveRobot(T, l, movements[i], -1)

if __name__ == '__main__':
    try:
        main()
    except rospy.ROSInterruptException:
        pass

--------------------------------------------------------------------------------
/invKinPhantom.m:
--------------------------------------------------------------------------------
%{
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Inverse kinematics for the Phantom X robot
By: Maria Alejandra Arias Frontanilla
Inputs:
    T: 4x4 matrix with the position and orientation of the tool
    l: Link lengths
Output:
    q: 2x4 matrix with the values of the 4 joints, in radians.
       Row 1: elbow down
       Row 2: elbow up
       [q1d q2d q3d q4d]
       [q1u q2u q3u q4u]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%}

function q = invKinPhantom(T, l)
% First joint, in radians
q(1,1) = atan2(T(2,4), T(1,4));
q(2,1) = q(1,1);

% Wrist decoupling:
% Position of the wrist W
Pos_w = T(1:3, 4) - l(4)*T(1:3, 3)

% 2R mechanism solution for q2 and q3
h = Pos_w(3) - l(1)
r = sqrt(Pos_w(1)^2 + Pos_w(2)^2)

D = (r^2+h^2-l(2)^2-l(3)^2)/(2*l(2)*l(3))

% Elbow down:

% Third joint, in radians
q(1,3) = acos((r^2+h^2-l(2)^2-l(3)^2)/(2*l(2)*l(3)));

% Second joint in radians, without offset
q(1,2) = atan2(h,r) - atan2(l(3)*sin(q(1,3)), l(2)+l(3)*cos(q(1,3)));
% Taking into account the offset of the second joint
q(1,2) = q(1,2) - pi/2;

% Elbow up:

% Third joint, in radians: negative of the elbow-down q3
q(2,3) = -q(1,3);
% Second joint in radians, without offset
q(2,2) = atan2(h,r) + atan2(l(3)*sin(q(1,3)), l(2)+l(3)*cos(q(1,3)));
% Taking into account the offset of the second joint
q(2,2) = q(2,2) - pi/2;


% Fourth joint, using the |a| vector of T
phi = atan2(T(3,3), sqrt(T(1,3)^2 +T(2,3)^2)) - pi/2;
q(1,4) = phi - q(1,2) -q(1,3);
q(2,4) = phi - q(2,2) -q(2,3);


if ~isreal(q(1,3))
    q(:,:) = NaN;
end

end
--------------------------------------------------------------------------------
/pickplace.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python3

import numpy as np
import time

# ROS libraries
import rospy
from std_msgs.msg import String
from dynamixel_workbench_msgs.srv import DynamixelCommand


# Peter Corke toolbox for pose generation and visualization
from roboticstoolbox import *
from spatialmath.pose3d import *

# inverse kinematics library
from getInvKin import *

motors_ids = [1,2,3,4,5]
phantomX_l = [14.5, 10.63, 10.65, 8.97] # Length of links


# phantomX = DHRobot(
#     [
#         RevoluteDH(alpha =np.pi/2, a=0, d=phantomX_l[0], offset=0, qlim=[-3*np.pi/4, 3*np.pi/4]),
#         RevoluteDH(alpha =0, a=phantomX_l[1], d=0, offset=np.pi/2, qlim=[-3*np.pi/4, 3*np.pi/4]),
#         RevoluteDH(alpha =0, a=phantomX_l[2], d=0, offset=0, qlim=[-3*np.pi/4, 3*np.pi/4]),
#         RevoluteDH(alpha =0, a=0, d=0, offset=0, qlim = [-3*np.pi/4, 3*np.pi/4]),
#     ],
#     name="PhantomX",
#     tool = SE3(np.array([[ 0, 0, 1, phantomX_l[3]],
#                          [-1, 0, 0, 0],
#                          [ 0, -1, 0, 0],
#                          [ 0, 0, 0, 1]]))
#     )

def deg2raw(input_list: list = [0,0,0,0], min_deg: int = -150, max_deg: int = 150)->list:
    """
    Convert degrees array to 10 bit motor control value.
    """

    out_list = [0,0,0,0]
    for i in range(len(input_list)):
        out_list[i] = int( ((input_list[i] - min_deg)*1023)/(max_deg-min_deg) )
    return out_list




def jointCommand(command, id_num, addr_name, value, time):

    """
    Make a request to the "dynamixel_command" ROS service to modify a
    parameter, such as position or torque, of the motor specified by the
    "id_num" parameter.
    """
    rospy.wait_for_service('dynamixel_workbench/dynamixel_command')
    try:
        dynamixel_command = rospy.ServiceProxy('/dynamixel_workbench/dynamixel_command', DynamixelCommand)
        result = dynamixel_command(command,id_num,addr_name,value)
        rospy.sleep(time)
        return result.comm_result
    except rospy.ServiceException as exc:
        print(str(exc))


def getQ(x:float=4, y:float=4, z:float=5, phi:float=-np.pi)->np.array:
    """
    Generates the MTH matrix to execute inverse kinematics and returns
    an array with the joint positions.
    """

    MTH = SE3(x,y,z)*SE3.Ry(phi)
    print(MTH)
    q = getInvKin(MTH.data[0], phantomX_l)
    return q[1]

def setFullPostion(q:np.array = np.array([0, 0, 0, 0]) )->None:
    """
    Set all the joints of the robot to the position specified by the input array
    """
    #phantomX.plot(q)
    q_raw=deg2raw(np.degrees(q))
    print(q_raw)
    jointCommand('', motors_ids[0], 'Goal_Position', q_raw[0], 0)
    jointCommand('', motors_ids[1], 'Goal_Position', q_raw[1], 0)
    jointCommand('', motors_ids[2], 'Goal_Position', q_raw[2], 0)
    jointCommand('', motors_ids[3], 'Goal_Position', q_raw[3], 0)

def setGripperPosition(servoValue:int=0)->None:
    """
    Set the gripper position to the specified 0 to 1023 value
    """

    jointCommand('', motors_ids[4], 'Goal_Position', servoValue, 0)

def main():

    # Movement parameters
    home_x_position = 10
    home_y_position = 0

    home_to_base_x_postion = 10
    home_to_base_y_postion = -12

    base_x_position = 0
    base_y_position =-12

    home_to_load_x_postion = 10
    home_to_load_y_postion = 12

    load_x_position = 0
    load_y_position = 12

    free_movement_safe_height = 12
    base_gripper_height = 4
    load_gripper_height = 4.5
    load_over_base_height = 7

    # Gripper values
    base_gripper_closed_value = 348
    load_gripper_closed_value = 348
    gripper_open_value = 512


    # Number of steps of each movement phase
    home_to_base_x_steps = 10
    home_to_base_y_steps = 10
    home_to_load_x_steps = 10
    home_to_load_y_steps = 10

    pick_up_base_down_steps = 20
    pick_up_base_up_steps = 20
    place_base_down_steps = 20
    home_up_steps = 20
    pick_up_load_down_steps = 20
    pick_up_load_up_steps = 20
    place_load_down_steps = 20

    # Trajectory generation:


    q_vector1 = np.zeros((1 + pick_up_base_down_steps+home_to_base_y_steps+home_to_base_x_steps,4))
    q_vector2 = np.zeros((pick_up_base_up_steps + place_base_down_steps+home_to_base_x_steps+home_to_base_y_steps,4))
    q_vector3 = np.zeros((home_up_steps + pick_up_load_down_steps+home_to_load_y_steps+home_to_load_x_steps,4))
    q_vector4 = np.zeros((pick_up_load_up_steps +home_to_load_x_steps+home_to_load_y_steps+ place_load_down_steps,4))

    # 1-homing
    q_vector1[0] = getQ(x = home_x_position,
                        y = home_y_position,
                        z = free_movement_safe_height
                        )

    # 1-home->base_position

    for i in range(home_to_base_y_steps):
        q_vector1[i+1] = getQ(x=home_x_position,
                y=home_y_position+
                (((home_to_base_y_postion-home_y_position)
                *(i+1))/home_to_base_y_steps),
                z = free_movement_safe_height
                )
    for i in range(home_to_base_x_steps):
        q_vector1[i+home_to_base_y_steps+1] = getQ(x=home_to_base_x_postion+
                (((base_x_position-home_to_base_x_postion)
                *(i+1))/home_to_base_x_steps),
                y = home_to_base_y_postion,
                z = free_movement_safe_height
                )
    # 2 pick-up-base_down

    for i in range(pick_up_base_down_steps):
        q_vector1[i+home_to_base_y_steps + home_to_base_x_steps+1] = getQ(x=base_x_position,
                y=base_y_position,
                z=free_movement_safe_height-
                (((free_movement_safe_height-base_gripper_height)
                *(i+1))/pick_up_base_down_steps)
                )
    # 3 pick-up-base_up

    for i in range(pick_up_base_up_steps):
        q_vector2[i] = getQ(x=base_x_position,
                y=base_y_position,
                z=base_gripper_height+
                (((free_movement_safe_height-base_gripper_height)
                *(i+1))/pick_up_base_up_steps)
                )

    # 4-base_position->home

    for i in range(home_to_base_x_steps):
        q_vector2[i+pick_up_base_up_steps] = getQ(x=base_x_position+
                (((home_to_base_x_postion-base_x_position)
                *(i+1))/home_to_base_x_steps),
                y = base_y_position,
                z = free_movement_safe_height
                )
    for i in range(home_to_base_y_steps):
        q_vector2[pick_up_base_up_steps+home_to_base_x_steps+i] = getQ(x=home_to_base_x_postion,
                y=home_to_base_y_postion+
                (((home_y_position-home_to_base_y_postion)
                *(i+1))/home_to_base_y_steps),
                z = free_movement_safe_height
                )
    # 5-place-base_down

    for i in range(place_base_down_steps):
        q_vector2[i+pick_up_base_up_steps+home_to_base_x_steps+home_to_base_y_steps] = getQ(x=home_x_position,
                y=home_y_position,
                z=free_movement_safe_height-
                (((free_movement_safe_height-base_gripper_height)
                *(i+1))/place_base_down_steps)
                )
    # 6-home-up

    for i in range(home_up_steps):
        q_vector3[i] = getQ(x=home_x_position,
                y=home_y_position,
                z=base_gripper_height+
                (((free_movement_safe_height-base_gripper_height)
                *(i+1))/home_up_steps)
                )

    # 7-home->load

    for i in range(home_to_load_y_steps):
        q_vector3[home_up_steps+i] = getQ(x=home_x_position,
                y=home_y_position+
                (((home_to_load_y_postion-home_y_position)
                *(i+1))/home_to_load_y_steps),
                z = free_movement_safe_height
                )
    for i in range(home_to_load_x_steps):
        q_vector3[i+home_to_load_y_steps+home_up_steps] = getQ(x=home_to_load_x_postion+
                (((load_x_position-home_to_load_x_postion)
                *(i+1))/home_to_load_x_steps),
                y = home_to_load_y_postion,
                z = free_movement_safe_height
                )

    # 8-pick-up-load_down
    for i in range(pick_up_load_down_steps):
        q_vector3[i+home_to_load_y_steps+home_to_load_x_steps+home_up_steps] = getQ(x=load_x_position,
                y=load_y_position,
                z=free_movement_safe_height-
                (((free_movement_safe_height-load_gripper_height)
                *(i+1))/pick_up_load_down_steps)
                )

    #9-pick-up-load_up
    for i in range(pick_up_load_up_steps):
        q_vector4[i] = getQ(x=load_x_position,
                y=load_y_position,
                z=load_gripper_height+
                (((free_movement_safe_height-load_gripper_height)
                *(i+1))/pick_up_load_up_steps)
                )

    #10 load->home

    for i in range(home_to_load_x_steps):
        q_vector4[i+pick_up_load_up_steps] = getQ(x=load_x_position+
                (((home_to_load_x_postion-load_x_position)
                *(i+1))/home_to_load_x_steps),
                y = load_y_position,
                z = free_movement_safe_height
                )
    for i in
range(home_to_load_y_steps):
        q_vector4[pick_up_load_up_steps+home_to_load_x_steps+i] = getQ(x=home_to_load_x_postion,
                y=home_to_load_y_postion+
                (((home_y_position-home_to_load_y_postion)
                *(i+1))/home_to_load_y_steps),
                z = free_movement_safe_height
                )


    #11 place-load-down

    for i in range(place_load_down_steps):
        q_vector4[i+pick_up_load_up_steps+home_to_load_x_steps+home_to_load_y_steps] = getQ(x=home_x_position,
                y=home_y_position,
                z=free_movement_safe_height-
                (((free_movement_safe_height-load_over_base_height)
                *(i+1))/place_load_down_steps)
                )


    q_vector = np.concatenate((q_vector1,q_vector2,q_vector3,q_vector4),axis=0)

    # Motors configuration

    jointCommand('', motors_ids[0], 'Torque_Limit', 300, 0)
    jointCommand('', motors_ids[1], 'Torque_Limit', 500, 0)
    jointCommand('', motors_ids[2], 'Torque_Limit', 300, 0)
    jointCommand('', motors_ids[3], 'Torque_Limit', 300, 0)
    jointCommand('', motors_ids[4], 'Torque_Limit', 300, 0)

    jointCommand('', motors_ids[0], 'Torque_Enable', 1, 0)
    jointCommand('', motors_ids[1], 'Torque_Enable', 1, 0)
    jointCommand('', motors_ids[2], 'Torque_Enable', 1, 0)
    jointCommand('', motors_ids[3], 'Torque_Enable', 1, 0)
    jointCommand('', motors_ids[4], 'Torque_Enable', 1, 0)

    # Initial gripper open position
    setGripperPosition(gripper_open_value)

    # Movement
    time.sleep(1)
    for i in q_vector1:
        setFullPostion(i)
    setGripperPosition(base_gripper_closed_value)

    time.sleep(0.8) # Stabilization delay
    for i in q_vector2:
        setFullPostion(i)
    setGripperPosition(gripper_open_value)
    time.sleep(0.8)
    for i in q_vector3:
        setFullPostion(i)
    setGripperPosition(load_gripper_closed_value)
    time.sleep(0.8)
    for i in q_vector4:
        setFullPostion(i)
    setGripperPosition(gripper_open_value)
    time.sleep(0.5)


def main1():
    setFullPostion(np.array([0,0,0,np.pi]))

    time.sleep(10)

def test():
    # Note: requires the phantomX DHRobot model that is commented out above
    phantomX.plot([0,0,0,0],jointaxes = True, backend="pyplot")
    pos = [13,13,4]
    for i in range(100):
        setFullPostion(np.array([pos[0], pos[1], pos[2],-np.pi]))
        time.sleep(0.1)
        pos[0]-=0.1
        pos[1]-=0.1
        #pos[2]+=0.1

    time.sleep(20)


if __name__ == '__main__':
    try:
        main()
    except rospy.ROSInterruptException:
        pass
--------------------------------------------------------------------------------
/triangle_2R.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fegonzalez7/ROS_Robotics_Lab3/758258f2f16f3f84d2215e1d6e7dcb7f93c8b3d7/triangle_2R.png

--------------------------------------------------------------------------------
/workspace.asv:
--------------------------------------------------------------------------------
%% Lab 2 - Matlab and ROS with PhantomX
clear
clc

l = 0.01.*[14.5, 10.63, 10.65, 8.97]; % Length of links
L(1) = Link('revolute','alpha',pi/2,'a',0, 'd',l(1),'offset',0, 'qlim',[-3*pi/4 3*pi/4]);
L(2) = Link('revolute','alpha',0, 'a',l(2),'d',0, 'offset',pi/2,'qlim',[-3*pi/4 3*pi/4]);
L(3) = Link('revolute','alpha',0, 'a',l(3),'d',0, 'offset',0, 'qlim',[-3*pi/4 3*pi/4]);
L(4) = Link('revolute','alpha',0, 'a',0, 'd',0, 'offset',0, 'qlim',[-3*pi/4 3*pi/4]);
PhantomX = SerialLink(L,'name','Px');

% Tool orientation following the NOA convention
PhantomX.tool = [0 0 1 l(4); -1 0 0 0; 0 -1 0 0; 0 0 0 1];

%% 2d workspace

rng('default')
samples = 100000;

q2 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q3 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q4 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));

x_pos = zeros(samples,1);
y_pos = zeros(samples,1);

disp(x_pos(1))

for i = 1:1:samples
    MTH = PhantomX.fkine([0 q2(i) q3(i) q4(i)]);
    x_pos(i) = MTH(1,4);
    y_pos(i) = MTH(2,4);
end

plot(x_pos,y_pos);

%% 2d boundary
bound=boundary(x_pos,y_pos);
plot(x_pos(bound),y_pos(bound))

%% 3d workspace
rng('default')
samples = 1000000;
q1 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q2 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q3 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q4 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));

x_pos = zeros(samples,1);
y_pos = zeros(samples,1);
z_pos = zeros(samples,1);
disp(x_pos(1))

for i = 1:1:samples
    MTH = PhantomX.fkine([q1(i) q2(i) q3(i) q4(i)]);
    x_pos(i) = MTH(1,4);
    y_pos(i) = MTH(2,4);
    z_pos(i) = MTH(3,4);

end


plot3(x_pos,y_pos,z_pos);

%% 3d boundary
bound=boundary(x_pos,y_pos,z_pos);
trisurf(bound,x_pos,y_pos,z_pos,'Facecolor','red','FaceAlpha',0.1)
--------------------------------------------------------------------------------
/workspace.m:
--------------------------------------------------------------------------------
%% Lab 2 - Matlab and ROS with PhantomX
clear
clc

l = [14.5, 10.63, 10.65, 8.97]; % Length of links
L(1) = Link('revolute','alpha',pi/2,'a',0, 'd',l(1),'offset',0, 'qlim',[(-3*pi/4) (3*pi/4)]);
L(2) = Link('revolute','alpha',0, 'a',l(2),'d',0, 'offset',pi/2,'qlim',[(-3*pi/4) (3*pi/4)]);
L(3) = Link('revolute','alpha',0, 'a',l(3),'d',0, 'offset',0, 'qlim',[(-3*pi/4) (3*pi/4)]);
L(4) = Link('revolute','alpha',0, 'a',0, 'd',0, 'offset',0, 'qlim',[(-3*pi/4) (3*pi/4)]);
PhantomX = SerialLink(L,'name','Px');


% Tool orientation following the NOA convention
PhantomX.tool = [0 0 1 l(4); -1 0 0 0; 0 -1 0 0; 0 0 0 1];

PhantomX.plot([0 pi/2 pi/2 pi/2])

%% 2d workspace

rng('default')
samples = 100000;

q2 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q3 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q4 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));

x_pos = zeros(samples,1);
y_pos = zeros(samples,1);

for i = 1:1:samples
    MTH = PhantomX.fkine([0 q2(i) q3(i) q4(i)]);
    x_pos(i) = MTH(1,4);
    y_pos(i) = MTH(2,4);
end

plot(x_pos,y_pos);

%% 2d boundary
bound=boundary(x_pos,y_pos);
plot(x_pos(bound),y_pos(bound))

%% 3d workspace
rng('default')
samples = 1000000;
q1 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q2 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q3 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));
q4 = ((3*pi/2)*rand(samples,1))-((3*pi/4)*(ones(samples,1)));

x_pos = zeros(samples,1);
y_pos = zeros(samples,1);
z_pos = zeros(samples,1);
disp(x_pos(1))

for i = 1:1:samples
    MTH = PhantomX.fkine([q1(i) q2(i) q3(i) q4(i)]);
    x_pos(i) = MTH(1,4);
    y_pos(i) = MTH(2,4);
    z_pos(i) = MTH(3,4);
end

plot3(x_pos,y_pos,z_pos);

%% 3d boundary
bound=boundary(x_pos,y_pos,z_pos);
trisurf(bound,x_pos,y_pos,z_pos)

%% inverse kinematics

clc
MTH1 = [ 0.4945 -0.8575 -0.1420  2.2536;
        -0.8242 -0.5145  0.2368 -3.7563;
        -0.2761 -0.0000 -0.9611 -5.4777;
         0       0       0       1.0000];


PhantomX.ikine(MTH1,'mask',[1 1 1 1 0 0])
--------------------------------------------------------------------------------