├── README.md
├── carla_vehicle_annotator.py
└── collect_dt_data.py

/README.md:
--------------------------------------------------------------------------------
1 | # DriveTruth
2 | ## 1. Getting Started
3 | Install the latest version of CARLA, either from [their website](https://carla.readthedocs.io/en/0.9.11/) or [directly from the releases page on GitHub](https://github.com/carla-simulator/carla). Ensure you meet the system requirements and have plenty of hard disk space for collecting data.
4 | 
5 | Ensure you have the appropriate version of Python installed. At the time of writing, CARLA is meant to work with Python 3.7. You can use a tool like Anaconda to make sure you have the correct version of Python.
6 | 
7 | Next, install the relevant packages for your Python installation, namely NumPy, Pillow (PIL), and OpenCV (``cv2``). If any "missing package" errors come up when you run the scripts, simply install the missing package.
8 | 
9 | Finally, create a new folder inside the ``PythonAPI`` folder of your CARLA installation and name it whatever you want. Place the ``collect_dt_data.py`` and ``carla_vehicle_annotator.py`` files in that folder.
10 | 
11 | To run, start the CARLA simulator in one shell [(see here for detailed instructions)](https://carla.readthedocs.io/en/0.9.11/start_quickstart/#running-carla) and, in another shell, run ``collect_dt_data.py``. Parameters are listed below.
12 | 
13 | ## 2. Parameters
14 | 
15 | The following parameters can be passed to ``collect_dt_data.py``.
16 | ### ``--host``
17 | The IP of the host server (your CARLA instance), 127.0.0.1 by default. Don't change this unless you're sure that CARLA is running on a different IP!
18 | 
19 | ### ``--port, -p``
20 | 
21 | The TCP port to listen to for CARLA. Defaults to 2000. Again, don't change this unless you're sure CARLA is broadcasting on a different port!
22 | 
23 | ### ``--number-of-vehicles, -n``
24 | 
25 | The number of vehicles the simulation will attempt to spawn, provided the map has enough spawn points. Defaults to 100.
26 | 
27 | ### ``--number-of-walkers, -w``
28 | 
29 | The number of pedestrians the simulation will attempt to spawn. Defaults to 50.
30 | 
31 | ### ``--tm_port, -tm_p``
32 | 
33 | The port used to communicate with CARLA's Traffic Manager. Defaults to 8000. Don't change this unless you're sure Traffic Manager is running on a different port!
34 | 
35 | ### ``--out, -o``
36 | 
37 | The output location of the generated dataset. Defaults to ``output/<timestamp>/`` under the directory you run ``collect_dt_data.py`` from.
38 | 
39 | ### ``--lidar``
40 | 
41 | If ``--lidar y`` (case sensitive) is given, the application will also export the raw LIDAR data as a 3D point cloud (one ``.ply`` file per captured frame).
42 | 
43 | ### ``--fps``
44 | The rate at which data is collected. Sensor data is captured and exported once every 1/*fps* seconds of simulation time (the simulation itself ticks at a fixed 20 FPS). Defaults to 5.
45 | 
46 | ### ``--time_limit``
47 | How many seconds each simulation run lasts before it is reset. Defaults to 200.
48 | 
49 | 
50 | ## 3. In-code Modifications
51 | ### Number of segments
52 | The ``segment_num`` and ``target_segment_limit`` variables control how many runs are generated, and the ``time_limit`` parameter controls how long a single run is. We take ``segment_num`` runs of each map type at a time, alternating between the map types, until ``target_segment_limit`` runs have been collected for each. One can modify these variables to collect as much or as little data as desired.
53 | 
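To make the interaction between these variables concrete, here is a small, self-contained sketch of the scheduling loop in ``collect_dt_data.py`` (values are shrunk for illustration, and a ``print`` stands in for the real call to ``run_simulation_instance``):

```python
# Illustrative sketch of how segment_num, target_segment_limit and map_types interact.
segment_num = 2           # runs recorded per map type before switching (script default: 100)
target_segment_limit = 6  # total runs to collect for each map type (script default: 1000)
map_types = ["residential", "highway"]

for j in range(target_segment_limit // segment_num):  # outer batches
    for mtype in map_types:                           # alternate between map types
        for i in range(segment_num):                  # segment_num runs per batch
            # the real script calls run_simulation_instance(client, args, mtype, fps, time_limit)
            print(f"batch {j}: {mtype} segment {i} -> output/<timestamp>/{j}/{mtype}_{i}/")
```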
54 | ### Maps
55 | By default, maps are classified as "residential" and "highway". One can reclassify the maps as desired by passing different map types through the ``map_types`` variable and defining an array of map names for each type inside the ``run_simulation_instance`` function. Alternatively, one can remove the map-name logic or simply pass in an invalid map type; in that case, DriveTruth will select from a pool of all available maps (automatically detecting user-made maps).
56 | 
57 | If you add a map and are using map types, be sure to add it to the relevant list under the map-type logic so that the code will use it.
58 | 
59 | ### Weather/Time of Day
60 | 
61 | By default, the weather is chosen from a list of presets contained in the ``weather_presets`` variable. Comments above this variable provide tips on how to define a custom weather distribution and time of day.
62 | 
63 | ### Cars and Pedestrians
64 | 
65 | By default, vehicles are generated from the ``blueprints`` variable, which gathers all blueprints classified under ``vehicle.*`` (by default every vehicle in CARLA, plus any user-created vehicles under that classification). ``blueprints_p`` does the same for pedestrians using ``walker.*``. If one has a specific model in mind, one can alter which blueprints are grabbed to obtain it.
66 | 
67 | ### Tracking new objects
68 | To track a new object, follow the template provided by the traffic signs and traffic lights, labelled by ``TRACKING STEP`` in the code.
69 | 
70 | First, figure out the numeric semantic tag of the object(s) you would like to track. The paper lists them, but you can also find the most up-to-date list [in the CARLA docs here](https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-segmentation-camera). If you define a new semantic tag within CARLA, you can pass it in as well.
71 | 
72 | For tracking step 1, we get the 3D bounding boxes of the object: simply call ``point_cloud_to_3d_bbox`` on ``sem_img``, passing in the semantic tags of the object(s) you wish to track as an array and optionally specifying a debug filename and debug flag. For example, for traffic signs the array ``[12]`` is passed, with 12 being the semantic tag for traffic signs.
73 | 
74 | For tracking step 2, follow the existing code, but substitute in the bounding boxes you got from tracking step 1: add their keys to a list and pass that list into the snap processing. When calling the semantic auto-annotate function, be sure to change ``gt_class`` to the class you want this object classified as.
75 | 
76 | Finally, for tracking step 3, simply merge the results into the final dictionary.
77 | 
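As a concrete illustration, here is what the three tracking steps could look like for a hypothetical new object class, modelled directly on the existing traffic-sign/traffic-light code inside the capture loop of ``collect_dt_data.py``. ``POLE_TAG`` and the class name ``"Pole"`` are placeholders for this example; look up the correct numeric tag for your CARLA version.

```python
# Hypothetical example: tracking poles alongside the existing signs and lights.
# This snippet belongs inside the per-frame capture loop, next to the traffic-sign block.
POLE_TAG = 5  # placeholder tag -- verify against the semantic tags of your CARLA version
debug_fname_poles = str(args.out) + 'debug/frame_' + str(nowFrame) + '_poles.txt'

# TRACKING STEP 1: 3D bounding boxes from the semantic lidar point cloud
pole_bboxes = point_cloud_to_3d_bbox(sem_img, [POLE_TAG], debug_fname_poles)

# TRACKING STEP 2: attach the boxes to snapshot actors and annotate them
if pole_bboxes:  # not empty
    pole_ids = [int(key) for key in pole_bboxes.keys()]
    poles = cva.snap_processing_manual_bbox(pole_ids, snap, pole_bboxes)
    pole_data = cva.semantic_auto_annotate(poles, cam, lidar_img, ego_cur_speed,
                                           ego_cur_position, max_dist=100, min_detect=5,
                                           show_img=None, gt_class="Pole")
else:
    pole_data = {}

# TRACKING STEP 3: merge into the final per-frame dictionary
# (assumes out_data has already been built by the existing TRACKING STEP 3 merge)
out_data = {**out_data, **pole_data}
```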
78 | ### Modifying stored info / Storing 3D bounding boxes
79 | Tracked information is handled in the ``semantic_auto_annotate`` (static objects) and ``auto_annotate_lidar_process`` (dynamic objects) functions. You can modify these functions to store different data. For example, to store 3D bounding boxes, call ``get_bounding_box(o, camera, hwc)``, with ``hwc`` set to ``False`` for dynamic objects and ``True`` for static objects, then store the result in the output dictionary.
80 | 
81 | ## 4. CARLA Modifications
82 | 
83 | Because DriveTruth is built on top of CARLA, modifications to CARLA are compatible with DriveTruth. This means that the user can add new maps, vehicles, pedestrians, objects, semantic tags, and more, and have DriveTruth work with them, allowing the dataset to be tailored to specific applications.
84 | 
85 | [The CARLA docs provide many tutorials on modifying the engine](https://carla.readthedocs.io/en/latest/). Depending on what you change, you may or may not have to modify DriveTruth to account for it.
86 | 
87 | - If you add new vehicles or pedestrians and they are classified under the same blueprints as their default counterparts (``vehicle.*`` or ``walker.*`` respectively), no changes need to be made. See Section 3, "Cars and Pedestrians", if you want to change the distribution of models or only use a specific model.
88 | - If you add a new map, by default you need to classify it as "residential" or "highway" and add it to the corresponding list of maps. If you do not use map types, this is not needed. See Section 3, "Maps", for how to change how DriveTruth uses maps.
89 | - If you add new objects to maps, no changes are needed as long as the map loads.
90 | - If you add new semantic tags, no changes are needed unless you override the existing semantic tags for traffic signs and traffic lights. In that case, you may need to update the semantic tags for each within the code, or comment them out if you don't want them.
91 | 
92 | 
--------------------------------------------------------------------------------
/carla_vehicle_annotator.py:
--------------------------------------------------------------------------------
1 | ### Functions to convert 3D point clouds to 2D bboxes, adapted from code by Mukhlas Adib and CARLA's client_bounding_boxes.py
2 | 
3 | ### CARLA Simulator and client_bounding_boxes.py are licensed under the terms of the MIT license
4 | ### For a copy, see <https://opensource.org/licenses/MIT>
5 | ### For more information about CARLA Simulator, visit https://carla.org/
6 | 
7 | import numpy as np
8 | import PIL
9 | from PIL import Image
10 | from PIL import ImageDraw
11 | import json
12 | import pickle
13 | import os
14 | import glob
15 | import sys
16 | import cv2
17 | import carla
18 | 
19 | 
20 | # Special function that computes attributes based purely on the semantically-calculated bounding boxes
21 | def semantic_auto_annotate(objects, camera, lidar_data, ego_velocity, ego_location, max_dist = 100, min_detect = 5, show_img = None, gt_class = None):
22 |     filtered_data = filter_lidar(lidar_data, camera, max_dist)
23 |     if show_img is not None:
24 |         show_lidar(filtered_data, camera, show_img)
25 | 
26 |     ### Delete this section if the object_idx issue has been fixed in CARLA
27 |     filtered_data = np.array([p for p in filtered_data if p.object_idx != 0])
28 |     filtered_data = get_points_id(filtered_data, objects, camera, max_dist)
29 |     ###
30 | 
31 |     visible_id, idx_counts = np.unique([p.object_idx for p in filtered_data], return_counts=True)
32 |     visible_objects = [v for v in objects if v.id in visible_id]
33 |     visible_objects = [v for v in visible_objects if idx_counts[(visible_id == v.id).nonzero()[0]] >= min_detect] # min_detect controls the minimum number of lidar points needed to "see" an object, allowing occluded or distant objects to remain untracked
34 |     # Now we build a dictionary storing the objects
35 |     annotated_dict = {}
36 |     for o in visible_objects:
37 |         object_bbox = get_2d_bb(o, camera, True) # We set hasWorldCoords to True, because we do have a separate location for these objects! 
38 | # Process this into left, up, right, bottom 39 | object_bbox = [object_bbox[0][0], object_bbox[0][1], object_bbox[1][0], object_bbox[1][1]] 40 | if gt_class is not None: 41 | object_class = gt_class 42 | else: 43 | object_class = "None" 44 | # Get relative velocity 45 | object_velocity = o.get_velocity() 46 | object_relative_velocity = object_velocity - ego_velocity 47 | # Get distance (This is mainly where the function differs from the below one) 48 | object_location = o.loc_given 49 | object_distance = object_location.distance(ego_location) 50 | annotated_dict[str(o.id)] = {'bbox': object_bbox, 'location': object_location, 'class': object_class, 'rel_velocity': object_relative_velocity, 'distance': object_distance} 51 | 52 | return annotated_dict 53 | # Collect the data in a single frame 54 | def auto_annotate_lidar_process(objects, camera, lidar_data, ego_velocity, ego_location, max_dist = 100, min_detect = 5, show_img = None, gt_class = None): 55 | filtered_data = filter_lidar(lidar_data, camera, max_dist) 56 | if show_img != None: 57 | show_lidar(filtered_data, camera, show_img) 58 | 59 | ### Delete this section if object_idx issue has been fixed in CARLA 60 | filtered_data = np.array([p for p in filtered_data if p.object_idx != 0]) 61 | filtered_data = get_points_id(filtered_data, objects, camera, max_dist) 62 | ### 63 | 64 | visible_id, idx_counts = np.unique([p.object_idx for p in filtered_data], return_counts=True) 65 | visible_objects = [v for v in objects if v.id in visible_id] 66 | visible_objects= [v for v in objects if idx_counts[(visible_id == v.id).nonzero()[0]] >= min_detect] 67 | # Now we have a dictionary storing the object 68 | annotated_dict = {} 69 | for o in visible_objects: 70 | object_bbox = get_2d_bb(o, camera) 71 | # Process this into left, up, right, bottom 72 | object_bbox = [object_bbox[0][0], object_bbox[0][1], object_bbox[1][0], object_bbox[1][1]] 73 | if gt_class is not None: 74 | object_class = gt_class 75 | else: 76 | object_class = "None" 77 | # Get relative velocity 78 | object_velocity = o.get_velocity() 79 | object_relative_velocity = object_velocity - ego_velocity 80 | # Get distance 81 | object_location = o.get_transform().location 82 | object_distance = object_location.distance(ego_location) 83 | annotated_dict[str(o.id)] = {'bbox': object_bbox, 'location': object_location, 'class': object_class, 'rel_velocity': object_relative_velocity, 'distance': object_distance} 84 | 85 | return annotated_dict 86 | 87 | 88 | ### From Mukhlas Adib, this function can debug occlusion filtering via depth image 89 | def auto_annotate_debug(vehicles, camera, depth_img, depth_show=False, max_dist=100, depth_margin=-1, patch_ratio=0.5, resize_ratio=0.5, json_path=None): 90 | vehicles = filter_angle_distance(vehicles, camera, max_dist) 91 | bounding_boxes_2d = [get_2d_bb(vehicle, camera) for vehicle in vehicles] 92 | if json_path is not None: 93 | vehicle_class = get_vehicle_class(vehicles, json_path) 94 | else: 95 | vehicle_class = [] 96 | filtered_out, removed_out, depth_area, depth_show = filter_occlusion_bbox(bounding_boxes_2d, vehicles, camera, depth_img, vehicle_class, depth_show, depth_margin, patch_ratio, resize_ratio) 97 | return filtered_out, removed_out, depth_area, depth_show 98 | 99 | 100 | 101 | ### Get camera intrinsic matrix 'k' 102 | def get_camera_intrinsic(sensor): 103 | VIEW_WIDTH = int(sensor.attributes['image_size_x']) 104 | VIEW_HEIGHT = int(sensor.attributes['image_size_y']) 105 | VIEW_FOV = int(float(sensor.attributes['fov'])) 106 | 
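    # Pinhole intrinsics: principal point at the image centre, focal length in pixels
    # derived from the horizontal field of view.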
calibration = np.identity(3) 107 | calibration[0, 2] = VIEW_WIDTH / 2.0 108 | calibration[1, 2] = VIEW_HEIGHT / 2.0 109 | calibration[0, 0] = calibration[1, 1] = VIEW_WIDTH / (2.0 * np.tan(VIEW_FOV * np.pi / 360.0)) 110 | return calibration 111 | 112 | ### Extract bounding box vertices of vehicle 113 | def create_bb_points(vehicle): 114 | cords = np.zeros((8, 4)) 115 | extent = vehicle.bounding_box.extent 116 | cords[0, :] = np.array([extent.x, extent.y, -extent.z, 1]) 117 | cords[1, :] = np.array([-extent.x, extent.y, -extent.z, 1]) 118 | cords[2, :] = np.array([-extent.x, -extent.y, -extent.z, 1]) 119 | cords[3, :] = np.array([extent.x, -extent.y, -extent.z, 1]) 120 | cords[4, :] = np.array([extent.x, extent.y, extent.z, 1]) 121 | cords[5, :] = np.array([-extent.x, extent.y, extent.z, 1]) 122 | cords[6, :] = np.array([-extent.x, -extent.y, extent.z, 1]) 123 | cords[7, :] = np.array([extent.x, -extent.y, extent.z, 1]) 124 | return cords 125 | 126 | ### Get transformation matrix from carla.Transform object 127 | def get_matrix(transform): 128 | rotation = transform.rotation 129 | location = transform.location 130 | c_y = np.cos(np.radians(rotation.yaw)) 131 | s_y = np.sin(np.radians(rotation.yaw)) 132 | c_r = np.cos(np.radians(rotation.roll)) 133 | s_r = np.sin(np.radians(rotation.roll)) 134 | c_p = np.cos(np.radians(rotation.pitch)) 135 | s_p = np.sin(np.radians(rotation.pitch)) 136 | matrix = np.matrix(np.identity(4)) 137 | matrix[0, 3] = location.x 138 | matrix[1, 3] = location.y 139 | matrix[2, 3] = location.z 140 | matrix[0, 0] = c_p * c_y 141 | matrix[0, 1] = c_y * s_p * s_r - s_y * c_r 142 | matrix[0, 2] = -c_y * s_p * c_r - s_y * s_r 143 | matrix[1, 0] = s_y * c_p 144 | matrix[1, 1] = s_y * s_p * s_r + c_y * c_r 145 | matrix[1, 2] = -s_y * s_p * c_r + c_y * s_r 146 | matrix[2, 0] = s_p 147 | matrix[2, 1] = -c_p * s_r 148 | matrix[2, 2] = c_p * c_r 149 | return matrix 150 | 151 | ### Transform coordinate from vehicle reference to world reference 152 | def vehicle_to_world(cords, vehicle, hasWorldCoords=False): 153 | bb_transform = carla.Transform(vehicle.bounding_box.location) 154 | bb_vehicle_matrix = get_matrix(bb_transform) 155 | if hasWorldCoords: 156 | v_trans = carla.Transform(vehicle.loc_given, vehicle.get_transform().rotation) # I set it to the given location but keep the rotation 157 | else: 158 | v_trans = vehicle.get_transform() 159 | vehicle_world_matrix = get_matrix(v_trans) 160 | bb_world_matrix = np.dot(vehicle_world_matrix, bb_vehicle_matrix) 161 | world_cords = np.dot(bb_world_matrix, np.transpose(cords)) 162 | return world_cords 163 | 164 | ### Transform coordinate from world reference to sensor reference 165 | def world_to_sensor(cords, sensor): 166 | sensor_world_matrix = get_matrix(sensor.get_transform()) 167 | world_sensor_matrix = np.linalg.inv(sensor_world_matrix) 168 | sensor_cords = np.dot(world_sensor_matrix, cords) 169 | return sensor_cords 170 | 171 | ### Transform coordinate from vehicle reference to sensor reference 172 | def vehicle_to_sensor(cords, vehicle, sensor, hasWorldCoords = False): 173 | world_cord = vehicle_to_world(cords, vehicle, hasWorldCoords) 174 | sensor_cord = world_to_sensor(world_cord, sensor) 175 | return sensor_cord 176 | 177 | ### Summarize bounding box creation and project the poins in sensor image 178 | def get_bounding_box(vehicle, sensor, hasWorldCoords=False): 179 | camera_k_matrix = get_camera_intrinsic(sensor) 180 | bb_cords = create_bb_points(vehicle) 181 | cords_x_y_z = vehicle_to_sensor(bb_cords, vehicle, sensor, 
hasWorldCoords)[:3, :] 182 | # Trying to see if not negating the z axis (which I already account for) works better 183 | if hasWorldCoords: 184 | cords_y_minus_z_x = np.concatenate([cords_x_y_z[1, :], cords_x_y_z[2, :], cords_x_y_z[0, :]]) 185 | else: 186 | cords_y_minus_z_x = np.concatenate([cords_x_y_z[1, :], -cords_x_y_z[2, :], cords_x_y_z[0, :]]) 187 | bbox = np.transpose(np.dot(camera_k_matrix, cords_y_minus_z_x)) 188 | camera_bbox = np.concatenate([bbox[:, 0] / bbox[:, 2], bbox[:, 1] / bbox[:, 2], bbox[:, 2]], axis=1) 189 | return camera_bbox 190 | 191 | ### Draw 2D bounding box (4 vertices) from 3D bounding box (8 vertices) in image 192 | ### 2D bounding box is represented by two corner points 193 | def p3d_to_p2d_bb(p3d_bb): 194 | min_x = np.amin(p3d_bb[:,0]) 195 | min_y = np.amin(p3d_bb[:,1]) 196 | max_x = np.amax(p3d_bb[:,0]) 197 | max_y = np.amax(p3d_bb[:,1]) 198 | p2d_bb = np.array([[min_x,min_y] , [max_x,max_y]]) 199 | return p2d_bb 200 | 201 | ### Summarize 2D bounding box creation 202 | def get_2d_bb(vehicle, sensor, hasWorldCoords=False): 203 | p3d_bb = get_bounding_box(vehicle, sensor, hasWorldCoords) 204 | p2d_bb = p3d_to_p2d_bb(p3d_bb) 205 | return p2d_bb 206 | 207 | ### Use these functions to remove invisible vehicles 208 | ### Get numpy 2D array of vehicles' location and rotation from world reference, also locations from sensor reference 209 | def get_list_transform(vehicles_list, sensor): 210 | t_list = [] 211 | for vehicle in vehicles_list: 212 | v = vehicle.get_transform() 213 | transform = [v.location.x , v.location.y , v.location.z , v.rotation.roll , v.rotation.pitch , v.rotation.yaw] 214 | t_list.append(transform) 215 | t_list = np.array(t_list).reshape((len(t_list),6)) 216 | 217 | transform_h = np.concatenate((t_list[:,:3],np.ones((len(t_list),1))),axis=1) 218 | sensor_world_matrix = get_matrix(sensor.get_transform()) 219 | world_sensor_matrix = np.linalg.inv(sensor_world_matrix) 220 | transform_s = np.dot(world_sensor_matrix, transform_h.T).T 221 | 222 | return t_list , transform_s 223 | 224 | ### Remove vehicles that are not in the FOV of the sensor 225 | def filter_angle(vehicles_list, v_transform, v_transform_s, sensor): 226 | attr_dict = sensor.attributes 227 | VIEW_FOV = float(attr_dict['fov']) 228 | v_angle = np.arctan2(v_transform_s[:,1],v_transform_s[:,0]) * 180 / np.pi 229 | 230 | selector = np.array(np.absolute(v_angle) < (int(VIEW_FOV)/2)) 231 | vehicles_list_f = [v for v, s in zip(vehicles_list, selector) if s] 232 | v_transform_f = v_transform[selector[:,0],:] 233 | v_transform_s_f = v_transform_s[selector[:,0],:] 234 | return vehicles_list_f , v_transform_f , v_transform_s_f 235 | 236 | ### Remove vehicles that have distance > max_dist from the sensor 237 | def filter_distance(vehicles_list, v_transform, v_transform_s, sensor, max_dist=100): 238 | s = sensor.get_transform() 239 | s_transform = np.array([s.location.x , s.location.y , s.location.z]) 240 | dist2 = np.sum(np.square(v_transform[:,:3] - s_transform), axis=1) 241 | selector = dist2 < (max_dist**2) 242 | vehicles_list_f = [v for v, s in zip(vehicles_list, selector) if s] 243 | v_transform_f = v_transform[selector,:] 244 | v_transform_s_f = v_transform_s[selector,:] 245 | return vehicles_list_f , v_transform_f , v_transform_s_f 246 | 247 | ### Remove vehicles that are occluded from the sensor view based on one point depth measurement 248 | ### NOT USED by default because of the unstable result 249 | def filter_occlusion_1p(vehicles_list, v_transform, v_transform_s, sensor, depth_img, 
depth_margin=2.0): 250 | camera_k_matrix = get_camera_intrinsic(sensor) 251 | CAM_W = int(sensor.attributes['image_size_x']) 252 | CAM_H = int(sensor.attributes['image_size_y']) 253 | pos_x_y_z = v_transform_s.T 254 | pos_y_minus_z_x = np.concatenate([pos_x_y_z[1, :], -pos_x_y_z[2, :]-0.0, pos_x_y_z[0, :]]) 255 | img_pos = np.transpose(np.dot(camera_k_matrix, pos_y_minus_z_x)) 256 | camera_pos = np.concatenate([img_pos[:, 0] / img_pos[:, 2], img_pos[:, 1] / img_pos[:, 2], img_pos[:, 2]], axis=1) 257 | 258 | u_arr = np.array(camera_pos[:,0]).flatten() 259 | v_arr = np.array(camera_pos[:,1]).flatten() 260 | dist = np.array(v_transform_s[:,0]).flatten() 261 | 262 | depth_patches = [] 263 | v_depth = [] 264 | for u, v in zip(list(u_arr),list(v_arr)): 265 | if u<=CAM_W and v<=CAM_H: 266 | v_depth.append(depth_img[int(v),int(u)]) 267 | depth_a = np.array([[int(u)-3,int(v)-3] , [int(u)+3,int(v)+3]]) 268 | depth_patches.append(depth_a) 269 | else: 270 | v_depth.append(0) 271 | v_depth = np.array(v_depth) 272 | 273 | selector = (dist-v_depth) < depth_margin 274 | vehicles_list_f = [v for v, s in zip(vehicles_list, selector) if s] 275 | v_transform_f = v_transform[selector,:] 276 | v_transform_s_f = v_transform_s[selector,:] 277 | return vehicles_list_f , v_transform_f , v_transform_s_f, depth_patches 278 | 279 | ### Apply angle and distance filters in one function 280 | def filter_angle_distance(vehicles_list, sensor, max_dist=100): 281 | vehicles_transform , vehicles_transform_s = get_list_transform(vehicles_list, sensor) 282 | vehicles_list , vehicles_transform , vehicles_transform_s = filter_distance(vehicles_list, vehicles_transform, vehicles_transform_s, sensor, max_dist) 283 | vehicles_list , vehicles_transform , vehicles_transform_s = filter_angle(vehicles_list, vehicles_transform, vehicles_transform_s, sensor) 284 | return vehicles_list 285 | 286 | ### Apply occlusion filter based on resized bounding box depth values 287 | def filter_occlusion_bbox(bounding_boxes, vehicles, sensor, depth_img, v_class=None, depth_capture=False, depth_margin=-1, patch_ratio=0.5, resize_ratio=0.5): 288 | filtered_bboxes = [] 289 | filtered_vehicles = [] 290 | filtered_v_class = [] 291 | filtered_out = {} 292 | removed_bboxes = [] 293 | removed_vehicles = [] 294 | removed_v_class = [] 295 | removed_out = {} 296 | selector = [] 297 | patches = [] 298 | patch_delta = [] 299 | _, v_transform_s = get_list_transform(vehicles, sensor) 300 | 301 | for v, vs, bbox in zip(vehicles,v_transform_s,bounding_boxes): 302 | dist = vs[:,0] 303 | if depth_margin < 0: 304 | depth_margin = (v.bounding_box.extent.x**2+v.bounding_box.extent.y**2)**0.5 + 0.25 305 | uc = int((bbox[0,0]+bbox[1,0])/2) 306 | vc = int((bbox[0,1]+bbox[1,1])/2) 307 | wp = int((bbox[1,0]-bbox[0,0])*resize_ratio/2) 308 | hp = int((bbox[1,1]-bbox[0,1])*resize_ratio/2) 309 | u1 = uc-wp 310 | u2 = uc+wp 311 | v1 = vc-hp 312 | v2 = vc+hp 313 | depth_patch = np.array(depth_img[v1:v2,u1:u2]) 314 | dist_delta = dist-depth_patch 315 | s_patch = np.array(dist_delta < depth_margin) 316 | s = np.sum(s_patch) > s_patch.shape[0]*patch_ratio 317 | selector.append(s) 318 | patches.append(np.array([[u1,v1],[u2,v2]])) 319 | patch_delta.append(dist_delta) 320 | 321 | for bbox,v,s in zip(bounding_boxes,vehicles,selector): 322 | if s: 323 | filtered_bboxes.append(bbox) 324 | filtered_vehicles.append(v) 325 | else: 326 | removed_bboxes.append(bbox) 327 | removed_vehicles.append(v) 328 | filtered_out['bbox']=filtered_bboxes 329 | filtered_out['vehicles']=filtered_vehicles 330 | 
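    # 'filtered_out' keeps the boxes/vehicles that passed the depth-consistency check above;
    # 'removed_out' below collects the ones judged occluded.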
removed_out['bbox']=removed_bboxes 331 | removed_out['vehicles']=removed_vehicles 332 | 333 | if v_class is not None: 334 | for cls,s in zip(v_class,selector): 335 | if s: 336 | filtered_v_class.append(cls) 337 | else: 338 | removed_v_class.append(cls) 339 | filtered_out['class']=filtered_v_class 340 | removed_out['class']=removed_v_class 341 | 342 | if depth_capture: 343 | depth_debug(patches, depth_img, sensor) 344 | for i,matrix in enumerate(patch_delta): 345 | print("\nvehicle "+ str(i) +": \n" + str(matrix)) 346 | depth_capture = False 347 | 348 | return filtered_out, removed_out, patches, depth_capture 349 | 350 | ### Display area in depth image where measurement values are taken 351 | def depth_debug(depth_patches, depth_img, sensor): 352 | CAM_W = int(sensor.attributes['image_size_x']) 353 | CAM_H = int(sensor.attributes['image_size_y']) 354 | #depth_img = depth_img/1000*255 355 | depth_img = np.log10(depth_img) 356 | depth_img = depth_img*255/4 357 | depth_img 358 | depth_3ch = np.zeros((CAM_H,CAM_W,3)) 359 | depth_3ch[:,:,0] = depth_3ch[:,:,1] = depth_3ch[:,:,2] = depth_img 360 | depth_3ch = np.uint8(depth_3ch) 361 | image = Image.fromarray(depth_3ch, 'RGB') 362 | img_draw = ImageDraw.Draw(image) 363 | for crop in depth_patches: 364 | u1 = int(crop[0,0]) 365 | v1 = int(crop[0,1]) 366 | u2 = int(crop[1,0]) 367 | v2 = int(crop[1,1]) 368 | crop_bbox = [(u1,v1),(u2,v2)] 369 | img_draw.rectangle(crop_bbox, outline ="red") 370 | image.show() 371 | 372 | ### Filter out lidar points that are outside camera FOV 373 | def filter_lidar(lidar_data, camera, max_dist): 374 | CAM_W = int(camera.attributes['image_size_x']) 375 | CAM_H = int(camera.attributes['image_size_y']) 376 | CAM_HFOV = float(camera.attributes['fov']) 377 | CAM_VFOV = np.rad2deg(2*np.arctan(np.tan(np.deg2rad(CAM_HFOV/2))*CAM_H/CAM_W)) 378 | lidar_points = np.array([[p.point.y,-p.point.z,p.point.x] for p in lidar_data]) 379 | 380 | dist2 = np.sum(np.square(lidar_points), axis=1).reshape((-1)) 381 | p_angle_h = np.absolute(np.arctan2(lidar_points[:,0],lidar_points[:,2]) * 180 / np.pi).reshape((-1)) 382 | p_angle_v = np.absolute(np.arctan2(lidar_points[:,1],lidar_points[:,2]) * 180 / np.pi).reshape((-1)) 383 | 384 | selector = np.array(np.logical_and(dist2 < (max_dist**2), np.logical_and(p_angle_h < (CAM_HFOV/2), p_angle_v < (CAM_VFOV/2)))) 385 | filtered_lidar = [pt for pt, s in zip(lidar_data, selector) if s] 386 | return filtered_lidar 387 | 388 | ### Save camera image with projected lidar points for debugging purpose 389 | def show_lidar(lidar_data, camera, carla_img, path=''): 390 | lidar_np = np.array([[p.point.y,-p.point.z,p.point.x] for p in lidar_data]) 391 | cam_k = get_camera_intrinsic(camera) 392 | 393 | # Project LIDAR 3D to Camera 2D 394 | lidar_2d = np.transpose(np.dot(cam_k,np.transpose(lidar_np))) 395 | lidar_2d = (lidar_2d/lidar_2d[:,2].reshape((-1,1))).astype(int) 396 | 397 | # Visualize the result 398 | c_scale = [] 399 | for pts in lidar_data: 400 | if pts.object_idx == 0: c_scale.append(255) 401 | else: c_scale.append(0) 402 | 403 | carla_img.convert(carla.ColorConverter.Raw) 404 | img_bgra = np.array(carla_img.raw_data).reshape((carla_img.height,carla_img.width,4)) 405 | img_rgb = np.zeros((carla_img.height,carla_img.width,3)) 406 | img_rgb[:,:,0] = img_bgra[:,:,2] 407 | img_rgb[:,:,1] = img_bgra[:,:,1] 408 | img_rgb[:,:,2] = img_bgra[:,:,0] 409 | img_rgb = np.uint8(img_rgb) 410 | 411 | for p,c in zip(lidar_2d,c_scale): 412 | c = int(c) 413 | cv2.circle(img_rgb,tuple(p[:2]),1,(c,c,c),-1) 414 | filename = 
path + 'out_lidar_img/%06d.jpg' % carla_img.frame 415 | if not os.path.exists(os.path.dirname(filename)): 416 | os.makedirs(os.path.dirname(filename)) 417 | img_rgb = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2RGB) 418 | cv2.imwrite(filename, img_rgb) 419 | 420 | #Identical to show_lidar but with a different path and color method 421 | def show_sem_lidar(lidar_data, camera, carla_img, path = ''): 422 | color_conversion = {'0': (0, 0, 0), '1': (70, 70, 70), '2': (100, 40, 40), '3': (55, 90, 80), '4': (220, 20, 60), '5': (153, 153, 153), '6': (157, 234, 50), '7': (128, 64, 128), '8': (244, 35, 232), '9': (107, 142, 35), '10': (0, 0, 142), '11': (102, 102, 156), '12': (220, 220, 0), '13': (70, 130, 180), '14': (81, 0, 81), '15': (150, 100, 100), '16': (230, 150, 140), '17': (180, 165, 180), '18': (250, 170, 30), '19': (110, 190, 160), '20': (170, 120, 50), '21': (45, 60, 150), '22': (145, 170, 100)} 423 | lidar_np = np.array([[p.point.y,-p.point.z,p.point.x] for p in lidar_data]) 424 | cam_k = get_camera_intrinsic(camera) 425 | 426 | # Project LIDAR 3D to Camera 2D 427 | lidar_2d = np.transpose(np.dot(cam_k,np.transpose(lidar_np))) 428 | lidar_2d = (lidar_2d/lidar_2d[:,2].reshape((-1,1))).astype(int) 429 | 430 | # Visualize the result 431 | c_scale = [] 432 | for pts in lidar_data: 433 | c_scale.append(color_conversion[str(pts.object_tag)]) 434 | 435 | carla_img.convert(carla.ColorConverter.Raw) 436 | img_bgra = np.array(carla_img.raw_data).reshape((carla_img.height,carla_img.width,4)) 437 | img_rgb = np.zeros((carla_img.height,carla_img.width,3)) 438 | img_rgb[:,:,0] = img_bgra[:,:,2] 439 | img_rgb[:,:,1] = img_bgra[:,:,1] 440 | img_rgb[:,:,2] = img_bgra[:,:,0] 441 | img_rgb = np.uint8(img_rgb) 442 | 443 | for p,c in zip(lidar_2d,c_scale): 444 | cv2.circle(img_rgb,tuple(p[:2]),1,c,-1) 445 | filename = path + 'out_lidar_sem_img/%06d.jpg' % carla_img.frame 446 | if not os.path.exists(os.path.dirname(filename)): 447 | os.makedirs(os.path.dirname(filename)) 448 | img_rgb = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2RGB) 449 | cv2.imwrite(filename, img_rgb) 450 | 451 | ### Add actor ID of the vehcile hit by the lidar points 452 | ### Only used before the object_id issue of semantic lidar solved 453 | def get_points_id(lidar_points, vehicles, camera, max_dist): 454 | vehicles_f = filter_angle_distance(vehicles, camera, max_dist) 455 | fixed_lidar_points = [] 456 | for p in lidar_points: 457 | sensor_world_matrix = get_matrix(camera.get_transform()) 458 | pw = np.dot(sensor_world_matrix, [[p.point.x],[p.point.y],[p.point.z],[1]]) 459 | pw = carla.Location(pw[0,0],pw[1,0],pw[2,0]) 460 | for v in vehicles_f: 461 | if v.bounding_box.contains(pw, v.get_transform()): 462 | p.object_idx = v.id 463 | break 464 | fixed_lidar_points.append(p) 465 | return fixed_lidar_points 466 | 467 | 468 | ### Use this function to save just the rgb image (with and without bounding box) in a specified path format 469 | def save_output_img(carla_img, out_data, cc_rgb=carla.ColorConverter.Raw, path='', save_patched=False): 470 | # Convert class to color 471 | class_to_color_dict = {"Vehicle": (255, 0, 0), "Pedestrian": (0, 0, 255), "Traffic Sign": (220, 200, 0), "Traffic Light": (250, 170, 30)} 472 | carla_img.save_to_disk(path + 'out_rgb_raw/%06d.png' % carla_img.frame, cc_rgb) 473 | if save_patched: 474 | carla_img.convert(cc_rgb) 475 | img_bgra = np.array(carla_img.raw_data).reshape((carla_img.height,carla_img.width,4)) 476 | img_rgb = np.zeros((carla_img.height,carla_img.width,3)) 477 | 478 | img_rgb[:,:,0] = img_bgra[:,:,2] 479 | 
img_rgb[:,:,1] = img_bgra[:,:,1] 480 | img_rgb[:,:,2] = img_bgra[:,:,0] 481 | img_rgb = np.uint8(img_rgb) 482 | image = Image.fromarray(img_rgb, 'RGB') 483 | img_draw = ImageDraw.Draw(image) 484 | for obj in out_data.values(): 485 | crop = obj['bbox'] 486 | if obj['class'] in class_to_color_dict.keys(): 487 | color_str = class_to_color_dict[obj['class']] 488 | else: 489 | color_str = "black" 490 | u1 = int(crop[0]) 491 | v1 = int(crop[1]) 492 | u2 = int(crop[2]) 493 | v2 = int(crop[3]) 494 | crop_bbox = [(u1,v1),(u2,v2)] 495 | img_draw.rectangle(crop_bbox, outline =color_str) 496 | filename = path + 'out_rgb_bbox/%06d.png' % carla_img.frame 497 | if not os.path.exists(os.path.dirname(filename)): 498 | os.makedirs(os.path.dirname(filename)) 499 | image.save(filename) 500 | 501 | 502 | ### Use this function to convert depth image (carla.Image) to a depth map in meter 503 | def extract_depth(depth_img): 504 | depth_img.convert(carla.ColorConverter.Depth) 505 | depth_meter = np.array(depth_img.raw_data).reshape((depth_img.height,depth_img.width,4))[:,:,0] * 1000 / 255 506 | return depth_meter 507 | 508 | ### Use this function to get vehicle's snapshots that can be processed by auto_annotate() function. 509 | def snap_processing(vehiclesActor, worldSnap, veh_check=None): 510 | vehicles = [] 511 | for v in vehiclesActor: 512 | vid = v.id 513 | if veh_check is not None and v.id == veh_check: 514 | continue 515 | vsnap = worldSnap.find(vid) 516 | if vsnap is None: 517 | continue 518 | vsnap.bounding_box = v.bounding_box 519 | vsnap.type_id = v.type_id 520 | vehicles.append(vsnap) 521 | return vehicles 522 | 523 | def snap_processing_manual_bbox(ids, worldSnap, bboxes): 524 | actors = [] 525 | for v in ids: 526 | vid = v 527 | vsnap = worldSnap.find(vid) 528 | if vsnap is None: 529 | continue 530 | vsnap.bounding_box = bboxes[str(vid)][0] 531 | # Add this extra attribute "loc_given" to have the center 532 | vsnap.loc_given = bboxes[str(vid)][1] 533 | vsnap.type_id = "None" # We handle classes differently, we use this so we can handle static non-id'd objects like traffic lights 534 | actors.append(vsnap) 535 | return actors 536 | -------------------------------------------------------------------------------- /collect_dt_data.py: -------------------------------------------------------------------------------- 1 | ### CARLA Simulator is licensed under the terms of the MIT license 2 | ### For a copy, see 3 | ### For more information about CARLA Simulator, visit https://carla.org/ 4 | 5 | import glob 6 | import os 7 | import sys 8 | import time 9 | from datetime import datetime 10 | import csv 11 | import random 12 | import traceback 13 | 14 | try: 15 | print("Carla path: ") 16 | print(os.path.abspath('../carla/dist')) 17 | print("Version: ") 18 | print(sys.version_info.major) 19 | print(sys.version_info.minor) 20 | sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % ( 21 | sys.version_info.major, 22 | sys.version_info.minor, 23 | 'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0]) 24 | except IndexError: 25 | try: 26 | # Get the other version 27 | sys.path.append(glob.glob('../carla/dist/carla-0.9.11-py3.7-%s.egg' % ( 28 | 'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0]) 29 | except IndexError: 30 | print('carla not found') 31 | pass 32 | 33 | import carla 34 | import argparse 35 | import logging 36 | import random 37 | import queue 38 | import numpy as np 39 | from matplotlib import pyplot as plt 40 | import cv2 41 | import carla_vehicle_annotator as cva 42 | import math 43 | 
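# Script outline: main() parses the command-line arguments, then repeatedly calls
# run_simulation_instance(), which loads a map, spawns the ego vehicle, NPC traffic,
# walkers, and sensors, and writes out the annotated data for each captured frame.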
44 | 45 | def rotate_point_matrix(yaw, pitch, roll, point_arr): 46 | yaw_matrix = np.array([[math.cos(yaw), -1 * math.sin(yaw), 0], [math.sin(yaw), math.cos(yaw), 0], [0, 0, 1]]) 47 | pitch_matrix = np.array([[math.cos(pitch), 0, math.sin(pitch)], [0, 1, 0], [-1 * math.sin(pitch), 0, math.cos(pitch)]]) 48 | roll_matrix = np.array([[1, 0, 0], [0, math.cos(roll), math.sin(-1 * roll)], [0, math.sin(roll), math.cos(roll)]]) 49 | r = np.matmul(yaw_matrix, np.matmul(pitch_matrix, roll_matrix)) 50 | # Now we use this to compute the resultant point 51 | point_matrix = np.array([[float(point_arr[0])], [float(point_arr[1])], [float(point_arr[2])]]) 52 | # print("point matrix: ") 53 | # print(point_matrix) 54 | resulting_point = np.matmul(r, point_matrix) 55 | return [resulting_point[0][0], resulting_point[1][0], resulting_point[2][0]] 56 | 57 | def point_cloud_to_3d_bbox(semantic_lidar_measurement, sought_semantic_tags, debug_filename = '', debug = False): 58 | sought_object_ids = [] 59 | # First, go through points and find all of the "sought object" ids 60 | if debug: 61 | if not os.path.exists(os.path.dirname(debug_filename)): 62 | os.makedirs(os.path.dirname(debug_filename)) 63 | # 64 | debug_file = open(debug_filename, "w") 65 | debug_file.write("-------------------") 66 | debug_file.write("Seeking tags:\n") 67 | debug_file.write(', '.join([str(elem) for elem in sought_semantic_tags])) 68 | debug_file.write("\n") 69 | # debug_file.write("Semantic lidar measurements:\n") 70 | # debug_file.write(', '.join([str(detection) for detection in semantic_lidar_measurement])) 71 | debug_file.write("\n") 72 | debug_file.write("Raycast IDS:\n") 73 | for detection in semantic_lidar_measurement: 74 | if detection.object_idx not in sought_object_ids and detection.object_tag in sought_semantic_tags: 75 | if debug: 76 | debug_file.write("Object " + str(detection.object_idx) + " found with tag " + str(detection.object_tag) + "\n") 77 | sought_object_ids.append(detection.object_idx) 78 | else: 79 | if debug: 80 | debug_file.write("Detection not notable: object " + str(detection.object_idx) + " and tag " + str(detection.object_tag) + "\n") 81 | else: 82 | pass 83 | 84 | # Now we go through all the points again, getting the largest and smallest values in each axis to get 8 coordinates total, one for each corner 85 | sought_3d_bboxes = {} 86 | curr_3d_extents = {} 87 | for obj in sought_object_ids: 88 | curr_3d_extents[str(obj)] = {'x_min': None, 'x_max': None, 'y_min': None, 'y_max': None, 'z_min': None, 'z_max': None} 89 | for detection in semantic_lidar_measurement: 90 | if detection.object_idx in sought_object_ids and detection.object_tag in sought_semantic_tags: # Second condition prevents getting pole of same object 91 | # curpoint = detection.point 92 | lidar_location = semantic_lidar_measurement.transform.location 93 | lidar_rotation = semantic_lidar_measurement.transform.rotation 94 | point_arr = [detection.point.x, detection.point.y, -1 * detection.point.z] 95 | # Convert via rotation 96 | point_arr = rotate_point_matrix(math.radians(lidar_rotation.yaw), math.radians(lidar_rotation.pitch), math.radians(lidar_rotation.roll), point_arr) 97 | # print("LIDAR LOCATION (LOOKING AT OBJECT ID " + str(detection.object_idx) + ")") 98 | # print(lidar_location) 99 | # print("LIDAR ROTATION") 100 | # print(lidar_rotation) 101 | # print("DETECTION POINT COORDINATES") 102 | # print(detection.point) 103 | # print("CONVERTED COORDINATES:") 104 | # print(point_arr) 105 | curpoint = [lidar_location.x + point_arr[0], 
lidar_location.y + point_arr[1], lidar_location.z + point_arr[2]] 106 | # curpoint = [(-1 * detection.point.y) + lidar_location.x, detection.point.x + lidar_location.y, (-1 * detection.point.z) + lidar_location.z] 107 | # curpoint = transform_lidar_point(lidar_transform, point) 108 | # print("Curpoint:") 109 | # print(curpoint) 110 | if curr_3d_extents[str(detection.object_idx)]['x_min'] is None or curr_3d_extents[str(detection.object_idx)]['x_min'] > curpoint[0]: 111 | curr_3d_extents[str(detection.object_idx)]['x_min'] = curpoint[0] 112 | if curr_3d_extents[str(detection.object_idx)]['x_max'] is None or curr_3d_extents[str(detection.object_idx)]['x_max'] < curpoint[0]: 113 | curr_3d_extents[str(detection.object_idx)]['x_max'] = curpoint[0] 114 | if curr_3d_extents[str(detection.object_idx)]['y_min'] is None or curr_3d_extents[str(detection.object_idx)]['y_min'] > curpoint[1]: 115 | curr_3d_extents[str(detection.object_idx)]['y_min'] = curpoint[1] 116 | if curr_3d_extents[str(detection.object_idx)]['y_max'] is None or curr_3d_extents[str(detection.object_idx)]['y_max'] < curpoint[1]: 117 | curr_3d_extents[str(detection.object_idx)]['y_max'] = curpoint[1] 118 | if curr_3d_extents[str(detection.object_idx)]['z_min'] is None or curr_3d_extents[str(detection.object_idx)]['z_min'] > curpoint[2]: 119 | curr_3d_extents[str(detection.object_idx)]['z_min'] = curpoint[2] 120 | if curr_3d_extents[str(detection.object_idx)]['z_max'] is None or curr_3d_extents[str(detection.object_idx)]['z_max'] < curpoint[2]: 121 | curr_3d_extents[str(detection.object_idx)]['z_max'] = curpoint[2] 122 | 123 | # Now that everything has an extent, we can convert that into a bounding box 124 | for obj in sought_object_ids: 125 | extents = curr_3d_extents[str(obj)] 126 | # print("Extents for obj " + str(obj)) 127 | # print(extents) 128 | bbox_center = [(extents['x_max'] + extents['x_min'])/2, (extents['y_max'] + extents['y_min'])/2, (extents['z_max'] + extents['z_min'])/2] 129 | bbox_extent = [abs(extents['x_max'] - extents['x_min'])/2, abs(extents['y_max'] - extents['y_min'])/2, abs(extents['z_max'] - extents['z_min'])/2] 130 | cent = carla.Location(bbox_center[0], bbox_center[1], bbox_center[2]) 131 | ext = carla.Vector3D(bbox_extent[0], bbox_extent[1], bbox_extent[2]) 132 | # We now set the center location to 0 because the bounding box calculation expects a point RELATIVE to the object it's attached to 133 | rel_cent = carla.Location(0, 0, 0) 134 | bbox = carla.BoundingBox(rel_cent, ext) 135 | sought_3d_bboxes[str(obj)] = [bbox, cent] 136 | 137 | if debug: 138 | debug_file.write("Returning sought boxes: \n") 139 | for key, value in sought_3d_bboxes.items(): 140 | debug_file.write(key + "\n") 141 | # debug_file.write(sought_3d_bboxes) 142 | debug_file.close() 143 | return sought_3d_bboxes 144 | 145 | 146 | def retrieve_data(sensor_queue, frame, timeout=0.2): # Set to 1/5th of a second 147 | while True: 148 | try: 149 | data = sensor_queue.get(True,timeout) 150 | except queue.Empty: 151 | return None 152 | if data.frame == frame: 153 | return data 154 | 155 | 156 | def main(): 157 | 158 | argparser = argparse.ArgumentParser( 159 | description=__doc__) 160 | argparser.add_argument( 161 | '--host', 162 | metavar='H', 163 | default='127.0.0.1', 164 | help='IP of the host server (default: 127.0.0.1)') 165 | argparser.add_argument( 166 | '-p', '--port', 167 | metavar='P', 168 | default=2000, 169 | type=int, 170 | help='TCP port to listen to (default: 2000)') 171 | argparser.add_argument( 172 | '-n', '--number-of-vehicles', 
173 | metavar='N', 174 | default=100, 175 | type=int, 176 | help='number of vehicles (default: 100)') 177 | argparser.add_argument( 178 | '-w', '--number-of-walkers', 179 | metavar='W', 180 | default=50, 181 | type=int, 182 | help='number of walkers (default: 50)') 183 | 184 | argparser.add_argument( 185 | '-tm_p', '--tm_port', 186 | metavar='P', 187 | default=8000, 188 | type=int, 189 | help='port to communicate with TM (default: 8000)') 190 | 191 | argparser.add_argument( 192 | '-o', '--out', 193 | metavar='O', 194 | default=os.getcwd() + '/output/' + datetime.now().strftime('%Y-%m-%d_%H-%M-%S') + '/', 195 | help='Default output location') 196 | 197 | argparser.add_argument( 198 | '--lidar', 199 | metavar='L', 200 | default='n', 201 | help='Y/N dump lidar data.') 202 | 203 | argparser.add_argument( 204 | '--fps', 205 | metavar='F', 206 | default=5, 207 | help='FPS to run data collection at.') 208 | 209 | argparser.add_argument( 210 | '--time_limit', 211 | metavar='T', 212 | default=200, 213 | help='How many seconds each data collection should run before the simulation is reset.') 214 | 215 | args = argparser.parse_args() 216 | 217 | client = carla.Client(args.host, args.port) 218 | client.set_timeout(10.0) 219 | 220 | fps = int(args.fps) 221 | time_limit = int(args.time_limit) # How many seconds before we re-initialize the whole thing 222 | segment_num = 100 # How many runs to record before we end 223 | target_segment_limit = 1000 # How many we should go up to 224 | # So we run for each map type 225 | 226 | map_types = ["residential", "highway"] 227 | args_out_og = args.out 228 | args_out_og2 = args.out 229 | try: 230 | for j in range(0, int(target_segment_limit/segment_num)): 231 | args_out_og = args_out_og2 + "/" + str(j) + "/" 232 | for mtype in map_types: 233 | print("Map type: " + mtype) 234 | for i in range(0, segment_num): 235 | args.out = args_out_og + mtype + "_" + str(i) + "/" 236 | print("Running segment " + str(i+1) + "/" + str(segment_num)) 237 | run_simulation_instance(client, args, mtype, fps, time_limit) 238 | except Exception: 239 | traceback.print_exc(file=sys.stdout) 240 | 241 | 242 | def run_simulation_instance(client, args, map_type, fps, time_limit): 243 | vehicles_list = [] 244 | walkers_list = [] 245 | nonvehicles_list = [] 246 | all_id = [] 247 | 248 | sim_start_time = None 249 | 250 | #Multiplier for channels/number of points in semantic lidar. Higher values mean higher resolution, but slightly longer runtime. Recommended to leave at default unless you know what you're doing. 251 | sem_res = 1 252 | 253 | try: 254 | residential_maps = ['/Game/Carla/Maps/Town01_Opt', '/Game/Carla/Maps/Town02_Opt', '/Game/Carla/Maps/Town03_Opt', '/Game/Carla/Maps/Town05_Opt', '/Game/Carla/Maps/Town06_Opt', '/Game/Carla/Maps/Town07_Opt'] 255 | highway_maps = ['/Game/Carla/Maps/Town04_Opt', '/Game/Carla/Maps/Town06_Opt'] 256 | start_time = time.time() 257 | traffic_manager = client.get_trafficmanager(args.tm_port) 258 | traffic_manager.set_global_distance_to_leading_vehicle(2.0) 259 | chosen_map = None 260 | if map_type == "residential": 261 | chosen_map = random.choice(residential_maps) 262 | elif map_type == "highway": 263 | chosen_map = random.choice(highway_maps) 264 | else: 265 | maps = client.get_available_maps() 266 | chosen_map = random.choice(maps) 267 | world = client.load_world(chosen_map) 268 | 269 | # We choose the weather and time of day from a list of available weather presets 270 | # You can also set weather manually if you so choose. 
This also allows you to set a specific distribution and time of day. See documentation: 271 | # https://carla.readthedocs.io/en/0.9.3/python_api_tutorial/#changing-the-weather 272 | 273 | weather_presets = [carla.WeatherParameters.ClearNoon, carla.WeatherParameters.CloudyNoon, carla.WeatherParameters.WetNoon, carla.WeatherParameters.WetCloudyNoon, carla.WeatherParameters.MidRainyNoon, carla.WeatherParameters.HardRainNoon, carla.WeatherParameters.SoftRainNoon, carla.WeatherParameters.ClearSunset, carla.WeatherParameters.CloudySunset, carla.WeatherParameters.WetSunset, carla.WeatherParameters.WetCloudySunset, carla.WeatherParameters.MidRainSunset, carla.WeatherParameters.HardRainSunset, carla.WeatherParameters.SoftRainSunset] 274 | chosen_weather = random.choice(weather_presets) 275 | world.set_weather(chosen_weather) 276 | print("Weather: " + str(chosen_weather)) 277 | 278 | print('\nRUNNING in synchronous mode\n') 279 | settings = world.get_settings() 280 | traffic_manager.set_synchronous_mode(True) 281 | if not settings.synchronous_mode: 282 | synchronous_master = True 283 | settings.synchronous_mode = True 284 | settings.fixed_delta_seconds = 0.05 # We'll keep this at a 20 FPS rate, just get sensor readings at a different time 285 | world.apply_settings(settings) 286 | else: 287 | synchronous_master = False 288 | 289 | blueprints = world.get_blueprint_library().filter('vehicle.*') 290 | blueprints_p = world.get_blueprint_library().filter('walker.*') 291 | 292 | spawn_points = world.get_map().get_spawn_points() 293 | number_of_spawn_points = len(spawn_points) 294 | 295 | # Spawn ego vehicle 296 | ego_bp = random.choice(blueprints) 297 | ego_transform = random.choice(spawn_points) 298 | ego_vehicle = world.spawn_actor(ego_bp, ego_transform) 299 | vehicles_list.append(ego_vehicle) 300 | ego_vehicle.set_autopilot(True) 301 | print('Ego-vehicle ready') 302 | 303 | spawn_points.remove(ego_transform) 304 | number_of_spawn_points = len(spawn_points) 305 | 306 | if args.number_of_vehicles < number_of_spawn_points: 307 | random.shuffle(spawn_points) 308 | elif args.number_of_vehicles > number_of_spawn_points: 309 | msg = 'Requested %d vehicles, but could only find %d spawn points' 310 | logging.warning(msg, args.number_of_vehicles, number_of_spawn_points) 311 | args.number_of_vehicles = number_of_spawn_points 312 | # if ratio_num_vehicles < 1: 313 | # args.number_of_vehicles = number_of_spawn_points * ratio_num_vehicles 314 | # args.number_of_pedestrians = number_of_spawn_points - args.number_of_vehicles 315 | # else: 316 | # args.number_of_pedestrians = number_of_spawn_points * (1/ratio_num_vehicles) 317 | # args.number_of_vehicles = number_of_spawn_points - args.number_of_pedestrians 318 | 319 | SpawnActor = carla.command.SpawnActor 320 | SetAutopilot = carla.command.SetAutopilot 321 | FutureActor = carla.command.FutureActor 322 | 323 | # -------------- 324 | # Spawn vehicles 325 | # -------------- 326 | batch = [] 327 | for n, transform in enumerate(spawn_points): 328 | if n >= args.number_of_vehicles: 329 | break 330 | blueprint = random.choice(blueprints) 331 | if blueprint.has_attribute('color'): 332 | color = random.choice(blueprint.get_attribute('color').recommended_values) 333 | blueprint.set_attribute('color', color) 334 | if blueprint.has_attribute('driver_id'): 335 | driver_id = random.choice(blueprint.get_attribute('driver_id').recommended_values) 336 | blueprint.set_attribute('driver_id', driver_id) 337 | blueprint.set_attribute('role_name', 'autopilot') 338 | 
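            # Queue the spawn command; autopilot is enabled for the future actor in the
            # same batch via SetAutopilot(FutureActor, True).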
batch.append(SpawnActor(blueprint, transform).then(SetAutopilot(FutureActor, True))) 339 | spawn_points.pop(0) 340 | 341 | for response in client.apply_batch_sync(batch, synchronous_master): 342 | if response.error: 343 | logging.error(response.error) 344 | else: 345 | vehicles_list.append(response.actor_id) 346 | 347 | print('Created %d npc vehicles \n' % len(vehicles_list)) 348 | 349 | # -------------- 350 | # Spawn Pedestrians 351 | # -------------- 352 | # some settings 353 | percentagePedestriansRunning = 0.0 # how many pedestrians will run 354 | percentagePedestriansCrossing = 0.0 # how many pedestrians will walk through the road 355 | # 1. take all the random locations to spawn 356 | spawn_points = [] 357 | for i in range(args.number_of_walkers): 358 | spawn_point = carla.Transform() 359 | loc = world.get_random_location_from_navigation() 360 | if (loc != None): 361 | spawn_point.location = loc 362 | spawn_points.append(spawn_point) 363 | # 2. we spawn the walker object 364 | batch = [] 365 | walker_speed = [] 366 | for spawn_point in spawn_points: 367 | walker_bp = random.choice(blueprints_p) 368 | # set as not invincible 369 | if walker_bp.has_attribute('is_invincible'): 370 | walker_bp.set_attribute('is_invincible', 'false') 371 | # set the max speed 372 | if walker_bp.has_attribute('speed'): 373 | if (random.random() > percentagePedestriansRunning): 374 | # walking 375 | walker_speed.append(walker_bp.get_attribute('speed').recommended_values[1]) 376 | else: 377 | # running 378 | walker_speed.append(walker_bp.get_attribute('speed').recommended_values[2]) 379 | else: 380 | print("Walker has no speed") 381 | walker_speed.append(0.0) 382 | batch.append(SpawnActor(walker_bp, spawn_point)) 383 | results = client.apply_batch_sync(batch, True) 384 | walker_speed2 = [] 385 | for i in range(len(results)): 386 | if results[i].error: 387 | logging.error(results[i].error) 388 | else: 389 | walkers_list.append({"id": results[i].actor_id}) 390 | walker_speed2.append(walker_speed[i]) 391 | walker_speed = walker_speed2 392 | # 3. we spawn the walker controller 393 | batch = [] 394 | walker_controller_bp = world.get_blueprint_library().find('controller.ai.walker') 395 | for i in range(len(walkers_list)): 396 | batch.append(SpawnActor(walker_controller_bp, carla.Transform(), walkers_list[i]["id"])) 397 | results = client.apply_batch_sync(batch, True) 398 | for i in range(len(results)): 399 | if results[i].error: 400 | logging.error(results[i].error) 401 | else: 402 | walkers_list[i]["con"] = results[i].actor_id 403 | # 4. we put altogether the walkers and controllers id to get the objects from their id 404 | for i in range(len(walkers_list)): 405 | all_id.append(walkers_list[i]["con"]) 406 | all_id.append(walkers_list[i]["id"]) 407 | all_actors = world.get_actors(all_id) 408 | 409 | # wait for a tick to ensure client receives the last transform of the walkers we have just created 410 | world.tick() 411 | 412 | # 5. 
initialize each controller and set target to walk to (list is [controler, actor, controller, actor ...]) 413 | # set how many pedestrians can cross the road 414 | world.set_pedestrians_cross_factor(percentagePedestriansCrossing) 415 | for i in range(0, len(all_id), 2): 416 | # start walker 417 | all_actors[i].start() 418 | # set walk to random point 419 | all_actors[i].go_to_location(world.get_random_location_from_navigation()) 420 | # max speed 421 | all_actors[i].set_max_speed(float(walker_speed[int(i/2)])) 422 | 423 | print('spawned %d walkers \n' % len(walkers_list)) 424 | # ----------------------------- 425 | # Spawn ego vehicle and sensors 426 | # ----------------------------- 427 | q_list = [] 428 | idx = 0 429 | 430 | tick_queue = queue.Queue() 431 | world.on_tick(tick_queue.put) 432 | q_list.append(tick_queue) 433 | tick_idx = idx 434 | idx = idx+1 435 | 436 | 437 | # Spawn RGB camera 438 | cam_transform = carla.Transform(carla.Location(x=1.5, z=2.4)) 439 | cam_bp = world.get_blueprint_library().find('sensor.camera.rgb') 440 | cam_bp.set_attribute('sensor_tick', str(1/fps)) 441 | cam = world.spawn_actor(cam_bp, cam_transform, attach_to=ego_vehicle) 442 | nonvehicles_list.append(cam) 443 | cam_queue = queue.Queue() 444 | cam.listen(cam_queue.put) 445 | q_list.append(cam_queue) 446 | cam_idx = idx 447 | idx = idx+1 448 | print('RGB camera ready') 449 | cam_sem_transform = carla.Transform(carla.Location(x=1.5, z=2.4)) 450 | cam_sem_bp = world.get_blueprint_library().find('sensor.camera.semantic_segmentation') 451 | cam_sem_bp.set_attribute('sensor_tick', str(1/fps)) 452 | cam_sem = world.spawn_actor(cam_sem_bp, cam_sem_transform, attach_to=ego_vehicle) 453 | nonvehicles_list.append(cam_sem) 454 | cam_sem_queue = queue.Queue() 455 | cam_sem.listen(cam_sem_queue.put) 456 | q_list.append(cam_sem_queue) 457 | cam_sem_idx = idx 458 | idx = idx+1 459 | 460 | # Spawn semantic segmentation camera 461 | 462 | # Spawn LIDAR sensor 463 | # We're using a dumbed-down semantic sensor so we don't have to worry about intensity 464 | # At the same time, it doesn't have as many points as the "real" semantic lidar to save some time 465 | lidar_bp = world.get_blueprint_library().find('sensor.lidar.ray_cast_semantic') 466 | lidar_bp.set_attribute('sensor_tick', str(1/fps)) 467 | lidar_bp.set_attribute('channels', '64') 468 | lidar_bp.set_attribute('points_per_second', '1120000') 469 | lidar_bp.set_attribute('upper_fov', '40') 470 | lidar_bp.set_attribute('lower_fov', '-40') 471 | lidar_bp.set_attribute('range', '100') 472 | lidar_bp.set_attribute('rotation_frequency', '20') 473 | lidar_transform = carla.Transform(carla.Location(x=1.5, z=2.4)) 474 | lidar = world.spawn_actor(lidar_bp, lidar_transform, attach_to=ego_vehicle) 475 | nonvehicles_list.append(lidar) 476 | lidar_queue = queue.Queue() 477 | lidar.listen(lidar_queue.put) 478 | q_list.append(lidar_queue) 479 | lidar_idx = idx 480 | idx = idx+1 481 | print('LIDAR ready') 482 | 483 | if args.lidar == 'y': # We spawn an addition lidar sensor for lidar collection 484 | t_lidar_bp = world.get_blueprint_library().find('sensor.lidar.ray_cast') 485 | t_lidar_bp.set_attribute('sensor_tick', str(1/fps)) 486 | t_lidar_bp.set_attribute('channels', '64') 487 | t_lidar_bp.set_attribute('points_per_second', '1120000') 488 | t_lidar_bp.set_attribute('upper_fov', '40') 489 | t_lidar_bp.set_attribute('lower_fov', '-40') 490 | t_lidar_bp.set_attribute('range', '100') 491 | t_lidar_bp.set_attribute('rotation_frequency', '20') 492 | t_lidar_transform = 
carla.Transform(carla.Location(x=1.5, z=2.4)) 493 | t_lidar = world.spawn_actor(t_lidar_bp, t_lidar_transform, attach_to=ego_vehicle) 494 | nonvehicles_list.append(t_lidar) 495 | t_lidar_queue = queue.Queue() 496 | t_lidar.listen(t_lidar_queue.put) 497 | q_list.append(t_lidar_queue) 498 | t_lidar_idx = idx 499 | idx = idx+1 500 | print('TRUE LIDAR ready') 501 | 502 | # Spawn semantic lidar 503 | lidar_sem_bp = world.get_blueprint_library().find('sensor.lidar.ray_cast_semantic') 504 | lidar_sem_bp.set_attribute('sensor_tick', str(1/fps)) 505 | lidar_sem_bp.set_attribute('channels', str(1024 * float(sem_res))) 506 | lidar_sem_bp.set_attribute('points_per_second', str(17920000 * float(sem_res))) # Both are set to extremely high resolutions to get objects better 507 | lidar_sem_bp.set_attribute('upper_fov', '90') 508 | lidar_sem_bp.set_attribute('lower_fov', '-90') 509 | lidar_sem_bp.set_attribute('range', '100') 510 | lidar_sem_bp.set_attribute('rotation_frequency', '20') 511 | lidar_sem_transform = carla.Transform(carla.Location(x=1.5, z=2.4)) 512 | lidar_sem = world.spawn_actor(lidar_sem_bp, lidar_sem_transform, attach_to=ego_vehicle) 513 | nonvehicles_list.append(lidar_sem) 514 | lidar_sem_queue = queue.Queue() 515 | lidar_sem.listen(lidar_sem_queue.put) 516 | q_list.append(lidar_sem_queue) 517 | lidar_sem_idx = idx 518 | idx = idx+1 519 | print("Semantic Segmentation camera ready") 520 | end_time = (time.time() - start_time) 521 | print("Initialization time: %.2f seconds" % end_time) 522 | start_sim_time = time.time() 523 | # Begin the loop 524 | time_sim = 0 525 | overall_time_sim = 0 526 | print("MAP:") 527 | print(chosen_map) 528 | while overall_time_sim <= time_limit: 529 | # Extract the available data 530 | nowFrame = world.tick() 531 | 532 | # Check whether it's time for sensor to capture data 533 | if time_sim >= (1/fps): # Capture at fps rate 534 | data = [retrieve_data(q,nowFrame) for q in q_list] 535 | assert all(x.frame == nowFrame for x in data if x is not None) 536 | # print("Frame: " + str(nowFrame)) 537 | # Skip if any sensor data is not available 538 | if None in data: 539 | continue 540 | 541 | vehicles_raw = world.get_actors().filter('vehicle.*') 542 | walkers_raw = world.get_actors().filter('walker.*') 543 | snap = data[tick_idx] 544 | rgb_img = data[cam_idx] 545 | lidar_img = data[lidar_idx] 546 | sem_img = data[lidar_sem_idx] 547 | cam_sem_img = data[cam_sem_idx] 548 | 549 | 550 | # Save all of the sensors for debug 551 | cva.show_lidar(cva.filter_lidar(lidar_img, cam, 100), cam, rgb_img, path=args.out) 552 | cva.show_sem_lidar(cva.filter_lidar(sem_img, cam, 100), cam, rgb_img, path=args.out) 553 | 554 | # Attach additional information to the snapshot 555 | vehicles = cva.snap_processing(vehicles_raw, snap, veh_check=ego_vehicle.id) 556 | walkers = cva.snap_processing(walkers_raw, snap) 557 | 558 | # Get header information 559 | ego_cur_speed = ego_vehicle.get_velocity() 560 | ego_cur_position = ego_vehicle.get_location() 561 | ego_veh_control = ego_vehicle.get_control() 562 | ego_throttle = str(ego_veh_control.throttle) 563 | ego_steer = str(ego_veh_control.steer) 564 | ego_brake = str(ego_veh_control.brake) 565 | ego_handbrake = str(ego_veh_control.hand_brake) 566 | ego_reverse = str(ego_veh_control.reverse) 567 | ego_manual_gear_shift = str(ego_veh_control.manual_gear_shift) 568 | ego_gear = str(ego_veh_control.gear) 569 | cam_rotation = cam.get_transform().rotation 570 | # Calculating visible bounding boxes 571 | # filtered_out,_ = 
                vehicle_data = cva.auto_annotate_lidar_process(vehicles, cam, lidar_img, ego_cur_speed, ego_cur_position, max_dist=100, min_detect=5, show_img=None, gt_class="Vehicle")
                walker_data = cva.auto_annotate_lidar_process(walkers, cam, lidar_img, ego_cur_speed, ego_cur_position, max_dist=100, min_detect=5, show_img=None, gt_class="Pedestrian")

                # We can get the rest of the data from the semantic LIDAR
                debug_fname = str(args.out) + 'debug/frame_' + str(nowFrame) + '_signs.txt'
                debug_fname2 = str(args.out) + 'debug/frame_' + str(nowFrame) + '_lights.txt'
                ## !!TRACKING STEP 1
                traffic_signs_bboxes = point_cloud_to_3d_bbox(sem_img, [12], debug_fname)
                ## !!END TRACKING STEP 1
                traffic_lights_bboxes = point_cloud_to_3d_bbox(sem_img, [18], debug_fname2)

                ## !!TRACKING STEP 2
                if traffic_signs_bboxes:  # not empty
                    traffic_sign_ids = []
                    for key in traffic_signs_bboxes.keys():
                        traffic_sign_ids.append(int(key))
                    traffic_signs = cva.snap_processing_manual_bbox(traffic_sign_ids, snap, traffic_signs_bboxes)
                    traffic_signs_data = cva.semantic_auto_annotate(traffic_signs, cam, lidar_img, ego_cur_speed, ego_cur_position, max_dist=100, min_detect=5, show_img=None, gt_class="Traffic Sign")
                else:
                    traffic_signs_data = {}
                ## !!END TRACKING STEP 2

                if traffic_lights_bboxes:
                    traffic_light_ids = []
                    for key in traffic_lights_bboxes.keys():
                        traffic_light_ids.append(int(key))
                    traffic_lights = cva.snap_processing_manual_bbox(traffic_light_ids, snap, traffic_lights_bboxes)
                    traffic_lights_data = cva.semantic_auto_annotate(traffic_lights, cam, lidar_img, ego_cur_speed, ego_cur_position, max_dist=100, min_detect=5, show_img=None, gt_class="Traffic Light")
                else:
                    traffic_lights_data = {}

                # Merge all of the data into a single dictionary
                ## !!TRACKING STEP 3
                out_data = {**vehicle_data, **walker_data, **traffic_signs_data, **traffic_lights_data}
                ## !!END TRACKING STEP 3

                # Save the results
                cva.save_output_img(rgb_img, out_data, path=args.out, save_patched=True)
                cam_sem_img.save_to_disk(args.out + "out_sem_cam/%06d.png" % cam_sem_img.frame, carla.ColorConverter.CityScapesPalette)

                # Now we save the pertinent information in a dedicated per-frame folder
                dirname = str(args.out) + 'frame_' + str(nowFrame) + '/'
                if not os.path.exists(dirname):
                    os.makedirs(os.path.dirname(dirname))

                # Dump LIDAR measurements
                if args.lidar == 'y':
                    # Below is a method to save to CSV
                    # However, this is time-consuming
                    # Uncomment it if you want your LIDAR data in CSV and are OK with longer runtime
                    # t_lidar_img = data[t_lidar_idx]
                    # csv_columns_li = ['x', 'y', 'z', 'intensity']
                    # csv_file_li = str(dirname) + 'lidar_' + str(nowFrame) + '.csv'
                    # lidar_dump = []
                    # for point in t_lidar_img:
                    #     lidar_out = {'x': point.point.x, 'y': point.point.y, 'z': point.point.z, 'intensity': point.intensity}
                    #     lidar_dump.append(lidar_out)
                    # try:
                    #     with open(csv_file_li, 'w', newline='') as csvfile:
                    #         writer = csv.DictWriter(csvfile, fieldnames=csv_columns_li)
                    #         writer.writeheader()
                    #         for row in lidar_dump:
                    #             writer.writerow(row)
                    # except IOError:
                    #     print("I/O error")
                    t_lidar_img = data[t_lidar_idx]
                    ply_file = str(dirname) + 'lidar_' + str(nowFrame) + '.ply'
                    t_lidar_img.save_to_disk(ply_file)

                # First save header info in a txt file
                headerfile = open(str(dirname) + 'frame_' + str(nowFrame) + '.txt', "w")
                headerfile.write("Ego Vehicle Speed: " + "[" + str(ego_cur_speed.x) + ", " + str(ego_cur_speed.y) + ", " + str(ego_cur_speed.z) + "]\n")
                headerfile.write("Ego Vehicle Position: " + "[" + str(ego_cur_position.x) + ", " + str(ego_cur_position.y) + ", " + str(ego_cur_position.z) + "]\n")
                headerfile.write("Ego Vehicle Throttle: " + ego_throttle + "\n")
                headerfile.write("Ego Vehicle Steer: " + ego_steer + "\n")
                headerfile.write("Ego Vehicle Brake: " + ego_brake + "\n")
                headerfile.write("Ego Vehicle Handbrake: " + ego_handbrake + "\n")
                headerfile.write("Ego Vehicle Reverse: " + ego_reverse + "\n")
                headerfile.write("Ego Vehicle Manual Gear Shift: " + ego_manual_gear_shift + "\n")
                headerfile.write("Ego Vehicle Gear: " + ego_gear + "\n")
                headerfile.write("Camera Pitch (Degrees): " + str(cam_rotation.pitch) + "\n")
                headerfile.write("Camera Yaw (Degrees): " + str(cam_rotation.yaw) + "\n")
                headerfile.write("Camera Roll (Degrees): " + str(cam_rotation.roll) + "\n")
                headerfile.close()

                # Then save a csv file with all of the values
                csv_columns = ['id', 'bbox', 'location', 'class', 'rel_velocity', 'distance']
                csv_file = str(dirname) + 'frame_' + str(nowFrame) + '.csv'
                # Reprocess the output data so each entry carries its ID as an extra field, with everything in one row
                out_data_csv = []
                for key, value in out_data.items():
                    cur_out = {**{'id': key}, **value}
                    out_data_csv.append(cur_out)

                try:
                    with open(csv_file, 'w', newline='') as csvfile:
                        writer = csv.DictWriter(csvfile, fieldnames=csv_columns)
                        writer.writeheader()
                        for row in out_data_csv:
                            writer.writerow(row)
                except IOError:
                    print("I/O error")

                # Reset the capture timer after saving a frame
                time_sim = 0
            # Advance the simulated clocks by one fixed tick
            time_sim = time_sim + settings.fixed_delta_seconds
            overall_time_sim = overall_time_sim + settings.fixed_delta_seconds

    finally:
        if start_sim_time is not None:
            end_sim_time = (time.time() - start_sim_time)
            print("Simulation runtime: %.2f seconds" % end_sim_time)
        try:
            cam.stop()
            cam_sem.stop()
            lidar.stop()
            lidar_sem.stop()
            if args.lidar == 'y':
                t_lidar.stop()
        except:
            print('Sensors have not been initialized')

        settings = world.get_settings()
        settings.synchronous_mode = False
        settings.fixed_delta_seconds = None
        world.apply_settings(settings)

        print('\ndestroying %d vehicles' % len(vehicles_list))
        client.apply_batch([carla.command.DestroyActor(x) for x in vehicles_list])

        print('destroying %d nonvehicles' % len(nonvehicles_list))
        client.apply_batch([carla.command.DestroyActor(x) for x in nonvehicles_list])

        # stop walker controllers (list is [controller, actor, controller, actor ...])
        for i in range(0, len(all_id), 2):
            all_actors[i].stop()

        print('\ndestroying %d walkers' % len(walkers_list))
        client.apply_batch([carla.command.DestroyActor(x) for x in all_id])

        time.sleep(0.5)


if __name__ == '__main__':

    try:
        main()
    except KeyboardInterrupt:
        pass
    finally:
        print('\ndone.')
--------------------------------------------------------------------------------