├── README.md
├── weather.py
├── spawn.py
├── generate.py
└── extract.py
/README.md:
--------------------------------------------------------------------------------
1 | Automatic Ground Truth Generation in CARLA
2 | =============================================
3 |
4 | This Python API retrieves a variety of data from the CARLA Simulator and automatically generates ground truth in the Darknet YOLO format. Ground truth is provided for two object classes: vehicles and pedestrians.
5 |
6 |
7 |
8 | Installation
9 | --------------
10 | > cd [carla directory]/PythonAPI/examples
11 | > git clone https://github.com/AveesLab/CarFree.git
12 |
13 | Extraction of CARLA Data
14 | ---------------------------
15 | This API provides the RGB image, the Semantic Segmentation image, and the bounding boxes of objects from the CARLA server. In addition, it allows users to capture this information at regular intervals or on keyboard input.
16 |
17 | * Start the CARLA Server
18 | > ./CarlaUE4.sh
19 |
20 | * Spawn objects and change the weather (CARLA Python examples). Link: [https://github.com/carla-simulator/carla/tree/master/PythonAPI/examples]
21 | Spawn NPCs (vehicles and walkers), where *N* is the number of vehicles and *W* the number of walkers.
22 | > ./spawn.py -n *N* -w *W*
23 | > ./weather.py
24 |
25 | **Execute the Extraction Code**
26 | Obtain the RGB image, the Semantic Segmentation image, and the bounding boxes needed for ground truth generation from CARLA.
27 | When a period (loop *N*) is specified, data is captured at that interval. Users can start automatic capture with the 'l' key and stop it with 'Shift+L'. The 'C' key captures the current scene immediately.
28 | > ./extract.py -l *N*
29 |
30 | These data (RGB images, segmentation images, and bounding boxes) are stored in separate folders at the same time.
31 |
32 |
33 | Generation of Ground Truth for Darknet YOLO Framework
34 | ------------------------
35 | This API post-processes the data obtained above and creates ground truth for the Darknet YOLO framework.
36 | > ./generate.py
37 |
38 | Training data is stored in the 'custom_data' folder, and the results of drawing the generated bounding boxes can be seen in the 'draw_bounding_box' folder.
39 | A .txt file is created for each .png RGB image, with the same name and in the same directory. The Darknet ground truth format is shown below.
40 | > [object class] [x_center] [y_center] [bbox_width] [bbox_height]
41 |
42 | * Object class number
43 | Vehicle : 0, Person (Pedestrian) : 1
44 | * Other data
45 | All values other than the object class number are normalized to between 0 and 1.
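46 | For example, a vehicle (class 0) might be labeled as follows (an illustrative line, not taken from the repository):
47 | > 0 0.510417 0.430556 0.104167 0.087037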
46 |
47 | * In addition, the paths of the training images are saved in the 'my_data' folder in the form of a .txt file (train.txt).
48 | > custom_data/image1.png
49 | > custom_data/image2.png
50 | > custom_data/image3.png
51 |
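52 | To convert a label back to pixel coordinates, multiply the normalized values by the image size. A minimal sketch (illustrative only; assumes the default 960x540 capture resolution used by extract.py):
53 |
54 |     # Convert one Darknet label line back to pixel corner coordinates.
55 |     W, H = 960, 540
56 |     line = "0 0.510417 0.430556 0.104167 0.087037"
57 |     cls, cx, cy, bw, bh = line.split()
58 |     cx, cy = float(cx) * W, float(cy) * H
59 |     bw, bh = float(bw) * W, float(bh) * H
60 |     top_left = (int(cx - bw / 2), int(cy - bh / 2))
61 |     bottom_right = (int(cx + bw / 2), int(cy + bh / 2))
62 |     print(cls, top_left, bottom_right)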
--------------------------------------------------------------------------------
/weather.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | # Copyright (c) 2019 Computer Vision Center (CVC) at the Universitat Autonoma de
4 | # Barcelona (UAB).
5 | #
6 | # This work is licensed under the terms of the MIT license.
7 | # For a copy, see <https://opensource.org/licenses/MIT>.
8 |
9 | """
10 | CARLA Dynamic Weather:
11 |
12 | Connect to a CARLA Simulator instance and control the weather. Change Sun
13 | position smoothly with time and generate storms occasionally.
14 | """
15 |
16 | import glob
17 | import os
18 | import sys
19 |
20 | try:
21 | sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
22 | sys.version_info.major,
23 | sys.version_info.minor,
24 | 'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
25 | except IndexError:
26 | pass
27 |
28 | import carla
29 |
30 | import argparse
31 | import math
32 |
33 |
34 | def clamp(value, minimum=0.0, maximum=100.0):
35 | return max(minimum, min(value, maximum))
36 |
37 |
38 | class Sun(object):
39 | def __init__(self, azimuth, altitude):
40 | self.azimuth = azimuth
41 | self.altitude = altitude
42 | self._t = 0.0
43 |
44 | def tick(self, delta_seconds):
45 | self._t += 0.008 * delta_seconds
46 | self._t %= 2.0 * math.pi
47 | self.azimuth += 0.25 * delta_seconds
48 | self.azimuth %= 360.0
49 | self.altitude = 35.0 * (math.sin(self._t) + 1.0)
50 | if self.altitude < -80:
51 | self.altitude = -60
52 | if self.altitude > 40:
53 | self.altitude = 0
54 |
55 | def __str__(self):
56 | return 'Sun(%.2f, %.2f)' % (self.azimuth, self.altitude)
57 |
58 |
59 | class Storm(object):
60 | def __init__(self, precipitation):
61 | self._t = precipitation if precipitation > 0.0 else -50.0
62 | self._increasing = True
63 | self.clouds = 0.0
64 | self.rain = 0.0
65 | self.puddles = 0.0
66 | self.wind = 0.0
67 |
68 | def tick(self, delta_seconds):
69 | delta = (1.3 if self._increasing else -1.3) * delta_seconds
70 | self._t = clamp(delta + self._t, -250.0, 100.0)
71 |         # The +70 and +50 offsets below keep cloudiness and rain consistently high.
72 | self.clouds = clamp(self._t + 40.0, 0.0, 90.0) + 70
73 | self.rain = clamp(self._t, 30.0, 100.0) + 50
74 | delay = -10.0 if self._increasing else 90.0
75 | self.puddles = clamp(self._t + delay, 0.0, 100.0)
76 | self.wind = clamp(self._t - delay, 0.0, 100.0)
77 | if self._t == -250.0:
78 | self._increasing = True
79 | if self._t == 100.0:
80 | self._increasing = False
81 |
82 | def __str__(self):
83 | return 'Storm(clouds=%d%%, rain=%d%%, wind=%d%%)' % (self.clouds, self.rain, self.wind)
84 |
85 |
86 | class Weather(object):
87 | def __init__(self, weather):
88 | self.weather = weather
89 | self._sun = Sun(weather.sun_azimuth_angle, weather.sun_altitude_angle)
90 | self._storm = Storm(weather.precipitation)
91 |
92 | def tick(self, delta_seconds):
93 | self._sun.tick(delta_seconds)
94 | self._storm.tick(delta_seconds)
95 | self.weather.cloudyness = self._storm.clouds
96 | self.weather.precipitation = self._storm.rain
97 | self.weather.precipitation_deposits = self._storm.puddles
98 | self.weather.wind_intensity = self._storm.wind
99 | self.weather.sun_azimuth_angle = self._sun.azimuth
100 | self.weather.sun_altitude_angle = self._sun.altitude
101 |
102 | def __str__(self):
103 | return '%s %s' % (self._sun, self._storm)
104 |
105 |
106 | def main():
107 | argparser = argparse.ArgumentParser(
108 | description=__doc__)
109 | argparser.add_argument(
110 | '--host',
111 | metavar='H',
112 | default='127.0.0.1',
113 | help='IP of the host server (default: 127.0.0.1)')
114 | argparser.add_argument(
115 | '-p', '--port',
116 | metavar='P',
117 | default=2000,
118 | type=int,
119 | help='TCP port to listen to (default: 2000)')
120 | argparser.add_argument(
121 | '-s', '--speed',
122 | metavar='FACTOR',
123 | default=1.0,
124 | type=float,
125 | help='rate at which the weather changes (default: 1.0)')
126 | args = argparser.parse_args()
127 |
128 | speed_factor = args.speed
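129 |     # push a weather update whenever 0.1 / speed_factor seconds of simulation time have passed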
129 | update_freq = 0.1 / speed_factor
130 |
131 | client = carla.Client(args.host, args.port)
132 | client.set_timeout(2.0)
133 | world = client.get_world()
134 |
135 | weather = Weather(world.get_weather())
136 |
137 | elapsed_time = 0.0
138 |
139 | while True:
140 | timestamp = world.wait_for_tick(seconds=30.0).timestamp
141 | elapsed_time += timestamp.delta_seconds
142 | if elapsed_time > update_freq:
143 | weather.tick(speed_factor * elapsed_time)
144 | world.set_weather(weather.weather)
145 | sys.stdout.write('\r' + str(weather) + 12 * ' ')
146 | sys.stdout.flush()
147 | elapsed_time = 0.0
148 |
149 |
150 | if __name__ == '__main__':
151 |
152 | main()
153 |
--------------------------------------------------------------------------------
/spawn.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | # Copyright (c) 2019 Computer Vision Center (CVC) at the Universitat Autonoma de
4 | # Barcelona (UAB).
5 | #
6 | # This work is licensed under the terms of the MIT license.
7 | # For a copy, see <https://opensource.org/licenses/MIT>.
8 |
9 | """Spawn NPCs into the simulation"""
10 |
11 | import glob
12 | import os
13 | import sys
14 | import time
15 |
16 | try:
17 | sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
18 | sys.version_info.major,
19 | sys.version_info.minor,
20 | 'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
21 | except IndexError:
22 | pass
23 |
24 | import carla
25 |
26 | import argparse
27 | import logging
28 | import random
29 |
30 |
31 | def main():
32 | argparser = argparse.ArgumentParser(
33 | description=__doc__)
34 | argparser.add_argument(
35 | '--host',
36 | metavar='H',
37 | default='127.0.0.1',
38 | help='IP of the host server (default: 127.0.0.1)')
39 | argparser.add_argument(
40 | '-p', '--port',
41 | metavar='P',
42 | default=2000,
43 | type=int,
44 | help='TCP port to listen to (default: 2000)')
45 | argparser.add_argument(
46 | '-n', '--number-of-vehicles',
47 | metavar='N',
48 | default=10,
49 | type=int,
50 | help='number of vehicles (default: 10)')
51 | argparser.add_argument(
52 | '-w', '--number-of-walkers',
53 | metavar='W',
54 | default=50,
55 | type=int,
56 | help='number of walkers (default: 50)')
57 | argparser.add_argument(
58 | '--safe',
59 | action='store_true',
60 | help='avoid spawning vehicles prone to accidents')
61 | argparser.add_argument(
62 | '--filterv',
63 | metavar='PATTERN',
64 | default='vehicle.*',
65 | help='vehicles filter (default: "vehicle.*")')
66 | argparser.add_argument(
67 | '--filterw',
68 | metavar='PATTERN',
69 | default='walker.pedestrian.*',
70 | help='pedestrians filter (default: "walker.pedestrian.*")')
71 | args = argparser.parse_args()
72 |
73 | logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.INFO)
74 |
75 | vehicles_list = []
76 | walkers_list = []
77 | all_id = []
78 | client = carla.Client(args.host, args.port)
79 | client.set_timeout(2.0)
80 |
81 | try:
82 |
83 | world = client.get_world()
84 | blueprints = world.get_blueprint_library().filter(args.filterv)
85 | blueprintsWalkers = world.get_blueprint_library().filter(args.filterw)
86 |
87 | if args.safe:
88 | blueprints = [x for x in blueprints if int(x.get_attribute('number_of_wheels')) == 4]
89 | blueprints = [x for x in blueprints if not x.id.endswith('isetta')]
90 | blueprints = [x for x in blueprints if not x.id.endswith('carlacola')]
91 |
92 | spawn_points = world.get_map().get_spawn_points()
93 | number_of_spawn_points = len(spawn_points)
94 |
95 | if args.number_of_vehicles < number_of_spawn_points:
96 | random.shuffle(spawn_points)
97 | elif args.number_of_vehicles > number_of_spawn_points:
98 | msg = 'requested %d vehicles, but could only find %d spawn points'
99 | logging.warning(msg, args.number_of_vehicles, number_of_spawn_points)
100 | args.number_of_vehicles = number_of_spawn_points
101 |
102 | # @todo cannot import these directly.
103 | SpawnActor = carla.command.SpawnActor
104 | SetAutopilot = carla.command.SetAutopilot
105 | FutureActor = carla.command.FutureActor
106 |
107 | # --------------
108 | # Spawn vehicles
109 | # --------------
110 | batch = []
111 | for n, transform in enumerate(spawn_points):
112 | if n >= args.number_of_vehicles:
113 | break
114 | blueprint = random.choice(blueprints)
115 |             if blueprint.tags[0] not in ("crossbike", "low rider", "ninja", "yzf", "century", "omafiets", "diamondback", "carlacola"):
116 | if blueprint.has_attribute('color'):
117 | color = random.choice(blueprint.get_attribute('color').recommended_values)
118 | blueprint.set_attribute('color', color)
119 | if blueprint.has_attribute('driver_id'):
120 | driver_id = random.choice(blueprint.get_attribute('driver_id').recommended_values)
121 | blueprint.set_attribute('driver_id', driver_id)
122 | blueprint.set_attribute('role_name', 'autopilot')
123 | batch.append(SpawnActor(blueprint, transform).then(SetAutopilot(FutureActor, True)))
124 |
125 | for response in client.apply_batch_sync(batch):
126 | if response.error:
127 | logging.error(response.error)
128 | else:
129 | vehicles_list.append(response.actor_id)
130 |
131 | # -------------
132 | # Spawn Walkers
133 | # -------------
134 | # some settings
135 | percentagePedestriansRunning = 0.0 # how many pedestrians will run
136 | percentagePedestriansCrossing = 0.0 # how many pedestrians will walk through the road
137 | # 1. take all the random locations to spawn
138 | spawn_points = []
139 | for i in range(args.number_of_walkers):
140 | spawn_point = carla.Transform()
141 | loc = world.get_random_location_from_navigation()
142 |             if loc is not None:
143 | spawn_point.location = loc
144 | spawn_points.append(spawn_point)
145 | # 2. we spawn the walker object
146 | batch = []
147 | walker_speed = []
148 | for spawn_point in spawn_points:
149 | walker_bp = random.choice(blueprintsWalkers)
150 |             # set as not invincible
151 | if walker_bp.has_attribute('is_invincible'):
152 | walker_bp.set_attribute('is_invincible', 'false')
153 | # set the max speed
154 | if walker_bp.has_attribute('speed'):
155 | if (random.random() > percentagePedestriansRunning):
156 | # walking
157 | walker_speed.append(walker_bp.get_attribute('speed').recommended_values[1])
158 | else:
159 | # running
160 | walker_speed.append(walker_bp.get_attribute('speed').recommended_values[2])
161 | else:
162 | print("Walker has no speed")
163 | walker_speed.append(0.0)
164 | batch.append(SpawnActor(walker_bp, spawn_point))
165 | results = client.apply_batch_sync(batch, True)
166 | walker_speed2 = []
167 | for i in range(len(results)):
168 | if results[i].error:
169 | logging.error(results[i].error)
170 | else:
171 | walkers_list.append({"id": results[i].actor_id})
172 | walker_speed2.append(walker_speed[i])
173 | walker_speed = walker_speed2
176 | # 3. we spawn the walker controller
177 | batch = []
178 | walker_controller_bp = world.get_blueprint_library().find('controller.ai.walker')
179 | for i in range(len(walkers_list)):
180 | batch.append(SpawnActor(walker_controller_bp, carla.Transform(), walkers_list[i]["id"]))
181 | results = client.apply_batch_sync(batch, True)
182 | for i in range(len(results)):
183 | if results[i].error:
184 | logging.error(results[i].error)
185 | else:
186 | walkers_list[i]["con"] = results[i].actor_id
187 | # 4. we put altogether the walkers and controllers id to get the objects from their id
188 | for i in range(len(walkers_list)):
189 | all_id.append(walkers_list[i]["con"])
190 | all_id.append(walkers_list[i]["id"])
191 | all_actors = world.get_actors(all_id)
193 |
194 | # wait for a tick to ensure client receives the last transform of the walkers we have just created
195 | world.wait_for_tick()
196 |
197 | # 5. initialize each controller and set target to walk to (list is [controler, actor, controller, actor ...])
198 | # set how many pedestrians can cross the road
199 | world.set_pedestrians_cross_factor(percentagePedestriansCrossing)
200 | for i in range(0, len(all_id), 2):
201 | # start walker
202 | all_actors[i].start()
203 | # set walk to random point
204 | all_actors[i].go_to_location(world.get_random_location_from_navigation())
205 | # max speed
206 | all_actors[i].set_max_speed(float(walker_speed[int(i/2)]))
207 |
208 | print('spawned %d vehicles and %d walkers, press Ctrl+C to exit.' % (len(vehicles_list), len(walkers_list)))
209 |
210 | while True:
211 | world.wait_for_tick()
212 |
213 | finally:
214 |
215 | print('\ndestroying %d vehicles' % len(vehicles_list))
216 | client.apply_batch([carla.command.DestroyActor(x) for x in vehicles_list])
217 |
218 | # stop walker controllers (list is [controller, actor, controller, actor ...])
219 | for i in range(0, len(all_id), 2):
220 | all_actors[i].stop()
221 |
222 | print('\ndestroying %d walkers' % len(walkers_list))
223 | client.apply_batch([carla.command.DestroyActor(x) for x in all_id])
224 |
225 | time.sleep(0.5)
226 |
227 | if __name__ == '__main__':
228 |
229 | try:
230 | main()
231 | except KeyboardInterrupt:
232 | pass
233 | finally:
234 | print('\ndone.')
235 |
--------------------------------------------------------------------------------
/generate.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import glob
4 | import os
5 | import sys
6 |
7 | try:
8 | sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
9 | sys.version_info.major,
10 | sys.version_info.minor,
11 | 'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
12 | except IndexError:
13 | pass
14 |
15 |
16 | # ==============================================================================
17 | # -- imports -------------------------------------------------------------------
18 | # ==============================================================================
19 |
20 | import copy
21 | import os.path
22 | import time
23 |
24 | import carla
25 |
26 | from carla import ColorConverter as cc
27 |
28 | import cv2
29 | import re
30 |
31 | try:
32 | import numpy as np
33 | except ImportError:
34 | raise RuntimeError('cannot import numpy, make sure numpy package is installed')
35 |
36 | # Create Directory ################
37 | dir_my = 'my_data/'
38 | dir_custom = 'custom_data/'
39 | dir_draw = 'draw_bounding_box/'
40 | if not os.path.exists(dir_my):
41 | os.makedirs(dir_my)
42 | if not os.path.exists(dir_custom):
43 | os.makedirs(dir_custom)
44 | if not os.path.exists(dir_draw):
45 | os.makedirs(dir_draw)
46 | ###################################
47 |
48 | dataEA = len(next(os.walk('VehicleBBox/'))[2])  # number of captured frames on disk
49 |
50 | VIEW_WIDTH = 1920//2
51 | VIEW_HEIGHT = 1080//2
52 | VIEW_FOV = 90
53 |
54 | VBB_COLOR = (0, 0, 255)
55 | WBB_COLOR = (255, 0, 0)
56 |
57 | Vehicle_COLOR = np.array([142, 0, 0])   # CityScapes 'car' color in BGR order (cv2 images are BGR)
58 | Walker_COLOR = np.array([60, 20, 220])  # CityScapes 'person' color in BGR order
59 |
60 | rgb_info = np.zeros((VIEW_HEIGHT, VIEW_WIDTH, 3), dtype="i")
61 | seg_info = np.zeros((VIEW_HEIGHT, VIEW_WIDTH, 3), dtype="i")
62 | area_info = np.zeros(shape=[VIEW_HEIGHT, VIEW_WIDTH, 3], dtype=np.uint8)
63 | index_count = 0
64 |
65 | # Brings Images and Bounding Box Information
66 | def reading_data(index):
67 | global rgb_info, seg_info
68 | v_data = []
69 | w_data = []
70 | k = 0
71 | w = 0
72 |
73 | rgb_img = cv2.imread('custom_data/image'+ str(index)+ '.png', cv2.IMREAD_COLOR)
74 | seg_img = cv2.imread('SegmentationImage/seg'+ str(index)+ '.png', cv2.IMREAD_COLOR)
75 |
76 |     if rgb_img is not None and seg_img is not None:
77 | # Vehicle
78 | with open('VehicleBBox/bbox'+ str(index), 'r') as fin:
79 | v_bounding_box_rawdata = fin.read()
80 |
81 | v_bounding_box_data = re.findall(r"-?\d+", v_bounding_box_rawdata)
82 |         v_line_length = len(v_bounding_box_data) // 16  # 16 ints per box: 8 vertices x 2 coordinates
83 |
84 | v_bbox_data = [[0 for col in range(8)] for row in range(v_line_length)]
85 |
86 |         for i in range(len(v_bounding_box_data) // 2):
87 | j = i*2
88 | v_data.append(tuple((int(v_bounding_box_data[j]), int(v_bounding_box_data[j+1]))))
89 |
90 |         for i in range(len(v_bounding_box_data) // 16):
91 | for j in range(8):
92 | v_bbox_data[i][j] = v_data[k]
93 | k += 1
94 |
95 | # Walker (Pedestrian)
96 | with open('PedestrianBBox/bbox'+ str(index), 'r') as w_fin:
97 | w_bounding_box_rawdata = w_fin.read()
98 |
99 | w_bounding_box_data = re.findall(r"-?\d+", w_bounding_box_rawdata)
100 |         w_line_length = len(w_bounding_box_data) // 16
101 |
102 | w_bb_data = [[0 for col in range(8)] for row in range(w_line_length)]
103 |
104 |         for i in range(len(w_bounding_box_data) // 2):
105 | j = i*2
106 | w_data.append(tuple((int(w_bounding_box_data[j]), int(w_bounding_box_data[j+1]))))
107 |
108 |         for i in range(len(w_bounding_box_data) // 16):
109 | for j in range(8):
110 | w_bb_data[i][j] = w_data[w]
111 | w += 1
112 |
113 | origin_rgb_info = rgb_img
114 | rgb_info = rgb_img
115 | seg_info = seg_img
116 | return v_bbox_data, v_line_length, w_bb_data, w_line_length
117 |
118 | else:
119 | return False
120 |
121 | # Converts 8 Vertices to 4 Vertices
122 | def converting(bounding_boxes, line_length):
123 | points_array = []
124 | bb_4data = [[0 for col in range(4)] for row in range(line_length)]
125 | k = 0
126 | for i in range(line_length):
127 | points_array_x = []
128 | points_array_y = []
129 | for j in range(8):
130 | points_array_x.append(bounding_boxes[i][j][0])
131 | points_array_y.append(bounding_boxes[i][j][1])
132 |
133 | max_x = max(points_array_x)
134 | min_x = min(points_array_x)
135 | max_y = max(points_array_y)
136 | min_y = min(points_array_y)
137 |
138 | points_array.append(tuple((min_x, min_y)))
139 | points_array.append(tuple((max_x, min_y)))
140 | points_array.append(tuple((max_x, max_y)))
141 | points_array.append(tuple((min_x, max_y)))
142 |
143 | for i in range(line_length):
144 |         for j in range(len(points_array) // line_length):
145 | bb_4data[i][j] = points_array[k]
146 | k += 1
147 |
148 | return bb_4data
149 |
150 | # Gets Object's Bounding Box Area
151 | def object_area(data):
152 | global area_info
153 | area_info = np.zeros(shape=[VIEW_HEIGHT, VIEW_WIDTH, 3], dtype=np.uint8)
154 |
155 | for vehicle_area in data:
156 | array_x = []
157 | array_y = []
158 | for i in range(4):
159 | array_x.append(vehicle_area[i][0])
160 | for j in range(4):
161 | array_y.append(vehicle_area[j][1])
162 |
163 | for i in range(4):
164 | if array_x[i] <= 0:
165 | array_x[i] = 1
166 | elif array_x[i] >= VIEW_WIDTH:
167 | array_x[i] = VIEW_WIDTH -1
168 | for j in range(4):
169 | if array_y[j] <= 0:
170 | array_y[j] = 1
171 | elif array_y[j] >= VIEW_HEIGHT:
172 | array_y[j] = VIEW_HEIGHT -1
173 |
174 | min_x = min(array_x)
175 | max_x = max(array_x)
176 | min_y = min(array_y)
177 | max_y = max(array_y)
178 | array = [min_x, max_x, min_y, max_y]
179 | if filtering(array, Vehicle_COLOR):
180 | cv2.rectangle(area_info, (min_x, min_y), (max_x, max_y), Vehicle_COLOR, -1)
181 |
182 | # Fits Bounding Box to the Object
183 | def fitting_x(x1, x2, range_min, range_max, color):
184 | global seg_info
185 | state = False
186 | cali_point = 0
187 | if (x1 < x2):
188 | for search_point in range(x1, x2):
189 | for range_of_points in range(range_min, range_max):
190 | if seg_info[range_of_points, search_point][0] == color[0]:
191 | cali_point = search_point
192 | state = True
193 | break
194 | if state == True:
195 | break
196 |
197 | else:
198 | for search_point in range(x1, x2, -1):
199 | for range_of_points in range(range_min, range_max):
200 | if seg_info[range_of_points, search_point][0] == color[0]:
201 | cali_point = search_point
202 | state = True
203 | break
204 | if state == True:
205 | break
206 |
207 | return cali_point
208 |
209 | def fitting_y(y1, y2, range_min, range_max, color):
210 | global seg_info
211 | state = False
212 | cali_point = 0
213 | if (y1 < y2):
214 | for search_point in range(y1, y2):
215 | for range_of_points in range(range_min, range_max):
216 | if seg_info[search_point, range_of_points][0] == color[0]:
217 | cali_point = search_point
218 | state = True
219 | break
220 | if state == True:
221 | break
222 |
223 | else:
224 | for search_point in range(y1, y2, -1):
225 | for range_of_points in range(range_min, range_max):
226 | if seg_info[search_point, range_of_points][0] == color[0]:
227 | cali_point = search_point
228 | state = True
229 | break
230 | if state == True:
231 | break
232 |
233 | return cali_point
234 |
235 | # Removes small objects that obstruct learning
236 | def small_objects_excluded(array, bb_min):
237 | diff_x = array[1]- array[0]
238 | diff_y = array[3] - array[2]
239 | if (diff_x > bb_min and diff_y > bb_min):
240 | return True
241 | return False
242 |
243 | # Filters occluded objects
244 | def post_occluded_objects_excluded(array, color):
245 | global seg_info
246 | top_left = seg_info[array[2]+1, array[0]+1][0]
247 | top_right = seg_info[array[2]+1, array[1]-1][0]
248 | bottom_left = seg_info[array[3]-1, array[0]+1][0]
249 | bottom_right = seg_info[array[3]-1, array[1]-1][0]
250 | if top_left == color[0] and top_right == color[0] and bottom_left == color[0] and bottom_right == color[0]:
251 | return False
252 |
253 | return True
254 |
255 | def pre_occluded_objects_excluded(array, area_image, color):
256 | top_left = area_image[array[2]-1, array[0]-1][0]
257 | top_right = area_image[array[2], array[1]+1][0]
258 | bottom_left = area_image[array[3]+1, array[1]-1][0]
259 | bottom_right = area_image[array[3]+1, array[0]+1][0]
260 | if top_left == color[0] and top_right == color[0] and bottom_left == color[0] and bottom_right == color[0]:
261 | return False
262 |
263 | return True
264 |
265 | # Filters objects not in the scene
266 | def filtering(array, color):
267 | global seg_info
268 | for y in range(array[2], array[3]):
269 | for x in range(array[0], array[1]):
270 | if seg_info[y, x][0] == color[0]:
271 | return True
272 | return False
273 |
274 | # Post-processing: fits boxes to objects, filters them, and writes Darknet labels
275 | def processing(img, v_data, w_data, index):
276 | global seg_info, area_info
277 | vehicle_class = 0
278 | walker_class = 1
279 |
280 | object_area(v_data)
281 | f = open("custom_data/image"+str(index) + ".txt", 'w')
282 |
283 | # Vehicle
284 | for v_bbox in v_data:
285 | array_x = []
286 | array_y = []
287 | for i in range(4):
288 | array_x.append(v_bbox[i][0])
289 | for j in range(4):
290 | array_y.append(v_bbox[j][1])
291 |
292 | for i in range(4):
293 | if array_x[i] <= 0:
294 | array_x[i] = 1
295 | elif array_x[i] >= VIEW_WIDTH - 1:
296 | array_x[i] = VIEW_WIDTH - 2
297 | for j in range(4):
298 | if array_y[j] <= 0:
299 | array_y[j] = 1
300 | elif array_y[j] >= VIEW_HEIGHT - 1:
301 | array_y[j] = VIEW_HEIGHT - 2
302 |
303 | min_x = min(array_x)
304 | max_x = max(array_x)
305 | min_y = min(array_y)
306 | max_y = max(array_y)
307 | v_bb_array = [min_x, max_x, min_y, max_y]
308 | center_x = (min_x + max_x)//2
309 | center_y = (min_y + max_y)//2
310 |
311 | if filtering(v_bb_array, Vehicle_COLOR) and pre_occluded_objects_excluded(v_bb_array, area_info, Vehicle_COLOR):
312 | cali_min_x = fitting_x(min_x, max_x, min_y, max_y, Vehicle_COLOR)
313 | cali_max_x = fitting_x(max_x, min_x, min_y, max_y, Vehicle_COLOR)
314 | cali_min_y = fitting_y(min_y, max_y, min_x, max_x, Vehicle_COLOR)
315 | cali_max_y = fitting_y(max_y, min_y, min_x, max_x, Vehicle_COLOR)
316 | v_cali_array = [cali_min_x, cali_max_x, cali_min_y, cali_max_y]
317 |
318 | if small_objects_excluded(v_cali_array, 10) and post_occluded_objects_excluded(v_cali_array, Vehicle_COLOR):
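319 |                 # Darknet label: class id, then box center and size normalized by the image dimensions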
319 | darknet_x = float((cali_min_x + cali_max_x) // 2) / float(VIEW_WIDTH)
320 | darknet_y = float((cali_min_y + cali_max_y) // 2) / float(VIEW_HEIGHT)
321 | darknet_width = float(cali_max_x - cali_min_x) / float(VIEW_WIDTH)
322 | darknet_height= float(cali_max_y - cali_min_y) / float(VIEW_HEIGHT)
323 |
324 | f.write(str(vehicle_class) + ' ' + str("%0.6f" % darknet_x) + ' ' + str("%0.6f" % darknet_y) + ' ' +
325 | str("%0.6f" % darknet_width) + ' ' + str("%0.6f" % darknet_height) + "\n")
326 |
327 | cv2.line(img, (cali_min_x, cali_min_y), (cali_max_x, cali_min_y), VBB_COLOR, 2)
328 | cv2.line(img, (cali_max_x, cali_min_y), (cali_max_x, cali_max_y), VBB_COLOR, 2)
329 | cv2.line(img, (cali_max_x, cali_max_y), (cali_min_x, cali_max_y), VBB_COLOR, 2)
330 | cv2.line(img, (cali_min_x, cali_max_y), (cali_min_x, cali_min_y), VBB_COLOR, 2)
331 |
332 | # Walker (Pedestrian)
333 | object_area(w_data)
334 |
335 | for wbbox in w_data:
336 | array_x = []
337 | array_y = []
338 |
339 | for i in range(4):
340 | array_x.append(wbbox[i][0])
341 | for j in range(4):
342 | array_y.append(wbbox[j][1])
343 |
344 | for i in range(4):
345 | if array_x[i] <= 0:
346 | array_x[i] = 1
347 | elif array_x[i] >= VIEW_WIDTH - 1:
348 | array_x[i] = VIEW_WIDTH - 2
349 | for j in range(4):
350 | if array_y[j] <= 0:
351 | array_y[j] = 1
352 | elif array_y[j] >= VIEW_HEIGHT - 1:
353 | array_y[j] = VIEW_HEIGHT - 2
354 |
355 | min_x = min(array_x)
356 | max_x = max(array_x)
357 | min_y = min(array_y)
358 | max_y = max(array_y)
359 | w_bb_array = [min_x, max_x, min_y, max_y]
360 | if filtering(w_bb_array, Walker_COLOR) and pre_occluded_objects_excluded(w_bb_array, area_info, Walker_COLOR):
361 | cali_min_x = fitting_x(min_x, max_x, min_y, max_y, Walker_COLOR)
362 | cali_max_x = fitting_x(max_x, min_x, min_y, max_y, Walker_COLOR)
363 | cali_min_y = fitting_y(min_y, max_y, min_x, max_x, Walker_COLOR)
364 | cali_max_y = fitting_y(max_y, min_y, min_x, max_x, Walker_COLOR)
365 | w_cali_array = [cali_min_x, cali_max_x, cali_min_y, cali_max_y]
366 |
367 | if small_objects_excluded(w_cali_array, 7) and post_occluded_objects_excluded(w_cali_array, Walker_COLOR):
368 | darknet_x = float((cali_min_x + cali_max_x) // 2) / float(VIEW_WIDTH)
369 | darknet_y = float((cali_min_y + cali_max_y) // 2) / float(VIEW_HEIGHT)
370 | darknet_width = float(cali_max_x - cali_min_x) / float(VIEW_WIDTH)
371 | darknet_height= float(cali_max_y - cali_min_y) / float(VIEW_HEIGHT)
372 |
373 | f.write(str(walker_class) + ' ' + str("%0.6f" % darknet_x) + ' ' + str("%0.6f" % darknet_y) + ' ' +
374 | str("%0.6f" % darknet_width) + ' ' + str("%0.6f" % darknet_height) + "\n")
375 |
376 | cv2.line(img, (cali_min_x, cali_min_y), (cali_max_x, cali_min_y), WBB_COLOR, 2)
377 | cv2.line(img, (cali_max_x, cali_min_y), (cali_max_x, cali_max_y), WBB_COLOR, 2)
378 | cv2.line(img, (cali_max_x, cali_max_y), (cali_min_x, cali_max_y), WBB_COLOR, 2)
379 | cv2.line(img, (cali_min_x, cali_max_y), (cali_min_x, cali_min_y), WBB_COLOR, 2)
380 |
381 | f.close()
382 | cv2.imwrite('draw_bounding_box/image'+str(index)+'.png', img)
383 |
384 | def run():
385 | global rgb_info
386 | global index_count
387 | train = open("my_data/train.txt", 'w')
388 |
389 | for i in range(dataEA + 1):
390 |         data = reading_data(i)
391 |         if data:
392 |             v_four_points, w_four_points = converting(data[0], data[1]), converting(data[2], data[3])
393 | processing(rgb_info, v_four_points, w_four_points, i)
394 | train.write(str('custom_data/image'+str(i) + '.png') + "\n")
395 | index_count = index_count + 1
396 | print(i)
397 | train.close()
398 | print(index_count)
399 |
400 | if __name__ == "__main__":
401 | start = time.time()
402 |
403 | run()
404 |
405 | end = time.time()
406 | print(float(end - start))
407 |
--------------------------------------------------------------------------------
/extract.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | """
4 | An example of client-side bounding boxes with basic car controls.
5 |
6 | Welcome to CARLA for getting bounding box data.
7 | Use WASD keys for control.
8 | Controls:
9 | W : throttle
10 | S : reverse
11 | AD : steer
12 | Space : hand-brake
13 | P : autopilot mode
14 | C : Capture Data
15 | l : Loop Capture Start
16 | Shift+L : Loop Capture End
17 |
18 | ESC : quit
19 | """
20 |
21 | # ==============================================================================
22 | # -- find carla module ---------------------------------------------------------
23 | # ==============================================================================
24 |
25 |
26 | import glob
27 | import os
28 | import sys
29 |
30 | try:
31 | sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
32 | sys.version_info.major,
33 | sys.version_info.minor,
34 | 'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
35 | except IndexError:
36 | pass
37 |
38 |
39 | # ==============================================================================
40 | # -- imports -------------------------------------------------------------------
41 | # ==============================================================================
42 |
43 | import carla
44 |
45 | from carla import ColorConverter as cc
46 |
47 | import weakref
48 | import random
49 | import cv2
50 | import time
51 | import argparse
52 | import textwrap
53 |
54 | try:
55 | import pygame
56 | from pygame.locals import K_ESCAPE
57 | from pygame.locals import K_SPACE
58 | from pygame.locals import KMOD_SHIFT
59 | from pygame.locals import K_a
60 | from pygame.locals import K_d
61 | from pygame.locals import K_s
62 | from pygame.locals import K_w
63 | from pygame.locals import K_TAB
64 | from pygame.locals import K_BACKQUOTE
65 | from pygame.locals import K_p
66 | from pygame.locals import K_c
67 | from pygame.locals import K_l
68 | except ImportError:
69 | raise RuntimeError('cannot import pygame, make sure pygame package is installed')
70 |
71 | try:
72 | import numpy as np
73 | except ImportError:
74 | raise RuntimeError('cannot import numpy, make sure numpy package is installed')
75 |
76 | VIEW_WIDTH = 1920//2
77 | VIEW_HEIGHT = 1080//2
78 | VIEW_FOV = 90
79 |
80 | BB_COLOR = (248, 64, 24)
81 | WBB_COLOR = (0, 0, 255)
82 | vehicle_bbox_record = False
83 | pedestrian_bbox_record = False
84 | count = 0
85 |
86 | rgb_info = np.zeros((VIEW_HEIGHT, VIEW_WIDTH, 3), dtype="i")
87 | seg_info = np.zeros((VIEW_HEIGHT, VIEW_WIDTH, 3), dtype="i")
88 |
89 | # Creates Directory
90 | dir_rgb = 'custom_data/'
91 | dir_seg = 'SegmentationImage/'
92 | dir_pbbox = 'PedestrianBBox/'
93 | dir_vbbox = 'VehicleBBox/'
94 | if not os.path.exists(dir_rgb):
95 | os.makedirs(dir_rgb)
96 | if not os.path.exists(dir_seg):
97 | os.makedirs(dir_seg)
98 | if not os.path.exists(dir_pbbox):
99 | os.makedirs(dir_pbbox)
100 | if not os.path.exists(dir_vbbox):
101 | os.makedirs(dir_vbbox)
102 |
103 | # ==============================================================================
104 | # -- PedestrianBoundingBoxes ---------------------------------------------------
105 | # ==============================================================================
106 |
107 | class PedestrianBoundingBoxes(object):
108 | """
109 | This is a module responsible for creating 3D bounding boxes and drawing them
110 | client-side on pygame surface.
111 | """
112 |
113 | @staticmethod
114 | def get_bounding_boxes(pedestrians, camera):
115 | """
116 | Creates 3D bounding boxes based on carla Pedestrian list and camera.
117 | """
118 |
119 | bounding_boxes = [PedestrianBoundingBoxes.get_bounding_box(pedestrian, camera) for pedestrian in pedestrians]
120 | # filter objects behind camera
121 | bounding_boxes = [bb for bb in bounding_boxes if all(bb[:, 2] > 0)]
122 | return bounding_boxes
123 |
124 | @staticmethod
125 | def draw_bounding_boxes(display, bounding_boxes):
126 | """
127 | Draws bounding boxes on pygame display.
128 | """
129 | global pedestrian_bbox_record
130 | global count
131 |
132 | bb_surface = pygame.Surface((VIEW_WIDTH, VIEW_HEIGHT))
133 | bb_surface.set_colorkey((0, 0, 0))
134 |
135 |         if pedestrian_bbox_record:
136 |             f = open("PedestrianBBox/bbox" + str(count), 'w')
137 |             print("PedestrianBoundingBox")
138 |             for bbox in bounding_boxes:
139 |                 # write the 8 projected vertices as integer pixel coordinates, one box per line
140 |                 points = [(int(bbox[i, 0]), int(bbox[i, 1])) for i in range(8)]
141 |                 f.write(str(points) + "\n")
142 |             f.close()
143 |             pedestrian_bbox_record = False
147 |
148 | display.blit(bb_surface, (0, 0))
149 |
150 | @staticmethod
151 | def get_bounding_box(pedestrian, camera):
152 | """
153 | Returns 3D bounding box for a pedestrian based on camera view.
154 | """
155 |
156 | bb_cords = PedestrianBoundingBoxes._create_bb_points(pedestrian)
157 | cords_x_y_z = PedestrianBoundingBoxes._pedestrian_to_sensor(bb_cords, pedestrian, camera)[:3, :]
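158 |         # reorder UE4 axes (x forward, y right, z up) to camera axes (y, -z, x) so depth comes last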
158 | cords_y_minus_z_x = np.concatenate([cords_x_y_z[1, :], -cords_x_y_z[2, :], cords_x_y_z[0, :]])
159 | bbox = np.transpose(np.dot(camera.calibration, cords_y_minus_z_x))
160 | camera_bbox = np.concatenate([bbox[:, 0] / bbox[:, 2], bbox[:, 1] / bbox[:, 2], bbox[:, 2]], axis=1)
161 | return camera_bbox
162 |
163 | @staticmethod
164 | def _create_bb_points(pedestrian):
165 | """
166 | Returns 3D bounding box for a pedestrian.
167 | """
168 |
169 | cords = np.zeros((8, 4))
170 | extent = pedestrian.bounding_box.extent
171 | cords[0, :] = np.array([extent.x, extent.y, -extent.z, 1])
172 | cords[1, :] = np.array([-extent.x, extent.y, -extent.z, 1])
173 | cords[2, :] = np.array([-extent.x, -extent.y, -extent.z, 1])
174 | cords[3, :] = np.array([extent.x, -extent.y, -extent.z, 1])
175 | cords[4, :] = np.array([extent.x, extent.y, extent.z, 1])
176 | cords[5, :] = np.array([-extent.x, extent.y, extent.z, 1])
177 | cords[6, :] = np.array([-extent.x, -extent.y, extent.z, 1])
178 | cords[7, :] = np.array([extent.x, -extent.y, extent.z, 1])
179 | return cords
180 |
181 | @staticmethod
182 | def _pedestrian_to_sensor(cords, pedestrian, sensor):
183 | """
184 | Transforms coordinates of a pedestrian bounding box to sensor.
185 | """
186 |
187 | world_cord = PedestrianBoundingBoxes._pedestrian_to_world(cords, pedestrian)
188 | sensor_cord = PedestrianBoundingBoxes._world_to_sensor(world_cord, sensor)
189 | return sensor_cord
190 |
191 | @staticmethod
192 | def _pedestrian_to_world(cords, pedestrian):
193 | """
194 | Transforms coordinates of a pedestrian bounding box to world.
195 | """
196 |
197 | bb_transform = carla.Transform(pedestrian.bounding_box.location)
198 | bb_pedestrian_matrix = PedestrianBoundingBoxes.get_matrix(bb_transform)
199 | pedestrian_world_matrix = PedestrianBoundingBoxes.get_matrix(pedestrian.get_transform())
200 | bb_world_matrix = np.dot(pedestrian_world_matrix, bb_pedestrian_matrix)
201 | world_cords = np.dot(bb_world_matrix, np.transpose(cords))
202 | return world_cords
203 |
204 | @staticmethod
205 | def _world_to_sensor(cords, sensor):
206 | """
207 | Transforms world coordinates to sensor.
208 | """
209 |
210 | sensor_world_matrix = PedestrianBoundingBoxes.get_matrix(sensor.get_transform())
211 | world_sensor_matrix = np.linalg.inv(sensor_world_matrix)
212 | sensor_cords = np.dot(world_sensor_matrix, cords)
213 | return sensor_cords
214 |
215 | @staticmethod
216 | def get_matrix(transform):
217 | """
218 | Creates matrix from carla transform.
219 | """
220 |
221 | rotation = transform.rotation
222 | location = transform.location
223 | c_y = np.cos(np.radians(rotation.yaw))
224 | s_y = np.sin(np.radians(rotation.yaw))
225 | c_r = np.cos(np.radians(rotation.roll))
226 | s_r = np.sin(np.radians(rotation.roll))
227 | c_p = np.cos(np.radians(rotation.pitch))
228 | s_p = np.sin(np.radians(rotation.pitch))
229 | matrix = np.matrix(np.identity(4))
230 | matrix[0, 3] = location.x
231 | matrix[1, 3] = location.y
232 | matrix[2, 3] = location.z
233 | matrix[0, 0] = c_p * c_y
234 | matrix[0, 1] = c_y * s_p * s_r - s_y * c_r
235 | matrix[0, 2] = -c_y * s_p * c_r - s_y * s_r
236 | matrix[1, 0] = s_y * c_p
237 | matrix[1, 1] = s_y * s_p * s_r + c_y * c_r
238 | matrix[1, 2] = -s_y * s_p * c_r + c_y * s_r
239 | matrix[2, 0] = s_p
240 | matrix[2, 1] = -c_p * s_r
241 | matrix[2, 2] = c_p * c_r
242 | return matrix
243 |
244 |
245 |
246 |
247 | # ==============================================================================
248 | # -- VehicleBoundingBoxes ---------------------------------------------------
249 | # ==============================================================================
250 |
251 |
252 | class VehicleBoundingBoxes(object):
253 | """
254 | This is a module responsible for creating 3D bounding boxes and drawing them
255 | client-side on pygame surface.
256 | """
257 |
258 | @staticmethod
259 | def get_bounding_boxes(vehicles, camera):
260 | """
261 | Creates 3D bounding boxes based on carla vehicle list and camera.
262 | """
263 |
264 | bounding_boxes = [VehicleBoundingBoxes.get_bounding_box(vehicle, camera) for vehicle in vehicles]
265 | # filter objects behind camera
266 | bounding_boxes = [bb for bb in bounding_boxes if all(bb[:, 2] > 0)]
267 | return bounding_boxes
268 |
269 | @staticmethod
270 | def draw_bounding_boxes(display, bounding_boxes):
271 | """
272 | Draws bounding boxes on pygame display.
273 | """
274 | global vehicle_bbox_record
275 | global count
276 |
277 | bb_surface = pygame.Surface((VIEW_WIDTH, VIEW_HEIGHT))
278 | bb_surface.set_colorkey((0, 0, 0))
279 |
280 |         if vehicle_bbox_record:
281 |             f = open("VehicleBBox/bbox" + str(count), 'w')
282 |             print("VehicleBoundingBox")
283 |             for bbox in bounding_boxes:
284 |                 # write the 8 projected vertices as integer pixel coordinates, one box per line
285 |                 points = [(int(bbox[i, 0]), int(bbox[i, 1])) for i in range(8)]
286 |                 f.write(str(points) + "\n")
287 |             f.close()
288 |             vehicle_bbox_record = False
294 |
295 | display.blit(bb_surface, (0, 0))
296 |
297 | @staticmethod
298 | def get_bounding_box(vehicle, camera):
299 | """
300 | Returns 3D bounding box for a vehicle based on camera view.
301 | """
302 |
303 | bb_cords = VehicleBoundingBoxes._create_bb_points(vehicle)
304 | cords_x_y_z = VehicleBoundingBoxes._vehicle_to_sensor(bb_cords, vehicle, camera)[:3, :]
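305 |         # reorder UE4 axes (x forward, y right, z up) to camera axes (y, -z, x) so depth comes last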
305 | cords_y_minus_z_x = np.concatenate([cords_x_y_z[1, :], -cords_x_y_z[2, :], cords_x_y_z[0, :]])
306 | bbox = np.transpose(np.dot(camera.calibration, cords_y_minus_z_x))
307 | camera_bbox = np.concatenate([bbox[:, 0] / bbox[:, 2], bbox[:, 1] / bbox[:, 2], bbox[:, 2]], axis=1)
308 | return camera_bbox
309 |
310 | @staticmethod
311 | def _create_bb_points(vehicle):
312 | """
313 | Returns 3D bounding box for a vehicle.
314 | """
315 |
316 | cords = np.zeros((8, 4))
317 | extent = vehicle.bounding_box.extent
318 | cords[0, :] = np.array([extent.x, extent.y, -extent.z, 1])
319 | cords[1, :] = np.array([-extent.x, extent.y, -extent.z, 1])
320 | cords[2, :] = np.array([-extent.x, -extent.y, -extent.z, 1])
321 | cords[3, :] = np.array([extent.x, -extent.y, -extent.z, 1])
322 | cords[4, :] = np.array([extent.x, extent.y, extent.z, 1])
323 | cords[5, :] = np.array([-extent.x, extent.y, extent.z, 1])
324 | cords[6, :] = np.array([-extent.x, -extent.y, extent.z, 1])
325 | cords[7, :] = np.array([extent.x, -extent.y, extent.z, 1])
326 | return cords
327 |
328 | @staticmethod
329 | def _vehicle_to_sensor(cords, vehicle, sensor):
330 | """
331 | Transforms coordinates of a vehicle bounding box to sensor.
332 | """
333 |
334 | world_cord = VehicleBoundingBoxes._vehicle_to_world(cords, vehicle)
335 | sensor_cord = VehicleBoundingBoxes._world_to_sensor(world_cord, sensor)
336 | return sensor_cord
337 |
338 | @staticmethod
339 | def _vehicle_to_world(cords, vehicle):
340 | """
341 | Transforms coordinates of a vehicle bounding box to world.
342 | """
343 |
344 | bb_transform = carla.Transform(vehicle.bounding_box.location)
345 | bb_vehicle_matrix = VehicleBoundingBoxes.get_matrix(bb_transform)
346 | vehicle_world_matrix = VehicleBoundingBoxes.get_matrix(vehicle.get_transform())
347 | bb_world_matrix = np.dot(vehicle_world_matrix, bb_vehicle_matrix)
348 | world_cords = np.dot(bb_world_matrix, np.transpose(cords))
349 | return world_cords
350 |
351 | @staticmethod
352 | def _world_to_sensor(cords, sensor):
353 | """
354 | Transforms world coordinates to sensor.
355 | """
356 |
357 | sensor_world_matrix = VehicleBoundingBoxes.get_matrix(sensor.get_transform())
358 | world_sensor_matrix = np.linalg.inv(sensor_world_matrix)
359 | sensor_cords = np.dot(world_sensor_matrix, cords)
360 | return sensor_cords
361 |
362 | @staticmethod
363 | def get_matrix(transform):
364 | """
365 | Creates matrix from carla transform.
366 | """
367 |
368 | rotation = transform.rotation
369 | location = transform.location
370 | c_y = np.cos(np.radians(rotation.yaw))
371 | s_y = np.sin(np.radians(rotation.yaw))
372 | c_r = np.cos(np.radians(rotation.roll))
373 | s_r = np.sin(np.radians(rotation.roll))
374 | c_p = np.cos(np.radians(rotation.pitch))
375 | s_p = np.sin(np.radians(rotation.pitch))
376 | matrix = np.matrix(np.identity(4))
377 | matrix[0, 3] = location.x
378 | matrix[1, 3] = location.y
379 | matrix[2, 3] = location.z
380 | matrix[0, 0] = c_p * c_y
381 | matrix[0, 1] = c_y * s_p * s_r - s_y * c_r
382 | matrix[0, 2] = -c_y * s_p * c_r - s_y * s_r
383 | matrix[1, 0] = s_y * c_p
384 | matrix[1, 1] = s_y * s_p * s_r + c_y * c_r
385 | matrix[1, 2] = -s_y * s_p * c_r + c_y * s_r
386 | matrix[2, 0] = s_p
387 | matrix[2, 1] = -c_p * s_r
388 | matrix[2, 2] = c_p * c_r
389 | return matrix
390 |
391 |
392 | # ==============================================================================
393 | # -- BasicSynchronousClient ----------------------------------------------------
394 | # ==============================================================================
395 |
396 |
397 | class BasicSynchronousClient(object):
398 | """
399 | Basic implementation of a synchronous client.
400 | """
401 |
402 | def __init__(self):
403 | self.client = None
404 | self.world = None
405 | self.camera = None
406 | self.camera_segmentation = None
407 | self.car = None
408 |
409 | self.display = None
410 | self.image = None
411 | self.segmentation_image = None
412 |
413 | self.capture = True
414 | self.capture_segmentation = True
415 |
416 | self.record = True
417 | self.seg_record = False
418 | self.rgb_record = False
419 |
420 | self.screen_capture = 0
421 | self.loop_state = False
422 |
423 |     def camera_blueprint(self, sensor_filter):
424 | """
425 | Returns camera blueprint.
426 | """
427 |
428 |         camera_bp = self.world.get_blueprint_library().find(sensor_filter)
429 | camera_bp.set_attribute('image_size_x', str(VIEW_WIDTH))
430 | camera_bp.set_attribute('image_size_y', str(VIEW_HEIGHT))
431 | camera_bp.set_attribute('fov', str(VIEW_FOV))
432 | return camera_bp
433 |
434 | def set_synchronous_mode(self, synchronous_mode):
435 | """
436 | Sets synchronous mode.
437 | """
438 |
439 | settings = self.world.get_settings()
440 | settings.synchronous_mode = synchronous_mode
441 | self.world.apply_settings(settings)
442 |
443 | def setup_car(self):
444 | """
445 |         Spawns the actor-vehicle to be controlled.
446 | """
447 |
448 | car_bp = self.world.get_blueprint_library().filter('vehicle.*')[0]
449 | location = random.choice(self.world.get_map().get_spawn_points())
450 | self.car = self.world.spawn_actor(car_bp, location)
451 |
452 | def setup_camera(self):
453 | """
454 | Spawns actor-camera to be used to render view.
455 | Sets calibration for client-side boxes rendering.
456 | """
457 |
458 |
459 | seg_transform = carla.Transform(carla.Location(x=1.6, z=1.7), carla.Rotation(pitch=-15))
460 | self.camera_segmentation = self.world.spawn_actor(self.camera_blueprint('sensor.camera.semantic_segmentation'), seg_transform, attach_to=self.car)
461 | weak_self = weakref.ref(self)
462 | self.camera_segmentation.listen(lambda image_seg: weak_self().set_segmentation(weak_self, image_seg))
463 |
464 | #camera_transform = carla.Transform(carla.Location(x=1.5, z=2.8), carla.Rotation(pitch=-15))
465 | camera_transform = carla.Transform(carla.Location(x=1.6, z=1.7), carla.Rotation(pitch=-15))
466 | self.camera = self.world.spawn_actor(self.camera_blueprint('sensor.camera.rgb'), camera_transform, attach_to=self.car)
467 | weak_self = weakref.ref(self)
468 | self.camera.listen(lambda image: weak_self().set_image(weak_self, image))
469 |
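470 |         # pinhole intrinsics: principal point at the image center, focal length = W / (2 * tan(FOV / 2))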
470 | calibration = np.identity(3)
471 | calibration[0, 2] = VIEW_WIDTH / 2.0
472 | calibration[1, 2] = VIEW_HEIGHT / 2.0
473 | calibration[0, 0] = calibration[1, 1] = VIEW_WIDTH / (2.0 * np.tan(VIEW_FOV * np.pi / 360.0))
474 | self.camera.calibration = calibration
475 | self.camera_segmentation.calibration = calibration
476 |
477 | def control(self, car):
478 | """
479 | Applies control to main car based on pygame pressed keys.
480 |         Returns True if ESCAPE is hit (to end the main loop), otherwise False.
481 | """
482 | keys = pygame.key.get_pressed()
483 |
484 | if keys[K_ESCAPE]:
485 | return True
486 |
487 | control = car.get_control()
488 | control.throttle = 0
489 | if keys[K_w]:
490 | control.throttle = 1
491 | control.reverse = False
492 | elif keys[K_s]:
493 | control.throttle = 1
494 | control.reverse = True
495 | if keys[K_a]:
496 | control.steer = max(-1., min(control.steer - 0.05, 0))
497 | elif keys[K_d]:
498 | control.steer = min(1., max(control.steer + 0.05, 0))
499 | else:
500 | control.steer = 0
501 | if keys[K_p]:
502 | car.set_autopilot(True)
503 | if keys[K_c]:
504 | self.screen_capture = self.screen_capture + 1
505 | else:
506 | self.screen_capture = 0
507 |
508 | if keys[K_l]:
509 | self.loop_state = True
510 | if keys[K_l] and (pygame.key.get_mods() & pygame.KMOD_SHIFT):
511 | self.loop_state = False
512 | control.hand_brake = keys[K_SPACE]
513 |
514 | car.apply_control(control)
515 | return False
516 |
517 | @staticmethod
518 | def set_image(weak_self, img):
519 | """
520 | Sets image coming from camera sensor.
521 |         The self.capture flag is a means of synchronization - once the flag is
522 |         set, the next incoming image will be stored.
523 | """
524 | self = weak_self()
525 | if self.capture:
526 | self.image = img
527 | self.capture = False
528 |
529 | if self.rgb_record:
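530 |             # raw_data is a flat BGRA buffer; reshape and drop the alpha channel for cv2.imwrite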
530 | i = np.array(img.raw_data)
531 | i2 = i.reshape((VIEW_HEIGHT, VIEW_WIDTH, 4))
532 | i3 = i2[:, :, :3]
533 | cv2.imwrite('custom_data/image' + str(self.image_count) + '.png', i3)
534 | print("RGB(custom)Image")
535 |
536 | @staticmethod
537 | def set_segmentation(weak_self, img):
538 | """
539 |         Sets image coming from the segmentation camera sensor.
540 |         The self.capture_segmentation flag is a means of synchronization - once
541 |         the flag is set, the next incoming image will be stored.
542 | """
543 |
544 | self = weak_self()
545 | if self.capture_segmentation:
546 | self.segmentation_image = img
547 | self.capture_segmentation = False
548 |
549 |
550 | if self.seg_record:
551 | img.convert(cc.CityScapesPalette)
552 | i = np.array(img.raw_data)
553 | i2 = i.reshape((VIEW_HEIGHT, VIEW_WIDTH, 4))
554 | i3 = i2[:, :, :3]
555 | cv2.imwrite('SegmentationImage/seg' + str(self.image_count) +'.png', i3)
556 | print("SegmentationImage")
557 |
558 | def render(self, display):
559 | """
560 | Transforms image from camera sensor and blits it to main pygame display.
561 | """
562 |
563 | if self.image is not None:
564 | array = np.frombuffer(self.image.raw_data, dtype=np.dtype("uint8"))
565 | array = np.reshape(array, (self.image.height, self.image.width, 4))
566 | array = array[:, :, :3]
567 | array = array[:, :, ::-1]
568 | surface = pygame.surfarray.make_surface(array.swapaxes(0, 1))
569 | display.blit(surface, (0, 0))
570 |
571 | def game_loop(self, args):
572 | """
573 | Main program loop.
574 | """
575 |
576 | try:
577 | pygame.init()
578 |
579 | self.client = carla.Client('127.0.0.1', 2000)
580 | self.client.set_timeout(2.0)
581 | self.world = self.client.get_world()
582 |
583 | self.setup_car()
584 | self.setup_camera()
585 |
586 | self.display = pygame.display.set_mode((VIEW_WIDTH, VIEW_HEIGHT), pygame.HWSURFACE | pygame.DOUBLEBUF)
587 | pygame_clock = pygame.time.Clock()
588 |
589 | self.set_synchronous_mode(True)
590 | vehicles = self.world.get_actors().filter('vehicle.*')
591 | pedestrians = self.world.get_actors().filter('walker.pedestrian.*')
592 |
593 |
594 | self.image_count = 0
595 | self.time_interval = 0
596 |
597 | global vehicle_bbox_record
598 | global pedestrian_bbox_record
599 | global count
600 |
601 | while True:
602 | self.world.tick()
603 |
604 | self.capture = True
605 | pygame_clock.tick_busy_loop(60)
606 |
607 | self.render(self.display)
608 |
609 | self.time_interval += 1
610 | if ((self.time_interval % args.CaptureLoop) == 0 and self.loop_state):
611 | self.image_count = self.image_count + 1
612 | self.rgb_record = True
613 | self.seg_record = True
614 | vehicle_bbox_record = True
615 | pedestrian_bbox_record = True
616 | count = self.image_count
617 | print("-------------------------------------------------")
618 | print("ImageCount - %d" %self.image_count)
619 |
620 | if self.screen_capture == 1:
621 | self.image_count = self.image_count + 1
622 | self.rgb_record = True
623 | self.seg_record = True
624 | vehicle_bbox_record = True
625 | pedestrian_bbox_record = True
626 | count = self.image_count
627 | print("-------------------------------------------------")
628 | print("Captured! ImageCount - %d" %self.image_count)
629 |
630 | bounding_boxes = VehicleBoundingBoxes.get_bounding_boxes(vehicles, self.camera)
631 | pedestrian_bounding_boxes = PedestrianBoundingBoxes.get_bounding_boxes(pedestrians, self.camera)
632 |
633 | VehicleBoundingBoxes.draw_bounding_boxes(self.display, bounding_boxes)
634 | PedestrianBoundingBoxes.draw_bounding_boxes(self.display, pedestrian_bounding_boxes)
635 |
636 | time.sleep(0.03)
637 | self.rgb_record = False
638 | self.seg_record = False
639 | pygame.display.flip()
640 |
641 | pygame.event.pump()
642 | if self.control(self.car):
643 | return
644 |
645 | finally:
646 | self.set_synchronous_mode(False)
647 | self.camera.destroy()
648 | self.camera_segmentation.destroy()
649 | self.car.destroy()
650 | pygame.quit()
651 |
652 |
653 | # ==============================================================================
654 | # -- main() --------------------------------------------------------------------
655 | # ==============================================================================
656 |
657 | def main():
658 | """
659 | Initializes the client-side bounding box demo.
660 | """
661 | argparser = argparse.ArgumentParser(
662 | description=__doc__)
663 | argparser.add_argument(
664 | '-l', '--CaptureLoop',
665 | metavar='N',
666 | default=100,
667 | type=int,
668 |         help='set the capture loop period in ticks (recommended: 100 or above)')
669 |
670 | args = argparser.parse_args()
671 |
672 | print(__doc__)
673 |
674 | try:
675 | client = BasicSynchronousClient()
676 | client.game_loop(args)
677 | finally:
678 | print('EXIT')
679 |
680 |
681 | if __name__ == '__main__':
682 | main()
683 |
--------------------------------------------------------------------------------