├── .gitmodules ├── README.md ├── img └── uvc_lamp_sim.png ├── src ├── __pycache__ │ ├── customcnn.cpython-35.pyc │ └── turtlebot3_env_pixels.cpython-35.pyc ├── customcnn.py ├── deepq_custom_tutlebot_tensorboard │ └── DQN_1 │ │ └── events.out.tfevents.1648451679.havard-UX461UN ├── plotResults.py ├── train_dqn.py └── turtlebot3_env_pixels.py └── turtleRL_ws ├── .catkin_workspace ├── map_tf ├── map_transform.pkl ├── test.pkl └── world_tf.pkl ├── map_transform.pkl ├── maps ├── circuit_simple.pgm ├── circuit_simple.yaml ├── small_house_15cm.pgm ├── small_house_15cm.yaml ├── small_house_20cm.pgm ├── small_house_20cm.yaml ├── small_room.pgm ├── small_room.yaml ├── small_room_40cm.pgm ├── small_room_40cm.yaml ├── stage1.pgm ├── stage1.yaml ├── stage_4_no_obstacle.pgm └── stage_4_no_obstacle.yaml └── src ├── CMakeLists.txt └── map_constant_tf ├── CMakeLists.txt ├── nodes ├── broadcastMaptf.py └── saveMaptf.py └── package.xml /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "ROS-UV-Cleaning-Robot"] 2 | path = ROS-UV-Cleaning-Robot 3 | url = https://github.com/h-brenne/ROS-UV-Cleaning-Robot.git 4 | branch = RL_low_rate 5 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # cppRL 2 | Coverage path planning with Reinforcement Learning. Using ROS, a TurtleBot3 carrying a UV lamp is controlled to cover an area with a minimum UV energy dose to kill coronavirus. The Gym environment implementation is based on [gym-turtlebot3](https://github.com/ITTcs/gym-turtlebot3). 3 | ![Alt text](img/uvc_lamp_sim.png?raw=true "sim") 4 | 5 | ## Setup and Installation 6 | ROS is required for the simulation. ROS-UV-Cleaning-Robot is also required; it can be pulled as a submodule with `git submodule update --init --recursive` 7 | 8 | I used a Docker container for TensorFlow and Stable Baselines. Using the Baselines zoo CUDA image, I installed the required ROS packages with rospypi (example below), so that a full ROS installation is not needed in the container. 9 | I launched my container with `docker run --runtime=nvidia --env="DISPLAY=${localEnv:DISPLAY}" --volume="/tmp/.X11-unix:/tmp/.X11-unix" --net=host -v /home/$USER/cppRL:/usr/share/cppRL -it stablebaselines/rl-baselines-zoo`. This exposes the host network for ROS communication, mounts a shared folder containing the project, and forwards the display. 10 | 11 | In addition, the UVC lamp workspace needs to be sourced in the Docker container: `source ROS-UV-Cleaning-Robot/devel/setup.sh`. 12 | 13 | To launch, a Gazebo simulation is needed, a pre-made map must be published with map_server, and the uvc_lamp node needs to be launched. If angle control (instead of angle rate control) is used, the angle_controller in ROS-UV-Cleaning-Robot needs to be run. 14 | 15 | Two maps have been tested with the RL agent: small-room and small-house. 
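As an example of the rospypi step mentioned above, the ROS client libraries can be installed inside the container from the rospypi index, roughly as `pip install --extra-index-url https://rospypi.github.io/simple/ rospy tf std-msgs geometry-msgs sensor-msgs nav-msgs std-srvs`. This is only a sketch: the exact package set (and whether each is available on the rospypi index) is an assumption and should be adjusted to what the scripts actually import.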
16 | 17 | ### From turtleRL_ws, after building and sourcing: 18 | 19 | Launch Gazebo with a map: `roslaunch turtlebot3_gazebo turtlebot3_small_room.launch` 20 | 21 | Launch map_server with the same map: `rosrun map_server map_server maps/small_room_40cm.yaml` 22 | 23 | 24 | ### From ROS-UV-Cleaning-Robot, after building and sourcing: 25 | 26 | Launch uv_lamp: `roslaunch uvc_lamp uvc_grid_map.launch` 27 | 28 | Launch angle_controller: `rosrun angle_contoller angle_controller.py` 29 | 30 | -------------------------------------------------------------------------------- /img/uvc_lamp_sim.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/img/uvc_lamp_sim.png -------------------------------------------------------------------------------- /src/__pycache__/customcnn.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/src/__pycache__/customcnn.cpython-35.pyc -------------------------------------------------------------------------------- /src/__pycache__/turtlebot3_env_pixels.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/src/__pycache__/turtlebot3_env_pixels.cpython-35.pyc -------------------------------------------------------------------------------- /src/customcnn.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | import numpy as np 3 | from stable_baselines.a2c.utils import conv, linear, conv_to_fc 4 | from stable_baselines.deepq.policies import FeedForwardPolicy 5 | 6 | def custom_cnn(scaled_images, **kwargs): 7 | activ = tf.nn.leaky_relu 8 | layer_1 = activ(conv(scaled_images, 'c1', n_filters=16, filter_size=3, stride=1, init_scale=np.sqrt(2), **kwargs)) 9 | layer_2 = activ(conv(layer_1, 'c2', n_filters=32, filter_size=3, stride=1, init_scale=np.sqrt(2), **kwargs)) 10 | layer_2 = conv_to_fc(layer_2) 11 | return activ(linear(layer_2, 'fc1', n_hidden=64, init_scale=np.sqrt(2))) -------------------------------------------------------------------------------- /src/deepq_custom_tutlebot_tensorboard/DQN_1/events.out.tfevents.1648451679.havard-UX461UN: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/src/deepq_custom_tutlebot_tensorboard/DQN_1/events.out.tfevents.1648451679.havard-UX461UN -------------------------------------------------------------------------------- /src/plotResults.py: -------------------------------------------------------------------------------- 1 | from email.utils import decode_rfc2231 2 | import pandas as pd 3 | import matplotlib.pyplot as plt 4 | import numpy as np 5 | 6 | df11 = pd.read_csv("run-end-rewards-coverage_percentage.csv") 7 | df12 = pd.read_csv("run-end-rewards-episode_reward.csv") 8 | df13 = pd.read_csv("run-end-rewards-overlap_percentage.csv") 9 | df21 = pd.read_csv("run-step-rewards-coverage_percentage.csv") 10 | df22 = pd.read_csv("run-step-rewards-episode_reward.csv") 11 | df23 = pd.read_csv("run-step-rewards-overlap_percentage.csv") 12 | df3 = pd.read_csv("run-random-episode_reward.csv") 13 | 14 | fig = plt.figure() 15 | """ 16 | plt.plot(df22["Step"][0:500], 
df22["Value"][0:500], alpha=0.2, label="Training") 17 | plt.plot(df22["Step"][0:500], df22.ewm(alpha=(1-0.85)).mean()["Value"][0:500], label="Training smoothed") 18 | plt.plot(df3["Step"], df3.ewm(alpha=(1-0.85)).mean()["Value"], label="Random actions smoothed") 19 | plt.title("Episode reward") 20 | plt.xlabel("Step") 21 | plt.ylabel("Reward") 22 | plt.legend() 23 | """ 24 | 25 | plt.subplot(1,2,1) 26 | plt.plot(df11["Step"], df11.ewm(alpha=(1-0.85)).mean()["Value"], label="End reward") 27 | plt.plot(df21["Step"][0:500], df21.ewm(alpha=(1-0.85)).mean()["Value"][0:500], label="Step reward") 28 | 29 | plt.title("Coverage Percentage") 30 | plt.xlabel("Step") 31 | plt.ylabel("Percentage") 32 | plt.legend() 33 | 34 | plt.subplot(1,2,2) 35 | 36 | plt.plot(df13["Step"], df13.ewm(alpha=(1-0.85)).mean()["Value"], label="End reward") 37 | plt.plot(df23["Step"][0:500], df23.ewm(alpha=(1-0.85)).mean()["Value"][0:500], label="Step reward") 38 | 39 | plt.title("Overlap Percentage") 40 | plt.xlabel("Step") 41 | plt.ylabel("Percentage") 42 | plt.legend() 43 | 44 | 45 | plt.show() -------------------------------------------------------------------------------- /src/train_dqn.py: -------------------------------------------------------------------------------- 1 | import gym 2 | import rospy 3 | import tensorflow as tf 4 | 5 | from turtlebot3_env_pixels import TurtleBot3EnvPixels 6 | 7 | from stable_baselines.deepq.policies import CnnPolicy, MlpPolicy 8 | from stable_baselines.common.vec_env import DummyVecEnv 9 | from stable_baselines import DQN 10 | 11 | from stable_baselines.common.callbacks import BaseCallback 12 | 13 | from gym.envs.registration import register 14 | 15 | from customcnn import custom_cnn 16 | 17 | register( 18 | id='TurtleBot3_Pixels-v0', 19 | entry_point=TurtleBot3EnvPixels 20 | ) 21 | 22 | 23 | policy_kwargs = dict(cnn_extractor=custom_cnn) 24 | env_name = 'TurtleBot3_Pixels-v0' 25 | 26 | rospy.init_node(env_name.replace('-', '_')) 27 | 28 | env = gym.make(env_name) 29 | env = DummyVecEnv([lambda: env]) 30 | 31 | model = DQN(CnnPolicy, env, prioritized_replay=True, policy_kwargs=policy_kwargs, 32 | buffer_size=100000, learning_rate=0.0003, target_network_update_freq=8000, learning_starts=1000, 33 | exploration_fraction=0.2, exploration_final_eps=0, verbose=1, tensorboard_log="./deepq_custom_tutlebot_tensorboard/") 34 | #model.load(env_name+"_custom_dqn") 35 | class TensorboardCallback(BaseCallback): 36 | """ 37 | Custom callback for plotting additional values in tensorboard. 
38 | """ 39 | def __init__(self, verbose=0): 40 | self.is_tb_set = False 41 | super(TensorboardCallback, self).__init__(verbose) 42 | self.coverage_percentage = 0 43 | self.overlap_percentage = 0 44 | 45 | def _on_rollout_end(self) -> bool: 46 | if(self.training_env.get_attr("coverage_percentage")[0] != self.coverage_percentage 47 | or self.training_env.get_attr("overlap_percentage")[0] != self.overlap_percentage): 48 | self.coverage_percentage = self.training_env.get_attr("coverage_percentage")[0] 49 | self.overlap_percentage = self.training_env.get_attr("overlap_percentage")[0] 50 | 51 | summary = tf.Summary(value=[tf.Summary.Value(tag='coverage_percentage', simple_value=self.coverage_percentage)]) 52 | self.locals['writer'].add_summary(summary, self.num_timesteps) 53 | summary = tf.Summary(value=[tf.Summary.Value(tag='overlap_percentage', simple_value=self.overlap_percentage)]) 54 | self.locals['writer'].add_summary(summary, self.num_timesteps) 55 | return True 56 | 57 | 58 | model.learn(total_timesteps=int(3e5), log_interval=100, callback=TensorboardCallback()) 59 | #model.save(env_name+"_custom_dqn") 60 | -------------------------------------------------------------------------------- /src/turtlebot3_env_pixels.py: -------------------------------------------------------------------------------- 1 | import gym 2 | import rospy 3 | import numpy as np 4 | import math 5 | import time 6 | import os 7 | from geometry_msgs.msg import Twist, Point, Pose 8 | from sensor_msgs.msg import LaserScan 9 | from nav_msgs.msg import Odometry 10 | from std_msgs.msg import Int32 11 | from std_msgs.msg import Float32 12 | from std_msgs.msg import Float32MultiArray 13 | from std_srvs.srv import Empty 14 | from uvc_lamp.srv import Reset_uv 15 | from matplotlib import pyplot as plt 16 | 17 | from gym import spaces 18 | from gym.utils import seeding 19 | from tf.transformations import euler_from_quaternion 20 | 21 | class TurtleBot3EnvPixels(gym.Env): 22 | 23 | def __init__(self, 24 | goal_list=None, 25 | max_env_size=None, 26 | continuous=False, 27 | use_angle_vel=False, 28 | use_pixels_space=True, 29 | only_reward_end = True, 30 | reduce_coverage_state = False, 31 | lidar_size=24, #24 lidar 32 | action_size=4, 33 | min_range = 0.015, 34 | max_range = 3.5, 35 | min_ang_vel = -1.5, 36 | max_ang_vel = 1.5, 37 | min_cmd_ang = -math.pi, 38 | max_cmd_ang = math.pi, 39 | const_linear_vel = 0.15, 40 | collision_distance = 0.165, 41 | reward_goal=0, 42 | reward_covered=1, 43 | reward_collision=-100, 44 | robot_pos = 0, 45 | covered=0, 46 | covered_indicies = None, 47 | grids_to_cover=0, 48 | percentage_coverage_goal=0.6, 49 | covered_previous=0, 50 | covered_twice = 0, 51 | covered_twice_previous=0, 52 | map_width = 0, 53 | map_heigth = 0, 54 | pos_map = None, 55 | maps = None 56 | ): 57 | 58 | self.goal_x = 0 59 | self.goal_y = 0 60 | self.heading = 0 61 | self.initGoal = True 62 | self.get_goal = False 63 | self.use_angle_vel = use_angle_vel 64 | self.use_pixels_space = use_pixels_space 65 | self.reduce_coverage_state = reduce_coverage_state 66 | self.only_reward_end = only_reward_end 67 | self.position = Pose() 68 | self.pub_cmd_vel = rospy.Publisher('cmd_vel', Twist, queue_size=5) 69 | self.pub_cmd_ang = rospy.Publisher('cmd_ang', Float32, queue_size=5) 70 | self.sub_odom = rospy.Subscriber('odom', Odometry, self.getOdometry) 71 | 72 | 73 | #Coverage path planning observation 74 | self.robot_pos = robot_pos 75 | self.covered = covered 76 | self.covered_indicies = covered_indicies 77 | self.grids_to_cover = 
grids_to_cover 78 | self.percentage_coverage_goal = percentage_coverage_goal 79 | self.covered_previous = covered_previous 80 | self.covered_twice = covered_twice 81 | self.covered_twice_previous = covered_twice_previous 82 | self.reward_covered=reward_covered 83 | self.coverage_percentage = 0 84 | self.overlap_percentage = 0 85 | 86 | self.sub_pos_map = rospy.Subscriber("maps", Float32MultiArray, self.getMaps) 87 | self.pos_map = pos_map 88 | self.maps = maps 89 | self.reset_proxy = rospy.ServiceProxy('gazebo/reset_simulation', Empty) 90 | self.unpause_proxy = rospy.ServiceProxy('gazebo/unpause_physics', Empty) 91 | self.pause_proxy = rospy.ServiceProxy('gazebo/pause_physics', Empty) 92 | 93 | 94 | self.lidar_size = lidar_size 95 | self.const_linear_vel = const_linear_vel 96 | self.min_range = min_range 97 | self.max_range = max_range 98 | self.min_ang_vel = min_ang_vel 99 | self.max_ang_vel = max_ang_vel 100 | self.min_cmd_ang = min_cmd_ang 101 | self.max_cmd_ang = max_cmd_ang 102 | self.collision_distance = collision_distance 103 | self.reward_goal = reward_goal 104 | self.reward_collision = reward_collision 105 | self.continuous = continuous 106 | self.max_env_size = max_env_size 107 | self.map_width = map_width 108 | self.map_heigth = map_heigth 109 | 110 | 111 | if self.continuous: 112 | low, high, shape_value = self.get_action_space_values() 113 | self.action_space = spaces.Box(low=low, high=high, shape=(shape_value,), dtype=np.float32) 114 | else: 115 | self.action_space = spaces.Discrete(action_size) 116 | if self.use_angle_vel: 117 | ang_step = max_ang_vel/((action_size - 1)/2) 118 | else: 119 | ang_step = 2/action_size 120 | self.actions = [-1+action * ang_step for action in range(action_size)] 121 | 122 | while(self.map_heigth==0): 123 | time.sleep(0.5) 124 | if use_pixels_space: 125 | low, high = self.get_pixel_observation_space_values() 126 | self.observation_space = spaces.Box(low, high, dtype=np.uint8) 127 | else: 128 | low, high = self.get_observation_space_values() 129 | self.observation_space = spaces.Box(low, high, dtype=np.uint8) 130 | 131 | self.num_timesteps = 0 132 | self.lidar_distances = None 133 | self.ang_vel = 0 134 | self.cmd_ang = 0 135 | 136 | self.start_time = time.time() 137 | self.last_step_time = self.start_time 138 | 139 | self.seed() 140 | 141 | def seed(self, seed=None): 142 | self.np_random, seed = seeding.np_random(seed) 143 | return [seed] 144 | 145 | def get_action_space_values(self): 146 | if self.use_angle_vel: 147 | low = self.min_ang_vel 148 | high = self.max_ang_vel 149 | else: 150 | low = -1 151 | high = 1 152 | shape_value = 1 153 | 154 | return low, high, shape_value 155 | 156 | def get_pixel_observation_space_values(self): 157 | low = np.full((self.map_width, self.map_heigth, 3), 0) 158 | high = np.full((self.map_width, self.map_heigth, 3), 255) 159 | """if self.reduce_coverage_state: 160 | high = np.append(high, np.full((self.map_width, self.map_heigth, 1), 255), axis=2) 161 | else: 162 | high = np.append(high, np.full((self.map_width, self.map_heigth, 1), 255), axis=2)""" 163 | return low, high 164 | 165 | def get_observation_space_values(self): 166 | low = np.append(np.full(self.lidar_size, self.min_range), 0) 167 | low = np.append(low, np.full((self.map_width, self.map_heigth, 1), 0).flatten()) 168 | high = np.append(np.full(self.lidar_size, self.max_range), self.map_heigth*self.map_width) 169 | high = np.append(high, np.full((self.map_width, self.map_heigth, 1), 1).flatten()) 170 | #low = np.full(self.lidar_size, self.min_range) 171 
| #high = np.full(self.lidar_size, self.max_range) 172 | return low, high 173 | 174 | def _getGoalDistace(self): 175 | goal_distance = round(math.hypot(self.goal_x - self.position.x, self.goal_y - self.position.y), 2) 176 | return goal_distance 177 | 178 | def getOdometry(self, odom): 179 | self.position = odom.pose.pose.position 180 | orientation = odom.pose.pose.orientation 181 | orientation_list = [orientation.x, orientation.y, orientation.z, orientation.w] 182 | _, _, yaw = euler_from_quaternion(orientation_list) 183 | 184 | 185 | heading = yaw 186 | if heading > math.pi: 187 | heading -= 2 * math.pi 188 | 189 | elif heading < -math.pi: 190 | heading += 2 * math.pi 191 | 192 | self.heading = heading 193 | 194 | def getMaps(self, maps): 195 | #Data format: 196 | #multiarray(i,j,k) = data[data_offset + dim_stride[1]*i + dim_stride[2]*j + k] 197 | 198 | self.map_width = maps.layout.dim[0].size 199 | self.map_heigth = maps.layout.dim[1].size 200 | 201 | #Put data in np array. Should be possible to do more efficient with numpy 202 | self.maps =np.ndarray((maps.layout.dim[0].size,maps.layout.dim[1].size,maps.layout.dim[2].size)) 203 | for i in range(maps.layout.dim[0].size): 204 | for j in range(maps.layout.dim[1].size): 205 | for k in range(maps.layout.dim[2].size): 206 | self.maps[i,j,k] = maps.data[maps.layout.dim[1].stride*i + maps.layout.dim[2].stride*j + k] 207 | #Save space when saving in buffer 208 | self.maps = self.maps.astype(np.uint8) 209 | 210 | #Set covered and overlapped cells 211 | covered = self.maps[:,:,1] 212 | 213 | #Scale to 255 214 | self.maps[:,:,1] = covered*8 215 | 216 | occupied_grids = (self.maps[:,:,0]>1).sum() 217 | self.grids_to_cover = covered.size - occupied_grids 218 | self.covered = (covered > 5).sum() 219 | self.covered_twice = (covered > 30).sum() 220 | 221 | #Reduce to binary 222 | if self.reduce_coverage_state: 223 | self.maps[:,:,1] = (covered > 10)*255 224 | 225 | if not self.use_pixels_space: 226 | robot_index = np.where(self.maps == 255) 227 | self.robot_pos = int(robot_index[0]+robot_index[1] * self.map_width) 228 | covered_indicies = np.where(covered > 10) 229 | self.covered_indicies = covered_indicies[0]+covered_indicies[1]*self.map_width 230 | 231 | #Debug 232 | #print(self.robot_pos) 233 | #np.savetxt("maps0.csv", self.maps[:,:,0]) 234 | #np.savetxt("maps1.csv", self.maps[:,:,1]) 235 | #plt.imsave('maps0.png', self.maps[:,:,0], vmin=0,vmax=255) 236 | #plt.imsave('maps1.png', self.maps[:,:,1], vmin=0,vmax=255) 237 | #plt.imsave('maps2.png', self.maps[:,:,2], vmin=0,vmax=255) 238 | def get_time_info(self): 239 | time_info = time.strftime("%H:%M:%S", time.gmtime(time.time() - self.start_time)) 240 | time_info += '-' + str(self.num_timesteps) 241 | return time_info 242 | 243 | def episode_finished(self): 244 | pass 245 | 246 | 247 | def getState(self, scan): 248 | scan_range = [] 249 | done = False 250 | 251 | for i in range(len(scan.ranges)): 252 | if scan.ranges[i] == float('Inf'): 253 | scan_range.append(self.max_range) 254 | elif np.isnan(scan.ranges[i]): 255 | scan_range.append(self.min_range) 256 | else: 257 | scan_range.append(scan.ranges[i]) 258 | 259 | self.lidar_distances = scan_range 260 | 261 | if min(self.lidar_distances) < self.collision_distance: 262 | print('Collision!! Covered grids: ', self.covered) 263 | done = True 264 | if self.covered>self.grids_to_cover*self.percentage_coverage_goal: 265 | print('Goal!! 
Covered grids: ', self.covered) 266 | self.get_goal = True 267 | done = True 268 | if self.use_pixels_space: 269 | return self.maps, done 270 | else: 271 | obs = self.lidar_distances + [self.robot_pos] 272 | obs = np.append(obs, self.maps[:,:,1].flatten()) 273 | return obs, done 274 | 275 | def setReward(self, done): 276 | 277 | if self.get_goal: 278 | reward = self.reward_covered*(self.covered - 0.4*self.covered_twice) 279 | reward = self.reward_goal 280 | self.get_goal = False 281 | self.pub_cmd_vel.publish(Twist()) 282 | elif done: 283 | reward = self.reward_collision + self.reward_covered*(self.covered - 0.4*self.covered_twice) 284 | self.pub_cmd_vel.publish(Twist()) 285 | else: 286 | if not self.only_reward_end: 287 | reward = self.reward_covered*((self.covered-self.covered_previous) - 0.4*(self.covered_twice-self.covered_twice_previous)) 288 | self.covered_previous = self.covered 289 | self.covered_twice_previous = self.covered_twice 290 | else: 291 | reward = 0 292 | #print("Reward:", reward) 293 | return reward 294 | 295 | def set_ang_vel(self, action): 296 | if self.use_angle_vel: 297 | if self.continuous: 298 | self.ang_vel = action 299 | else: 300 | self.ang_vel = self.actions[action] 301 | else: 302 | if self.continuous: 303 | self.cmd_ang = math.pi*action 304 | else: 305 | self.cmd_ang = math.pi*self.actions[action] 306 | 307 | def step(self, action): 308 | 309 | self.set_ang_vel(action) 310 | if self.use_angle_vel: 311 | vel_cmd = Twist() 312 | vel_cmd.linear.x = self.const_linear_vel 313 | vel_cmd.angular.z = self.ang_vel 314 | self.pub_cmd_vel.publish(vel_cmd) 315 | else: 316 | self.pub_cmd_ang.publish(self.cmd_ang) 317 | 318 | data = None 319 | while data is None: 320 | try: 321 | data = rospy.wait_for_message('scan', LaserScan, timeout=5) 322 | except: 323 | pass 324 | 325 | state, done = self.getState(data) 326 | reward = self.setReward(done) 327 | self.num_timesteps += 1 328 | 329 | return np.asarray(state), reward, done, {} 330 | 331 | 332 | def reset(self): 333 | self.coverage_percentage = 100*self.covered/self.grids_to_cover 334 | self.overlap_percentage = 100*self.covered_twice/self.grids_to_cover 335 | rospy.wait_for_service('gazebo/reset_simulation') 336 | try: 337 | self.reset_proxy() 338 | except rospy.ServiceException: 339 | print("gazebo/reset_simulation service call failed") 340 | 341 | uv_reset = False 342 | while(not uv_reset): 343 | try: 344 | rospy.wait_for_service('reset_uv', 10.0) 345 | reset_uv = rospy.ServiceProxy('reset_uv', Empty) 346 | reset_uv() 347 | self.covered_previous=0 348 | self.covered_twice_previous=0 349 | uv_reset = True 350 | break 351 | except (rospy.exceptions.ROSException, rospy.service.ServiceException): 352 | print("reset_uv service call failed, unpausing and retrying") 353 | try: 354 | rospy.wait_for_service('gazebo/unpause_physics', 2.0) 355 | print("unpausing") 356 | self.unpause_proxy() 357 | rospy.wait_for_service('reset_uv', 2.0) 358 | reset_uv = rospy.ServiceProxy('reset_uv', Empty) 359 | reset_uv() 360 | rospy.wait_for_service('gazebo/pause_physics', 2.0) 361 | self.pause_proxy() 362 | except (rospy.exceptions.ROSException, rospy.service.ServiceException): 363 | print("reset_uv or gazebo physics service call failing, retrying") 364 | 365 | data = None 366 | while data is None: 367 | try: 368 | data = rospy.wait_for_message('scan', LaserScan, timeout=5) 369 | except: 370 | pass 371 | state, _ = self.getState(data) 372 | 373 | return np.asarray(state) 374 | 375 | 376 | def render(self, mode=None): 377 | pass 378 | 379 | 380 
| def close(self): 381 | self.reset() 382 | -------------------------------------------------------------------------------- /turtleRL_ws/.catkin_workspace: -------------------------------------------------------------------------------- 1 | # This file currently only serves to mark the location of a catkin workspace for tool integration 2 | -------------------------------------------------------------------------------- /turtleRL_ws/map_tf/map_transform.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/map_tf/map_transform.pkl -------------------------------------------------------------------------------- /turtleRL_ws/map_tf/test.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/map_tf/test.pkl -------------------------------------------------------------------------------- /turtleRL_ws/map_tf/world_tf.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/map_tf/world_tf.pkl -------------------------------------------------------------------------------- /turtleRL_ws/map_transform.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/map_transform.pkl -------------------------------------------------------------------------------- /turtleRL_ws/maps/circuit_simple.pgm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/maps/circuit_simple.pgm -------------------------------------------------------------------------------- /turtleRL_ws/maps/circuit_simple.yaml: -------------------------------------------------------------------------------- 1 | image: circuit_simple.pgm 2 | resolution: 0.050000 3 | origin: [-4.000000, -4.000000, 0.000000] 4 | negate: 0 5 | occupied_thresh: 0.65 6 | free_thresh: 0.196 7 | 8 | -------------------------------------------------------------------------------- /turtleRL_ws/maps/small_house_15cm.pgm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/maps/small_house_15cm.pgm -------------------------------------------------------------------------------- /turtleRL_ws/maps/small_house_15cm.yaml: -------------------------------------------------------------------------------- 1 | image: small_house_15cm.pgm 2 | resolution: 0.150000 3 | origin: [-4.050000, -4.050000, 0.000000] 4 | negate: 0 5 | occupied_thresh: 0.65 6 | free_thresh: 0.196 7 | 8 | -------------------------------------------------------------------------------- /turtleRL_ws/maps/small_house_20cm.pgm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/maps/small_house_20cm.pgm -------------------------------------------------------------------------------- /turtleRL_ws/maps/small_house_20cm.yaml: -------------------------------------------------------------------------------- 1 | 
image: small_house_20cm.pgm 2 | resolution: 0.200000 3 | origin: [-4.000000, -4.000000, 0.000000] 4 | negate: 0 5 | occupied_thresh: 0.65 6 | free_thresh: 0.196 7 | 8 | -------------------------------------------------------------------------------- /turtleRL_ws/maps/small_room.pgm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/maps/small_room.pgm -------------------------------------------------------------------------------- /turtleRL_ws/maps/small_room.yaml: -------------------------------------------------------------------------------- 1 | image: small_room.pgm 2 | resolution: 0.200000 3 | origin: [-4.000000, -4.000000, 0.000000] 4 | negate: 0 5 | occupied_thresh: 0.65 6 | free_thresh: 0.196 7 | 8 | -------------------------------------------------------------------------------- /turtleRL_ws/maps/small_room_40cm.pgm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/maps/small_room_40cm.pgm -------------------------------------------------------------------------------- /turtleRL_ws/maps/small_room_40cm.yaml: -------------------------------------------------------------------------------- 1 | image: small_room_40cm.pgm 2 | resolution: 0.400000 3 | origin: [-4.000000, -4.000000, 0.000000] 4 | negate: 0 5 | occupied_thresh: 0.65 6 | free_thresh: 0.196 7 | 8 | -------------------------------------------------------------------------------- /turtleRL_ws/maps/stage1.pgm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/maps/stage1.pgm -------------------------------------------------------------------------------- /turtleRL_ws/maps/stage1.yaml: -------------------------------------------------------------------------------- 1 | image: stage1.pgm 2 | resolution: 0.050000 3 | origin: [-10.000000, -10.000000, 0.000000] 4 | negate: 0 5 | occupied_thresh: 0.65 6 | free_thresh: 0.196 7 | 8 | -------------------------------------------------------------------------------- /turtleRL_ws/maps/stage_4_no_obstacle.pgm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/h-brenne/cppRL/9747dc7a49ae114f284f0802c1aa5737effe4918/turtleRL_ws/maps/stage_4_no_obstacle.pgm -------------------------------------------------------------------------------- /turtleRL_ws/maps/stage_4_no_obstacle.yaml: -------------------------------------------------------------------------------- 1 | image: stage_4_no_obstacle.pgm 2 | resolution: 0.050000 3 | origin: [-4.000000, -4.000000, 0.000000] 4 | negate: 0 5 | occupied_thresh: 0.65 6 | free_thresh: 0.196 7 | 8 | -------------------------------------------------------------------------------- /turtleRL_ws/src/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | /opt/ros/noetic/share/catkin/cmake/toplevel.cmake -------------------------------------------------------------------------------- /turtleRL_ws/src/map_constant_tf/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 3.0.2) 2 | project(map_constant_tf) 3 | 4 | ## Compile as C++11, supported in ROS Kinetic and newer 5 | # 
add_compile_options(-std=c++11) 6 | 7 | ## Find catkin macros and libraries 8 | ## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz) 9 | ## is used, also find other catkin packages 10 | find_package(catkin REQUIRED COMPONENTS 11 | rospy 12 | ) 13 | 14 | ## System dependencies are found with CMake's conventions 15 | # find_package(Boost REQUIRED COMPONENTS system) 16 | 17 | 18 | ## Uncomment this if the package has a setup.py. This macro ensures 19 | ## modules and global scripts declared therein get installed 20 | ## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html 21 | # catkin_python_setup() 22 | 23 | ################################################ 24 | ## Declare ROS messages, services and actions ## 25 | ################################################ 26 | 27 | ## To declare and build messages, services or actions from within this 28 | ## package, follow these steps: 29 | ## * Let MSG_DEP_SET be the set of packages whose message types you use in 30 | ## your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...). 31 | ## * In the file package.xml: 32 | ## * add a build_depend tag for "message_generation" 33 | ## * add a build_depend and a exec_depend tag for each package in MSG_DEP_SET 34 | ## * If MSG_DEP_SET isn't empty the following dependency has been pulled in 35 | ## but can be declared for certainty nonetheless: 36 | ## * add a exec_depend tag for "message_runtime" 37 | ## * In this file (CMakeLists.txt): 38 | ## * add "message_generation" and every package in MSG_DEP_SET to 39 | ## find_package(catkin REQUIRED COMPONENTS ...) 40 | ## * add "message_runtime" and every package in MSG_DEP_SET to 41 | ## catkin_package(CATKIN_DEPENDS ...) 42 | ## * uncomment the add_*_files sections below as needed 43 | ## and list every .msg/.srv/.action file to be processed 44 | ## * uncomment the generate_messages entry below 45 | ## * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...) 46 | 47 | ## Generate messages in the 'msg' folder 48 | # add_message_files( 49 | # FILES 50 | # Message1.msg 51 | # Message2.msg 52 | # ) 53 | 54 | ## Generate services in the 'srv' folder 55 | # add_service_files( 56 | # FILES 57 | # Service1.srv 58 | # Service2.srv 59 | # ) 60 | 61 | ## Generate actions in the 'action' folder 62 | # add_action_files( 63 | # FILES 64 | # Action1.action 65 | # Action2.action 66 | # ) 67 | 68 | ## Generate added messages and services with any dependencies listed here 69 | # generate_messages( 70 | # DEPENDENCIES 71 | # std_msgs # Or other packages containing msgs 72 | # ) 73 | 74 | ################################################ 75 | ## Declare ROS dynamic reconfigure parameters ## 76 | ################################################ 77 | 78 | ## To declare and build dynamic reconfigure parameters within this 79 | ## package, follow these steps: 80 | ## * In the file package.xml: 81 | ## * add a build_depend and a exec_depend tag for "dynamic_reconfigure" 82 | ## * In this file (CMakeLists.txt): 83 | ## * add "dynamic_reconfigure" to 84 | ## find_package(catkin REQUIRED COMPONENTS ...) 
85 | ## * uncomment the "generate_dynamic_reconfigure_options" section below 86 | ## and list every .cfg file to be processed 87 | 88 | ## Generate dynamic reconfigure parameters in the 'cfg' folder 89 | # generate_dynamic_reconfigure_options( 90 | # cfg/DynReconf1.cfg 91 | # cfg/DynReconf2.cfg 92 | # ) 93 | 94 | ################################### 95 | ## catkin specific configuration ## 96 | ################################### 97 | ## The catkin_package macro generates cmake config files for your package 98 | ## Declare things to be passed to dependent projects 99 | ## INCLUDE_DIRS: uncomment this if your package contains header files 100 | ## LIBRARIES: libraries you create in this project that dependent projects also need 101 | ## CATKIN_DEPENDS: catkin_packages dependent projects also need 102 | ## DEPENDS: system dependencies of this project that dependent projects also need 103 | catkin_package( 104 | # INCLUDE_DIRS include 105 | # LIBRARIES map_constant_tf 106 | # CATKIN_DEPENDS rospy 107 | # DEPENDS system_lib 108 | ) 109 | 110 | ########### 111 | ## Build ## 112 | ########### 113 | 114 | ## Specify additional locations of header files 115 | ## Your package locations should be listed before other locations 116 | include_directories( 117 | # include 118 | ${catkin_INCLUDE_DIRS} 119 | ) 120 | 121 | ## Declare a C++ library 122 | # add_library(${PROJECT_NAME} 123 | # src/${PROJECT_NAME}/map_constant_tf.cpp 124 | # ) 125 | 126 | ## Add cmake target dependencies of the library 127 | ## as an example, code may need to be generated before libraries 128 | ## either from message generation or dynamic reconfigure 129 | # add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS}) 130 | 131 | ## Declare a C++ executable 132 | ## With catkin_make all packages are built within a single CMake context 133 | ## The recommended prefix ensures that target names across packages don't collide 134 | # add_executable(${PROJECT_NAME}_node src/map_constant_tf_node.cpp) 135 | 136 | ## Rename C++ executable without prefix 137 | ## The above recommended prefix causes long target names, the following renames the 138 | ## target back to the shorter version for ease of user use 139 | ## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node" 140 | # set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "") 141 | 142 | ## Add cmake target dependencies of the executable 143 | ## same as for the library above 144 | # add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS}) 145 | 146 | ## Specify libraries to link a library or executable target against 147 | # target_link_libraries(${PROJECT_NAME}_node 148 | # ${catkin_LIBRARIES} 149 | # ) 150 | 151 | ############# 152 | ## Install ## 153 | ############# 154 | 155 | # all install targets should use catkin DESTINATION variables 156 | # See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html 157 | 158 | ## Mark executable scripts (Python etc.) 
for installation 159 | ## in contrast to setup.py, you can choose the destination 160 | catkin_install_python(PROGRAMS 161 | nodes/saveMaptf.py 162 | nodes/broadcastMaptf.py 163 | DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} 164 | ) 165 | 166 | ## Mark executables for installation 167 | ## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_executables.html 168 | # install(TARGETS ${PROJECT_NAME}_node 169 | # RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} 170 | # ) 171 | 172 | ## Mark libraries for installation 173 | ## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_libraries.html 174 | # install(TARGETS ${PROJECT_NAME} 175 | # ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} 176 | # LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} 177 | # RUNTIME DESTINATION ${CATKIN_GLOBAL_BIN_DESTINATION} 178 | # ) 179 | 180 | ## Mark cpp header files for installation 181 | # install(DIRECTORY include/${PROJECT_NAME}/ 182 | # DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION} 183 | # FILES_MATCHING PATTERN "*.h" 184 | # PATTERN ".svn" EXCLUDE 185 | # ) 186 | 187 | ## Mark other files for installation (e.g. launch and bag files, etc.) 188 | # install(FILES 189 | # # myfile1 190 | # # myfile2 191 | # DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION} 192 | # ) 193 | 194 | ############# 195 | ## Testing ## 196 | ############# 197 | 198 | ## Add gtest based cpp test target and link libraries 199 | # catkin_add_gtest(${PROJECT_NAME}-test test/test_map_constant_tf.cpp) 200 | # if(TARGET ${PROJECT_NAME}-test) 201 | # target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME}) 202 | # endif() 203 | 204 | ## Add folders to be run by python nosetests 205 | # catkin_add_nosetests(test) 206 | -------------------------------------------------------------------------------- /turtleRL_ws/src/map_constant_tf/nodes/broadcastMaptf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import roslib 3 | import rospy 4 | import sys 5 | 6 | import tf 7 | import geometry_msgs.msg 8 | import pickle 9 | 10 | if __name__ == '__main__': 11 | rospy.init_node('map_tf_broadcaster') 12 | 13 | myargv = rospy.myargv(argv=sys.argv) 14 | if len(myargv)!=2: 15 | rospy.logerr("Usage: broadcastMaptf.py map_tf_name") 16 | quit() 17 | file_name = myargv[1] 18 | file_name += ".pkl" 19 | 20 | with open(file_name, 'rb') as p_in: 21 | pose = pickle.load(p_in) 22 | rospy.loginfo(pose) 23 | br = tf.TransformBroadcaster() 24 | 25 | rate = rospy.Rate(30.0) 26 | while not rospy.is_shutdown(): 27 | rospy.loginfo(rospy.Time.now()) 28 | br.sendTransform(pose[0],pose[1], rospy.Time.now(), "odom", "map") 29 | rate.sleep() -------------------------------------------------------------------------------- /turtleRL_ws/src/map_constant_tf/nodes/saveMaptf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import roslib 3 | import rospy 4 | import sys 5 | 6 | import tf 7 | import geometry_msgs.msg 8 | import pickle 9 | 10 | if __name__ == '__main__': 11 | myargv = rospy.myargv(argv=sys.argv) 12 | if len(myargv)!=2: 13 | rospy.logerr("Usage: saveMaptf.py map_tf_filename") 14 | quit() 15 | file_name = myargv[1] 16 | file_name += ".pkl" 17 | rospy.init_node('turtle_tf_broadcaster') 18 | 19 | listener = tf.TransformListener() 20 | 21 | rate = rospy.Rate(1) 22 | while not rospy.is_shutdown(): 23 | try: 24 | pose = listener.lookupTransform('/odom', '/map', rospy.Time(0)) 25 | except 
(tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException): 26 | continue 27 | with open(file_name, 'wb') as out: 28 | pickle.dump(pose, out, pickle.HIGHEST_PROTOCOL) 29 | rospy.loginfo_throttle(1,"Map transform saved, ctrl-c to exit") 30 | -------------------------------------------------------------------------------- /turtleRL_ws/src/map_constant_tf/package.xml: -------------------------------------------------------------------------------- 1 | <?xml version="1.0"?> 2 | <package format="2"> 3 | <name>map_constant_tf</name> 4 | <version>0.0.0</version> 5 | <description>The map_constant_tf package</description> 6 | 7 | 8 | 9 | 10 | <maintainer>havard</maintainer> 11 | 12 | 13 | 14 | 15 | 16 | <license>TODO</license> 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | <buildtool_depend>catkin</buildtool_depend> 52 | <build_depend>rospy</build_depend> 53 | <build_export_depend>rospy</build_export_depend> 54 | <exec_depend>rospy</exec_depend> 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | </package> --------------------------------------------------------------------------------