├── .gitignore
├── README.md
└── scripts
    ├── display_patch_dataset.py
    ├── extract_patch_dataset.py
    └── obfuscate_names_copy.py

/.gitignore:
--------------------------------------------------------------------------------
patch_dataset
grasping_data
grasping_data_small
*.pyc
*.tar.gz
.tmp
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Home Grasping Dataset

Tools for downloading and using the home grasping dataset from the [Robot Learning in Homes](http://papers.nips.cc/paper/8123-robot-learning-in-homes-improving-generalization-and-reducing-dataset-bias.pdf) paper. This pruned dataset contains 28,874 grasp attempts with an 18.6% grasp success rate.

## Download links
There are three datasets you can work with from this repository:
* [The full raw grasping data collected on our low-cost robot (~30GB)](https://www.dropbox.com/s/njw0extmmon2yro/grasping_data.tar.gz)
* [A smaller subsample of the raw grasping data collected on our low-cost robot (~1.2GB)](https://www.dropbox.com/s/vzricn40z2n4la5/grasping_data_small.tar.gz)
* [The patch dataset extracted from the full grasp dataset (~5.3GB)](https://www.dropbox.com/s/k5eogg3nuc5ybtv/patch_dataset.tar.gz)

## Data organization
The raw grasping data is organized as follows:

```bash
.
├── grasping_data
│   ├── ROBOT_ID
│   │   ├── GRASP_ENVIRONMENT
│   │   │   ├── grasp_ATTEMPT_ID
│   │   │   │   ├── color.jpg
│   │   │   │   ├── depth.jpg
│   │   │   │   ├── data.p
└── . . . .
```

`ROBOT_ID` runs from 1 to 5 and corresponds to the robot on which the data was collected. `GRASP_ENVIRONMENT` corresponds to a grasping run; all the grasps in a specific environment run are in the same home. `ATTEMPT_ID` corresponds to a specific grasp attempt in the `GRASP_ENVIRONMENT`. Each grasp attempt contains the RGB image `color.jpg`, the depth image `depth.jpg`, and a pickle file `data.p` with information about the grasp attempt (such as success or failure).

## Getting started with the data
To get started, we will use the smaller subsample. To get the full data, change the `wget` download to https://www.dropbox.com/s/njw0extmmon2yro/grasping_data.tar.gz.

```bash
cd ~
git clone https://github.com/lerrel/home_dataset.git
cd home_dataset
wget https://www.dropbox.com/s/vzricn40z2n4la5/grasping_data_small.tar.gz
tar -xvzf grasping_data_small.tar.gz
```

To read the data, a few helper scripts are provided in `scripts`. To use them, you will need to install the appropriate packages for Python 2.7:

```bash
sudo apt-get install python-numpy python-scipy
pip install cloudpickle ipython==5 opencv-python temp argparse pytest-shutil
```
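If you want to inspect a single grasp attempt before running the full pipeline, the sketch below loads one attempt's pickle and image by hand. This is a minimal example rather than one of the provided scripts; the attempt directory `1/env1/grasp_0` is hypothetical, and it assumes `data.p` carries the keys that `scripts/extract_patch_dataset.py` relies on (`grasp_x_img`, `grasp_y_img`, `grasp_theta`, `valid_grasp`, `grasp_result`):

```python
import os
import cloudpickle as pkl
import cv2

# Hypothetical attempt directory: substitute a real
# ROBOT_ID/GRASP_ENVIRONMENT/grasp_ATTEMPT_ID path from your download.
attempt_dir = os.path.expanduser('~/home_dataset/grasping_data_small/1/env1/grasp_0')

# data.p holds the grasp annotation: pixel location, angle, validity, outcome.
data = pkl.load(open(os.path.join(attempt_dir, 'data.p'), 'rb'))
print('valid: {}, success: {}, grasp at ({}, {}), theta: {}'.format(
    data['valid_grasp'], data['grasp_result'],
    data['grasp_x_img'], data['grasp_y_img'], data['grasp_theta']))

# color.jpg is the RGB image the grasp was executed on.
image = cv2.imread(os.path.join(attempt_dir, 'color.jpg'))
cv2.imshow('color', image)
cv2.waitKey(0)
```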
Run the following script to display and extract the patch dataset from the raw grasping data:

```bash
python scripts/extract_patch_dataset.py --home_dataset_path ~/home_dataset/grasping_data_small --patch_dataset_path '/tmp/' --train_fraction 0.8 --display 1 --msec 1000
```

To download the pre-extracted patch dataset:

```bash
cd ~/home_dataset
wget https://www.dropbox.com/s/k5eogg3nuc5ybtv/patch_dataset.tar.gz
tar -xvzf patch_dataset.tar.gz
```

To visualize the patch dataset, run:

```bash
python scripts/display_patch_dataset.py --patch_dataset_path ~/home_dataset/patch_dataset/Train --pos 1 --rand 0 --msec 1000
```
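For reference, `scripts/extract_patch_dataset.py` writes the patch dataset in the layout below; the downloadable patch dataset should match, but verify against your extracted copy:

```bash
.
├── Train
│   ├── positive
│   │   ├── Images          # 150x150 grasp patches with random .jpg names
│   │   └── dataInfo.txt    # CSV: PatchFilePath, Theta, Path, GraspHeight, GraspWidth
│   └── negative
│       ├── Images
│       └── dataInfo.txt
└── Validation
    └── . . . .
```

Each row of `dataInfo.txt` maps a patch image to its grasp angle `Theta` (in radians), the raw grasp attempt it came from (`Path`), and the grasp pixel location (`GraspHeight`, `GraspWidth`).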
## Acknowledgments

The grasp data used is from the [Robot Learning in Homes](http://papers.nips.cc/paper/8123-robot-learning-in-homes-improving-generalization-and-reducing-dataset-bias.pdf) paper.

```
@inproceedings{gupta2018robot,
  title={Robot learning in homes: Improving generalization and reducing dataset bias},
  author={Gupta, Abhinav and Murali, Adithyavairavan and Gandhi, Dhiraj Prakashchand and Pinto, Lerrel},
  booktitle={Advances in Neural Information Processing Systems},
  pages={9112--9122},
  year={2018}
}
```
--------------------------------------------------------------------------------
/scripts/display_patch_dataset.py:
--------------------------------------------------------------------------------
'''
Code to display the patch dataset.

Dependencies: argparse ('pip install argparse')
              cv2 ('conda install -c menpo opencv=2.4.11' or install opencv from source)
              numpy ('pip install numpy' or 'conda install numpy')

Template run:
    python display_patch_dataset.py --patch_dataset_path PATH_TO_PATCH_DATASET --pos {0, 1} --rand {0, 1} --msec {1, 2, 3, ...}
Example run:
    python display_patch_dataset.py --patch_dataset_path ~/home_dataset/patch_dataset/Train --pos 1 --rand 0 --msec 1000
'''

import os
import argparse
import cv2
import numpy as np

## Resizes the image to new_size x new_size and then draws a rectangle rotated by grasp_angle
def process_and_draw_rect(I, grasp_angle, new_size=500):
    I_temp = cv2.resize(I, (new_size, new_size))
    grasp_l = new_size/3.0
    grasp_w = new_size/6.0
    ## Axis-aligned rectangle corners centered on the origin
    points = np.array([[-grasp_l, -grasp_w],
                       [grasp_l, -grasp_w],
                       [grasp_l, grasp_w],
                       [-grasp_l, grasp_w]])
    ## 2D rotation matrix for grasp_angle
    R = np.array([[np.cos(grasp_angle), -np.sin(grasp_angle)],
                  [np.sin(grasp_angle), np.cos(grasp_angle)]])
    rot_points = np.dot(R, points.transpose()).transpose()
    im_points = rot_points + new_size/2.0
    cv2.line(I_temp, tuple(im_points[0].astype(int)), tuple(im_points[1].astype(int)), color=(0,255,0), thickness=5)
    cv2.line(I_temp, tuple(im_points[1].astype(int)), tuple(im_points[2].astype(int)), color=(0,0,255), thickness=5)
    cv2.line(I_temp, tuple(im_points[2].astype(int)), tuple(im_points[3].astype(int)), color=(0,255,0), thickness=5)
    cv2.line(I_temp, tuple(im_points[3].astype(int)), tuple(im_points[0].astype(int)), color=(0,0,255), thickness=5)
    return I_temp

parser = argparse.ArgumentParser()
parser.add_argument('--patch_dataset_path', type=str, help='The patch dataset you want to display')
parser.add_argument('--pos', type=int, default=1, help='0 if you want to display negative data; 1 if you want to display positive data')
parser.add_argument('--rand', type=int, default=1, help='1 to display in random order; 0 otherwise')
parser.add_argument('--msec', type=int, default=500, help='milliseconds between displays')

## Parse arguments
args = parser.parse_args()
dat = args.patch_dataset_path
if args.pos == 1:
    dat_typ = 'positive'
else:
    dat_typ = 'negative'
shuf = args.rand
msec = args.msec

## Create paths
dat_path = os.path.join(dat, dat_typ)
images_path = os.path.join(dat_path, 'Images')
info_path = os.path.join(dat_path, 'dataInfo.txt')

with open(info_path) as f:
    content = f.readlines()
del(content[0])  # the first line is the CSV header

proc_content = []
for x in content:
    x_l = x.strip().split(',')
    x_l[1] = float(x_l[1])  # Theta column
    proc_content.append(x_l)

if shuf == 1:
    import random
    random.shuffle(proc_content)

for d in proc_content:
    impath = os.path.join(images_path, d[0])
    grasp_angle = d[1]
    I = cv2.imread(impath)
    if I is None:  # skip entries whose image file is missing
        continue
    I_grasp = process_and_draw_rect(I, grasp_angle)
    cv2.imshow('image', I_grasp)
    cv2.waitKey(msec)

cv2.destroyAllWindows()
--------------------------------------------------------------------------------
/scripts/extract_patch_dataset.py:
--------------------------------------------------------------------------------
'''
Code that extracts the patch dataset from the home grasping data collected in http://papers.nips.cc/paper/8123-robot-learning-in-homes-improving-generalization-and-reducing-dataset-bias.pdf

Example run:
    python extract_patch_dataset.py --home_dataset_path ~/home_dataset/grasping_data --patch_dataset_path '/tmp/' --train_fraction 0.8 --display 1 --msec 1000
'''
import cloudpickle as pkl
import os
from os.path import join, getsize
from IPython import embed
import cv2
import numpy as np
from copy import deepcopy as copy
import tempfile
import shutil
import csv
import scipy
from scipy import misc
import random
import string
import argparse

HOME_DIR = os.environ['HOME']
DATA_ROOT_DIR = os.path.join(HOME_DIR, 'home_dataset/grasping_data_small')
GRASP_SUBNAME = 'grasp_'
DATA_FILE = 'data.p'
ESSENTIAL_KEYS = ['grasp_x_img', 'grasp_y_img', 'valid_grasp', 'grasp_theta', 'grasp_result']
VALID_KEY = 'valid_grasp'
SUCCESS_KEY = 'grasp_result'
IMAGE_FILE = 'color.jpg'
DEPTH_FILE = 'depth.jpg'
PATCH_SIZE = 150

## Balances the dataset such that each grasp environment has at least a MIN_SUCCESS_RATE rate of grasp success
MIN_SUCCESS_RATE = 0.25
# MIN_SUCCESS_RATE = 0.0

class DataHandler(object):
    def __init__(self, train_root_dir, val_root_dir=None, train_rf=[''], train_ef=[''], val_rf=[''], val_ef=['']):
        self.train_grasp_dirs = np.array(self.parse_grasp_data(train_root_dir, train_rf, train_ef))
        self.train_dataset = self.randomize_dataset(self.train_grasp_dirs)
        if val_root_dir is not None:
            self.val_grasp_dirs = np.array(self.parse_grasp_data(val_root_dir, val_rf, val_ef))
            self.val_dataset = self.randomize_dataset(self.val_grasp_dirs)

    def write_train_val(self, data_dir, **kwargs):
        self.write_dataset(self.train_dataset, os.path.join(data_dir, 'Train'), **kwargs)
        self.write_dataset(self.val_dataset, os.path.join(data_dir, 'Validation'), **kwargs)

    def random_val_split(self, dataset, train_fraction=0.8):
        dataset = copy(dataset)
        l = len(dataset)
        nt = int(l*train_fraction)
        perms = np.random.permutation(l)
        self.train_dataset = dataset[perms[:nt]]
        self.val_dataset = dataset[perms[nt:]]
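        ## e.g. with 1,000 grasp attempts and train_fraction=0.8, nt = 800, so
        ## a random 800 attempts become self.train_dataset and the remaining
        ## 200 become self.val_dataset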

    def write_dataset(self, dataset, data_dir, **kwargs):
        self.mkdir(data_dir)
        pos_data_dir = os.path.join(data_dir, 'positive')
        neg_data_dir = os.path.join(data_dir, 'negative')
        pos_image_dir = os.path.join(pos_data_dir, 'Images')
        neg_image_dir = os.path.join(neg_data_dir, 'Images')
        pos_data_file = os.path.join(pos_data_dir, 'dataInfo.txt')
        neg_data_file = os.path.join(neg_data_dir, 'dataInfo.txt')

        self.mkdir([pos_data_dir, neg_data_dir, pos_image_dir, neg_image_dir])
        pos_csv_file = open(pos_data_file, 'w')
        neg_csv_file = open(neg_data_file, 'w')
        pos_csv_writer = csv.writer(pos_csv_file)
        neg_csv_writer = csv.writer(neg_csv_file)
        pos_csv_writer.writerow(['PatchFilePath', 'Theta', 'Path', 'GraspHeight', 'GraspWidth'])
        neg_csv_writer.writerow(['PatchFilePath', 'Theta', 'Path', 'GraspHeight', 'GraspWidth'])
        for idx, d in enumerate(dataset):
            print('Grasp #{} Grasp Folder: {} Success: {}'.format(idx, d[0], d[1]['grasp_result']))
            grasp_datapoints = self.process_datapoint(d, **kwargs)
            if grasp_datapoints[0] is None:
                continue
            #grasp_datapoints = self.get_labeled_datapoints(grasp_datapoints)
            for gd in grasp_datapoints:
                success = gd['success']
                theta_label = gd['theta']
                image = gd['image']
                im_name = gd['image_name']
                h = gd['h']
                w = gd['w']
                if success:
                    csv_writer = pos_csv_writer
                    image_dir = pos_image_dir
                else:
                    csv_writer = neg_csv_writer
                    image_dir = neg_image_dir
                csv_writer.writerow([im_name, theta_label, '/'.join(d[0].split('/')[3:]), h, w])
                cv2.imwrite(os.path.join(image_dir, im_name), image)
        ## Close the CSV files so buffered rows are flushed to disk
        pos_csv_file.close()
        neg_csv_file.close()

    def split_train_val(self, train_fraction=0.8):
        ## Alternative in-place split over the full list of parsed grasp directories
        n_dataset = len(self.train_grasp_dirs)
        n_train = int(n_dataset*train_fraction)
        random_idxs = np.random.permutation(n_dataset)
        train_idxs = random_idxs[:n_train]
        val_idxs = random_idxs[n_train:]
        self.train_dataset = self.train_grasp_dirs[train_idxs]
        self.val_dataset = self.train_grasp_dirs[val_idxs]

    def randomize_dataset(self, dataset):
        n_dataset = len(dataset)
        random_idxs = np.random.permutation(n_dataset)
        return dataset[random_idxs]

    def parse_grasp_data(self, grasp_root_dir, robot_filters=[''], env_filters=[''], grasp_filters=[GRASP_SUBNAME]):
        grasp_dirs = []
        env_stats = []
        robot_ids = os.listdir(grasp_root_dir)
        robot_ids = self.filter_list(robot_ids, robot_filters)
        for robot_id in robot_ids:
            grasp_dir_robot = os.path.join(grasp_root_dir, robot_id)
            grasp_envs = os.listdir(grasp_dir_robot)
            grasp_envs = self.filter_list(grasp_envs, env_filters)
            for grasp_env in grasp_envs:
                grasp_dir_robot_env = os.path.join(grasp_dir_robot, grasp_env)
                print(grasp_dir_robot_env)
                grasp_attempts = os.listdir(grasp_dir_robot_env)
                grasp_attempts = self.filter_list(grasp_attempts, grasp_filters)
                ge_succ = 0
                ge_total = 0
                pos_attempts = []
                neg_attempts = []
                for grasp_attempt in grasp_attempts:
                    grasp_dir_robot_env_attempt = os.path.join(grasp_dir_robot_env, grasp_attempt)
                    grasp_data_file = os.path.join(grasp_dir_robot_env_attempt, DATA_FILE)
                    if not os.path.isfile(grasp_data_file):
                        continue
                    try:
                        grasp_data = pkl.load(open(grasp_data_file, 'rb'))
                    except Exception:
                        print('Error in file: {}'.format(grasp_data_file))
                        continue
                    contains_keys = self.check_keys(ESSENTIAL_KEYS, grasp_data.keys())
                    if not contains_keys:
                        continue
                    isvalid = grasp_data[VALID_KEY]
                    if not isvalid:
                        continue
                    success = grasp_data[SUCCESS_KEY]
                    if success:
                        pos_attempts.append([grasp_dir_robot_env_attempt, grasp_data])
                        ge_succ += 1
                    else:
                        neg_attempts.append([grasp_dir_robot_env_attempt, grasp_data])
                    ge_total += 1
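                ## Balancing note: with ge_succ successes, keeping at most
                ## ge_succ*(1-r)/r failures guarantees a success rate of at
                ## least r = MIN_SUCCESS_RATE, since
                ## ge_succ/(ge_succ + ge_succ*(1-r)/r) = r.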
                g_success_rate = float(ge_succ)/(ge_total+0.00001)
                if g_success_rate >= MIN_SUCCESS_RATE:
                    grasp_dirs = grasp_dirs+pos_attempts
                    grasp_dirs = grasp_dirs+neg_attempts
                else:
                    ge_total_old = ge_total
                    ge_fail_pred = float(ge_succ)*(1-MIN_SUCCESS_RATE)/(MIN_SUCCESS_RATE)
                    grasp_dirs = grasp_dirs+pos_attempts
                    grasp_dirs = grasp_dirs+neg_attempts[:int(ge_fail_pred)]
                    ge_fail = len(neg_attempts[:int(ge_fail_pred)])
                    ge_total = ge_succ+ge_fail
                    print('Balancing...: ', grasp_dir_robot_env, ge_succ, ge_total_old-ge_succ, g_success_rate, ge_fail)
                env_stats.append([grasp_dir_robot_env, ge_succ, ge_total, float(ge_succ)/(ge_total+0.001)])
        return grasp_dirs

    def filter_names(self, name, name_filters):
        contains_filter = False
        for f in name_filters:
            if f in name:
                contains_filter = True
                break
        return contains_filter

    def filter_list(self, name_list, name_filters):
        new_list = []
        for l in name_list:
            for f in name_filters:
                if f in l:
                    new_list.append(l)
                    break
        return new_list

    def process_datapoint(self, datapoint, display=False, display_msec=500):
        img_path = os.path.join(datapoint[0], IMAGE_FILE)
        if os.path.isfile(img_path):
            datapoint = copy(datapoint)
            datapoint[1]['image'] = cv2.imread(img_path)
            datapoint[1]['data_path'] = datapoint[0]
            datapoint = datapoint[1]
        else:
            return [None]
        image = datapoint['image']
        w, h = datapoint['grasp_x_img'], datapoint['grasp_y_img']
        t = datapoint['grasp_theta']
        s = datapoint['grasp_result']
        if display:
            I = self.draw_rectangle(copy(image), h, w, t, PATCH_SIZE)
            cv2.imshow('image', I)
            cv2.waitKey(display_msec)
        grasp_datapoints = self.data_augment(image, s, h, w, t, PATCH_SIZE, 18)
        return grasp_datapoints

    def data_augment(self, image, suc, h, w, t, ps, na):
        ## Generates na augmented patches: the image is rotated by -rot_ang and
        ## rot_ang is added to the grasp angle, so each patch keeps a consistent label
        aug_pts = []
        for ang in range(na):
            rot_ang = np.pi*ang/na
            new_t = t+rot_ang
            new_image = self.rotate_image_and_extract_patch(image, -rot_ang, [h, w], ps)
            image_name = self.random_string(10)+'.jpg'
            aug_pts.append({'image':new_image, 'theta':new_t, 'success':suc, 'image_name':image_name, 'h':h, 'w':w})
        return aug_pts

    def random_string(self, N):
        return ''.join(random.choice(string.ascii_uppercase) for _ in range(N))

    def rotate_image_and_extract_patch(self, img, angle, center, size):
        angle = angle*180/np.pi
        padX = [img.shape[1] - center[1], center[1]]
        padY = [img.shape[0] - center[0], center[0]]
        imgP = np.pad(img, [padY, padX, [0, 0]], 'constant')
        ## scipy.misc.imrotate was removed in SciPy 1.2; on newer SciPy use the
        ## ndimage.rotate line below instead (requires 'from scipy import ndimage')
        imgR = scipy.misc.imrotate(imgP, angle)
        #imgR = ndimage.rotate(imgP, angle, reshape=False, order=1)
        half_size = int(size/2)
        return imgR[padY[0] + center[0] - half_size: padY[0] + center[0] + half_size, padX[0] + center[1] - half_size: padX[0] + center[1] + half_size, :]
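
    ## Note on rotate_image_and_extract_patch above: the asymmetric padding
    ## re-centres the grasp pixel at the middle of the padded image, so rotating
    ## about the image centre keeps the grasp point fixed, and the final slice
    ## crops a size x size patch around it.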
    def draw_rectangle(self, I, h, w, t, gsize=100):
        I_temp = I
        grasp_l = gsize/2.5
        grasp_w = gsize/5.0
        grasp_angle = t
        points = np.array([[-grasp_l, -grasp_w],
                           [grasp_l, -grasp_w],
                           [grasp_l, grasp_w],
                           [-grasp_l, grasp_w]])
        R = np.array([[np.cos(grasp_angle), -np.sin(grasp_angle)],
                      [np.sin(grasp_angle), np.cos(grasp_angle)]])
        rot_points = np.dot(R, points.transpose()).transpose()
        im_points = rot_points + np.array([w, h])
        cv2.line(I_temp, tuple(im_points[0].astype(int)), tuple(im_points[1].astype(int)), color=(0,255,0), thickness=5)
        cv2.line(I_temp, tuple(im_points[1].astype(int)), tuple(im_points[2].astype(int)), color=(0,0,255), thickness=5)
        cv2.line(I_temp, tuple(im_points[2].astype(int)), tuple(im_points[3].astype(int)), color=(0,255,0), thickness=5)
        cv2.line(I_temp, tuple(im_points[3].astype(int)), tuple(im_points[0].astype(int)), color=(0,0,255), thickness=5)
        return I_temp

    def check_keys(self, e_keys, g_keys):
        all_present = True
        for key in e_keys:
            if key not in g_keys:
                all_present = False
                break
        return all_present

    def load_dataset(self, grasp_dirs):
        dataset = []
        for gd in grasp_dirs:
            img_path = os.path.join(gd[0], IMAGE_FILE)
            depth_path = os.path.join(gd[0], DEPTH_FILE)
            if os.path.isfile(img_path) and os.path.isfile(depth_path):
                datapoint = copy(gd)
                datapoint[1]['image'] = cv2.imread(img_path)
                datapoint[1]['depth'] = cv2.imread(depth_path)
                datapoint[1]['data_path'] = datapoint[0]
                dataset.append(datapoint[1])
        return dataset

    def mkdir(self, directories):
        if not hasattr(directories, '__iter__'):
            directories = [directories]
        for directory in directories:
            if os.path.exists(directory):
                ## Move an existing directory aside and delete it so we start fresh
                tmp = tempfile.mktemp(dir=os.path.dirname(directory))
                shutil.move(directory, tmp)
                shutil.rmtree(tmp)
            os.makedirs(directory)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--home_dataset_path', type=str, default=DATA_ROOT_DIR, help='Path to the home grasping data')
    parser.add_argument('--patch_dataset_path', type=str, default='.tmp', help='Path to write the patch dataset to')
    parser.add_argument('--train_fraction', type=float, default=0.8, help='fraction of data used for training')
    parser.add_argument('--display', type=int, default=1, help='1 to display each grasp while extracting; 0 otherwise')
    parser.add_argument('--msec', type=int, default=500, help='milliseconds between displays')

    ## Parse arguments
    args = parser.parse_args()
    print('\n\n#######################\n## Loading Home Data ##\n#######################\n')
    D = DataHandler(args.home_dataset_path)
    D.random_val_split(D.train_dataset, args.train_fraction)
    print('\n\n###########################\n## Writing Patch Dataset ##\n###########################\n')
    D.write_train_val(args.patch_dataset_path, display=bool(args.display), display_msec=args.msec)

if __name__ == '__main__':
    main()
--------------------------------------------------------------------------------
/scripts/obfuscate_names_copy.py:
--------------------------------------------------------------------------------
import os
import shutil
from IPython import embed
import tempfile

HOME_DIR = os.environ['HOME']
CUR_DATA_DIR = os.path.join(HOME_DIR, 'Dropbox/Apps/LowCostArmDataUploader/grasping_data')
NEW_DATA_DIR = os.path.join(HOME_DIR, 'home_dataset/grasping_data')
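
## What this script does: copies CUR_DATA_DIR into NEW_DATA_DIR while dropping
## the second underscore-separated token from each environment directory name,
## e.g. (hypothetical name) '1/env_smith_003' is copied to '1/env_003'.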

def copyDirectory(src, dest):
    try:
        shutil.copytree(src, dest)
        print(dest)
    except shutil.Error as e:
        # Directories are the same
        print('Directory not copied. Error: %s' % e)
    except OSError as e:
        # Any error saying that the directory doesn't exist
        print('Directory not copied. Error: %s' % e)

def mkdir(directories):
    if not hasattr(directories, '__iter__'):
        directories = [directories]
    for directory in directories:
        if os.path.exists(directory):
            tmp = tempfile.mktemp(dir=os.path.dirname(directory))
            shutil.move(directory, tmp)
            shutil.rmtree(tmp)
        os.makedirs(directory)


mkdir(NEW_DATA_DIR)
robot_ids = os.listdir(CUR_DATA_DIR)
for robot_id in robot_ids:
    grasp_dir_robot = os.path.join(CUR_DATA_DIR, robot_id)
    new_grasp_dir_robot = os.path.join(NEW_DATA_DIR, robot_id)
    mkdir(new_grasp_dir_robot)
    grasp_envs = os.listdir(grasp_dir_robot)
    for grasp_env in grasp_envs:
        grasp_dir_robot_env = os.path.join(grasp_dir_robot, grasp_env)
        key_words = grasp_env.split('_')
        if len(key_words) < 2:
            continue
        del key_words[1]
        new_grasp_env = '_'.join(key_words)
        new_grasp_dir_robot_env = os.path.join(new_grasp_dir_robot, new_grasp_env)
        copyDirectory(grasp_dir_robot_env, new_grasp_dir_robot_env)
# embed()
--------------------------------------------------------------------------------