├── .gitignore
├── README.md
├── blender
│   ├── cycles_renderer.blend
│   └── eevee_renderer.blend
├── docs_img
│   ├── cycles.png
│   └── eevee.png
├── render_dataset.py
└── render_shapenet.py

/.gitignore:
--------------------------------------------------------------------------------
*.png
shapenet_rendered/
!docs_img/*
*.json
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Render ShapeNet Dataset
This script helps you render ShapeNet and custom datasets that follow ShapeNet conventions.

## Prerequisites
- Python 3.7 or higher
- Blender 2.9 or higher
- macOS, Linux, or Windows running WSL2

## Setup
- Download the ShapeNet V1 or V2 dataset following the [official link](https://shapenet.org/)
- Make a new folder called `shapenet` in this directory and unzip the downloaded file: `mkdir shapenet && unzip SHAPENET_SYNSET_ID.zip -d shapenet`
- Download Blender following the [official link](https://www.blender.org/)

## Installing Required Libraries
You will need the following libraries on Linux:
```
apt-get install -y libxi6 libgconf-2-4 libfontconfig1 libxrender1
```

Blender ships with its own distribution of Python, to which you will need to add some libraries:
```bash
cd BLENDER_PATH/2.90/python/bin
./python3.7m -m ensurepip
./python3.7m -m pip install numpy
```

## Example
```py
from render_dataset import BlenderDataset
import cv2
import numpy as np

ds = BlenderDataset("shapenet/cars/dataset_list.json", resolution=2048, shapenet_version="1")
img, condinfo, mask = ds[28]
img = img.transpose(1, 2, 0).astype(np.uint8)
cv2.imwrite("img.png", img)
```

## Data
The rendering script looks for datasets in the dataset_list.json file. You can modify this file to add your own paths, or point to your own JSON dataset list using the `--dataset_list` flag when invoking `render_all.py` (an example of the expected format is shown under "Override Arguments" below).

- For **ShapeNetCore.v1**, you don't need to do any preprocessing. If you are using your own dataset, make sure that your models are sorted into directories each containing a "model.obj", following the expected conventions of ShapeNetCore.v1.

- For **ShapeNetCore.v2**, make sure to pass the `--shapenet_version 2` flag to the `render_all.py` script -- this will destructively normalize your dataset folder to match the expected structure of ShapeNetCore.v1, while retaining the original .obj and .mtl file names.

## Rendering

### Quick Start
Once you've modified dataset_list.json (or placed the ShapeNet data at the paths it already lists), you can render your data like this:
```bash
python render_all.py
```
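The renders and camera angles are written per synset and per model under the save folder (`shapenet_rendered` by default). A sketch of the resulting layout, based on how `render_shapenet.py` constructs its output paths -- the model ID here is illustrative:
```
shapenet_rendered/
├── img/
│   └── 02958343/
│       └── <model_id>/
│           ├── 000.png
│           ├── 001.png
│           └── ...
└── camera/
    └── 02958343/
        └── <model_id>/
            ├── rotation.npy
            └── elevation.npy
```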
**Note:** The code saves Blender's output to `tmp.out`; this file is not needed for training and can be removed with `rm tmp.out`.

## Additional Flags
You can customize the rendering script by adding flags.

### Switch to Eevee for dramatically faster rendering
By default, the Blender renderer uses Cycles, which has a photorealistic look but is slow (>10 s/frame). You can also use Eevee, which looks more game-like but renders much faster (<0.3 s/frame), and may be suitable for extracting a high-quality dataset on lower-end machines in a reasonable amount of time.
```
python render_all.py --engine EEVEE
```

### Render ShapeNet V2
For ShapeNetCore.v2 you will need to pass a flag to the render script to pre-process your data:
```bash
python render_all.py --shapenet_version 2
```

### Log to Console
By default, the script logs to a tmp.out file (quiet mode), but you can override this:
```
python render_all.py --quiet_mode 0
```

### Set Number of Views to Capture
By default, the rendering script captures 24 views per object. However, many NeRF pipelines recommend closer to 100 images. Especially if you are working with a limited but high-quality dataset, you should consider increasing the total number of views 2-4x:
```
python render_all.py --num_views 96
```

### Override Arguments
By default, the rendering script will save outputs to "shapenet_rendered", read all datasets from dataset_list.json, and use the default Blender installation on your system. However, you can override these arguments:
```bash
python render_all.py --save_folder PATH_TO_SAVE_IMAGE --dataset_list PATH_TO_DATASET_JSON --blender_root PATH_TO_BLENDER
```
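The dataset list is a JSON array; each entry names a dataset directory and the scale factor passed to Blender (these are the `directory` and `scale` keys read by `render_dataset.py`). A minimal example -- the paths and scale values here are illustrative:
```json
[
    {"directory": "shapenet/02958343", "scale": 0.9},
    {"directory": "shapenet/03001627", "scale": 0.7}
]
```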
## Modifying the Render Scene
You can open the base scenes (located in the blender directory) and modify the lighting. There are no objects in the scene, so you will need to import a test object. Just be careful to remove any scene objects before you save.

## Comparison Between Cycles and Eevee
Cycles is on the **Left**, Eevee is on the **Right**

<img src="docs_img/cycles.png" alt="drawing" width="400"/> <img src="docs_img/eevee.png" alt="drawing" width="400"/>

## Rendering Eevee Headlessly
```
apt-get install python-opengl -y
apt install xvfb -y
pip install pyvirtualdisplay
pip install pyglet
python3 render_parallel.py --num_views 96 --engine EEVEE --headless
```

## Attribution

- This code is adapted from this [GitHub repo](https://github.com/panmari/stanford-shapenet-renderer); we thank the author for sharing the code!

- The tome in the rendering comparison images was borrowed with permission from the [Loot Assets](https://github.com/webaverse/loot-assets) library.

- [Original](https://github.com/lalalune/ImprovedShapenetRenderer) by [lalalune](https://github.com/lalalune).
--------------------------------------------------------------------------------
/blender/cycles_renderer.blend:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/neverix/BlenderDataset/76ba55e1b3ead9157a8776d8af0937e3f1f4c2e9/blender/cycles_renderer.blend
--------------------------------------------------------------------------------
/blender/eevee_renderer.blend:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/neverix/BlenderDataset/76ba55e1b3ead9157a8776d8af0937e3f1f4c2e9/blender/eevee_renderer.blend
--------------------------------------------------------------------------------
/docs_img/cycles.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/neverix/BlenderDataset/76ba55e1b3ead9157a8776d8af0937e3f1f4c2e9/docs_img/cycles.png
--------------------------------------------------------------------------------
/docs_img/eevee.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/neverix/BlenderDataset/76ba55e1b3ead9157a8776d8af0937e3f1f4c2e9/docs_img/eevee.png
--------------------------------------------------------------------------------
/render_dataset.py:
--------------------------------------------------------------------------------
# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION & AFFILIATES and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION & AFFILIATES is strictly prohibited.

import os
import json
import subprocess
import tempfile

import cv2
import numpy as np
import torch

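# BlenderDataset renders views on demand: each __getitem__ call launches
# Blender in a subprocess (running render_shapenet.py), parses the
# "Saved: '<path>'" line that Blender prints to locate the rendered image,
# and then loads that image, its alpha mask, and the saved camera angles.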
class BlenderDataset(torch.utils.data.Dataset):
    def __init__(self,
                 dataset_list="./dataset_list.json",
                 resolution=None,  # Ensure specific resolution, None = highest available.
                 blender_root="blender",
                 shapenet_version="3",
                 engine="EEVEE",
                 quiet_mode=False,
                 headless=False,
                 camera_root=None,
                 data_camera_mode="shapenet_car",
                 save_folder=None,
                 model_name="model.obj"
                 ):
        super().__init__()
        self.engine = engine
        self.headless = headless
        self.quiet_mode = quiet_mode
        self.save_folder = save_folder
        self.model_name = model_name
        self.dataset_list = dataset_list
        self.blender_root = blender_root
        self.shapenet_version = shapenet_version
        self.num_views = 1  # TODO?
        self.camera_root = camera_root
        self.data_camera_mode = data_camera_mode

        if shapenet_version == "3":  # no model directory, I made this up
            model_name = None

        if self.save_folder is None:
            self.temp = tempfile.TemporaryDirectory()
            self.save_folder = self.temp.name

        if self.camera_root is None:
            self.camera_root = os.path.join(self.save_folder, "camera")

        if self.headless and self.engine == "EEVEE":
            # Eevee needs an OpenGL context even in background mode, so start
            # a virtual display when running headlessly.
            from pyvirtualdisplay import Display
            Display().start()

        # Check that dataset_list exists; raise an error if not.
        if not os.path.exists(dataset_list):
            raise ValueError("dataset_list does not exist!")

        scale_list = []
        path_list = []

        # Read and parse the JSON file at dataset_list.
        with open(dataset_list, "r") as f:
            dataset = json.load(f)

        for entry in dataset:
            scale_list.append(entry["scale"])
            path_list.append(entry["directory"])

        # For ShapeNet v2, normalize the model location to match the v1 layout.
        if shapenet_version == '2':
            for obj_scale, dataset_folder in zip(scale_list, path_list):
                file_list = sorted(os.listdir(dataset_folder))
                for file in file_list:
                    # If a 'models' subdirectory exists, move its contents up
                    # one level and remove the now-empty directory.
                    if os.path.exists(os.path.join(dataset_folder, file, 'models')):
                        os.system('mv ' + os.path.join(dataset_folder, file, 'models/*') + ' ' + os.path.join(dataset_folder, file))
                        os.system('rm -rf ' + os.path.join(dataset_folder, file, 'models'))
                        material_file = os.path.join(dataset_folder, file, 'model_normalized.mtl')
                        # Rewrite texture paths in the material file to match the new layout.
                        with open(material_file, 'r') as f:
                            material_file_text = f.read()
                        material_file_text = material_file_text.replace('../images', './images')
                        with open(material_file, 'w') as f:
                            f.write(material_file_text)

        self.suffix = ""
        if self.quiet_mode:
            self.suffix = " >> tmp.out"

        self.files = []
        self.scales = []
        for obj_scale, dataset_folder in zip(scale_list, path_list):
            # Append model_name ("model.obj") to each model directory unless it
            # was cleared above (version "3", where each entry is the .obj path itself).
            file_list = [
                os.path.join(dataset_folder, name, *((model_name,) * int(model_name is not None)))
                for name in sorted(os.listdir(dataset_folder))]
            self.files += file_list
            self.scales += [obj_scale for _ in file_list]

        self.img_size = resolution

    def __len__(self):
        return len(self.files)
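    # __getitem__ renders a single object and returns (img, condinfo, mask):
    # img is a CHW array with the background zeroed out, condinfo holds the
    # camera (azimuth, polar) angles in radians, and mask is the render's
    # alpha channel. It returns None if Blender did not report a saved image.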
    def __getitem__(self, idx):
        path, scale = self.files[idx], self.scales[idx]
        render_cmd = "%s -b -P render_shapenet.py -- --output_folder %s %s --scale %f --views %s --engine %s%s" % (
            self.blender_root, self.save_folder, path, scale, self.num_views, self.engine, self.suffix
        )
        out = subprocess.check_output(render_cmd, shell=True).decode("utf-8")
        # Note: with quiet_mode enabled, Blender's output is redirected to
        # tmp.out, so no "Saved: '<path>'" line reaches us here.
        if "Saved: '" not in out:
            return None
        # Blender prints "Saved: '<path>'" for each rendered frame; recover the
        # path of the last saved image.
        fname = out.rpartition("Saved: '")[-1].partition("'\n")[0]

        if self.data_camera_mode in ('shapenet_car', 'shapenet_chair', 'renderpeople',
                                     'shapenet_motorbike', 'ts_house', 'ts_animal'):
            ori_img = cv2.imread(fname, cv2.IMREAD_UNCHANGED)
            img = ori_img[:, :, :3][..., ::-1]  # BGR -> RGB
            mask = ori_img[:, :, 3:4]
            condinfo = np.zeros(2)
            fname_list = fname.split('/')
            img_idx = int(fname_list[-1].split('.')[0])
            obj_idx = fname_list[-2]
            syn_idx = fname_list[-3]

            if not os.path.exists(os.path.join(self.camera_root, syn_idx, obj_idx, 'rotation.npy')):
                print('==> camera files not found under camera root')
            else:
                rotation_camera = np.load(os.path.join(self.camera_root, syn_idx, obj_idx, 'rotation.npy'))
                elevation_camera = np.load(os.path.join(self.camera_root, syn_idx, obj_idx, 'elevation.npy'))
                condinfo[0] = rotation_camera[img_idx] / 180 * np.pi
                condinfo[1] = (90 - elevation_camera[img_idx]) / 180.0 * np.pi
        else:
            raise NotImplementedError

        if self.img_size is not None:
            resize_img = cv2.resize(img, (self.img_size, self.img_size), interpolation=cv2.INTER_LINEAR)
        else:
            resize_img = img
        if mask is not None:
            mask = cv2.resize(mask, resize_img.shape[:2], interpolation=cv2.INTER_NEAREST)
        else:
            mask = np.ones(1)
        img = resize_img.transpose(2, 0, 1)
        background = np.zeros_like(img)
        img = img * (mask > 0).astype(float) + background * (1 - (mask > 0).astype(float))
        return np.ascontiguousarray(img), condinfo, np.ascontiguousarray(mask)


if __name__ == "__main__":  # Test
    ds = BlenderDataset()
    img, condinfo, mask = ds[28]
    img = img.transpose(1, 2, 0).astype(np.uint8)
    cv2.imwrite("img.png", img)
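# Usage sketch: the dataset can be wrapped in a standard DataLoader, assuming
# every item renders successfully and shares one resolution (None items and
# ragged sizes would break the default collate function):
#     ds = BlenderDataset("dataset_list.json", resolution=512)
#     loader = torch.utils.data.DataLoader(ds, batch_size=4)
#     img, condinfo, mask = next(iter(loader))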
--------------------------------------------------------------------------------
/render_shapenet.py:
--------------------------------------------------------------------------------
# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION & AFFILIATES and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION & AFFILIATES is strictly prohibited.

import argparse, sys, os, math, collections
import bpy
from mathutils import Vector
import numpy as np

parser = argparse.ArgumentParser(description='Renders a given obj file by rotating a camera around it.')
parser.add_argument(
    '--views', type=int, default=24,
    help='Number of views to be rendered.')
parser.add_argument(
    'obj', type=str,
    help='Path to the obj file to be rendered.')
parser.add_argument(
    '--output_folder', type=str, default='/tmp',
    help='The path the output will be dumped to.')
parser.add_argument(
    '--scale', type=float, default=1,
    help='Scaling factor applied to model. Depends on size of mesh.')
parser.add_argument(
    '--format', type=str, default='PNG',
    help='Format of files generated. Either PNG or OPEN_EXR.')
parser.add_argument(
    '--engine', type=str, default='CYCLES',
    help='Blender render engine to use, either CYCLES or EEVEE.')
parser.add_argument(
    '--gpu', type=int, default=0,
    help='Index of the GPU device to use.')

# Blender passes everything after "--" through to the script.
argv = sys.argv[sys.argv.index("--") + 1:]
args = parser.parse_args(argv)

# Load the base scene (lights and camera, no objects) for the chosen engine.
if args.engine == 'CYCLES':
    bpy.ops.wm.open_mainfile(filepath=os.path.abspath("./blender/cycles_renderer.blend"))
else:
    bpy.ops.wm.open_mainfile(filepath=os.path.abspath("./blender/eevee_renderer.blend"))

# Set up rendering
context = bpy.context
scene = bpy.context.scene
render = bpy.context.scene.render


def enable_cuda_devices():
    prefs = bpy.context.preferences
    cprefs = prefs.addons['cycles'].preferences
    cprefs.get_devices()

    # Attempt to set GPU device types if available.
    for compute_device_type in ('CUDA', 'OPENCL', 'NONE'):
        try:
            cprefs.compute_device_type = compute_device_type
            print("Compute device selected: {0}".format(compute_device_type))
            break
        except TypeError:
            pass

    # Any CUDA/OPENCL devices?
    accelerated_types = ['CUDA', 'OPENCL']
    accelerated = any(device.type in accelerated_types for device in cprefs.devices)
    print('Accelerated render = {0}'.format(accelerated))

    # If we have CUDA/OPENCL devices, enable only the one selected with --gpu;
    # otherwise enable all devices (assumed to be CPU).
    print(cprefs.devices)
    for idx, device in enumerate(cprefs.devices):
        device.use = (not accelerated or device.type in accelerated_types) and idx == args.gpu
        print('Device enabled ({type}) = {enabled}'.format(type=device.type, enabled=device.use))

    return accelerated


enable_cuda_devices()
def bounds(obj, local=False):
    """Return per-axis min/max/distance of an object's bounding box."""
    local_coords = obj.bound_box[:]
    om = obj.matrix_world

    if not local:
        # Transform the bounding-box corners into world space.
        worldify = lambda p: om @ Vector(p[:])
        coords = [worldify(p).to_tuple() for p in local_coords]
    else:
        coords = [p[:] for p in local_coords]

    rotated = zip(*coords[::-1])

    push_axis = []
    for (axis, _list) in zip('xyz', rotated):
        info = lambda: None  # cheap mutable attribute container
        info.max = max(_list)
        info.min = min(_list)
        info.distance = info.max - info.min
        push_axis.append(info)

    originals = dict(zip(['x', 'y', 'z'], push_axis))

    o_details = collections.namedtuple('object_details', 'x y z')
    return o_details(**originals)


imported_object = bpy.ops.import_scene.obj(filepath=args.obj, use_edges=False, use_smooth_groups=False, split_mode='OFF')

for this_obj in bpy.data.objects:
    if this_obj.type == "MESH":
        this_obj.select_set(True)
        bpy.context.view_layer.objects.active = this_obj
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.split_normals()

bpy.ops.object.mode_set(mode='OBJECT')
print(len(bpy.context.selected_objects))
obj = bpy.context.selected_objects[0]
context.view_layer.objects.active = obj

# Normalize the mesh so that its largest dimension equals the requested scale.
mesh_obj = obj
scale = args.scale
factor = max(mesh_obj.dimensions[0], mesh_obj.dimensions[1], mesh_obj.dimensions[2]) / scale
print('size of object:')
print(mesh_obj.dimensions)
print(factor)
object_details = bounds(mesh_obj)
print(
    object_details.x.min, object_details.x.max,
    object_details.y.min, object_details.y.max,
    object_details.z.min, object_details.z.max,
)
print(bounds(mesh_obj))
mesh_obj.scale[0] /= factor
mesh_obj.scale[1] /= factor
mesh_obj.scale[2] /= factor
bpy.ops.object.transform_apply(scale=True)

# Get reference to camera and empty (rotation pivot)
cam = scene.objects['Camera']
cam_empty = scene.objects['Empty']
cam_empty.rotation_mode = 'XYZ'

model_identifier = os.path.split(os.path.split(args.obj)[0])[1]
print('model identifier: ' + model_identifier)
synset_idx = args.obj.split('/')[-3]
print('synset idx: ' + synset_idx)

img_folder = os.path.join(os.path.abspath(args.output_folder), 'img', synset_idx, model_identifier)
camera_folder = os.path.join(os.path.abspath(args.output_folder), 'camera', synset_idx, model_identifier)

os.makedirs(img_folder, exist_ok=True)
os.makedirs(camera_folder, exist_ok=True)

# Sample random camera angles: azimuth uniform in [0, 360) degrees,
# elevation uniform in [0, 30] degrees.
rotation_angle_list = np.random.rand(args.views) * 360
elevation_angle_list = np.random.rand(args.views) * 30
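# render_dataset.py later reloads these saved arrays and converts them to
# radians (azimuth directly, elevation as the polar angle 90 - elevation)
# when building its conditioning info.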
np.save(os.path.join(camera_folder, 'rotation'), rotation_angle_list)
np.save(os.path.join(camera_folder, 'elevation'), elevation_angle_list)

for i in range(0, args.views):
    # Rotate the empty (the camera's pivot) to the sampled azimuth/elevation.
    cam_empty.rotation_euler[2] = math.radians(rotation_angle_list[i])
    cam_empty.rotation_euler[0] = math.radians(elevation_angle_list[i])

    print("Rotation {}, elevation {}".format(rotation_angle_list[i], elevation_angle_list[i]))
    render_file_path = os.path.join(img_folder, '%03d.png' % i)
    scene.render.filepath = render_file_path
    bpy.ops.render.render(write_still=True)
--------------------------------------------------------------------------------