├── .gitignore
├── CITATION.cff
├── LICENSE
├── README.md
├── __init__.py
├── blender_manifest.toml
├── blender_nerf_operator.py
├── blender_nerf_ui.py
├── cos_operator.py
├── cos_ui.py
├── helper.py
├── sof_operator.py
├── sof_ui.py
├── ttc_operator.py
└── ttc_ui.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | __pycache__
--------------------------------------------------------------------------------
/CITATION.cff:
--------------------------------------------------------------------------------
1 | cff-version: 1.2.0
2 | message: "If you use this software in your work, please consider citing it as below."
3 | authors:
4 | - family-names: Raafat
5 | given-names: Maxime
6 | orcid: https://orcid.org/0000-0000-0000-0000
7 | title: BlenderNeRF
8 | version: 6.0.0
9 | doi: 10.5281/zenodo.13250725
10 | date-released: 2024-08-06
11 | url: "https://github.com/maximeraafat/BlenderNeRF"
12 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022 Maxime Raafat
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # BlenderNeRF
2 |
3 | Whether you are a VFX artist, a research fellow or a graphics amateur, **BlenderNeRF** is the easiest and fastest way to create synthetic NeRF and Gaussian Splatting datasets within Blender. Obtain renders and camera parameters with a single click, while retaining full control over the 3D scene and camera!
4 |
5 |
6 |
7 |
8 | Are you ready to NeRF? Start with a single click in Blender by checking out this tutorial!
9 |
10 |
11 |
12 | ## Neural Radiance Fields
13 |
14 | **Neural Radiance Fields ([NeRF](https://www.matthewtancik.com/nerf))** aim at representing a 3D scene as a view dependent volumetric object from 2D images only, alongside their respective camera information. The 3D scene is reverse engineered from the training images with the help of a simple neural network.
15 |
16 | [**Gaussian Splatting**](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/) is a follow-up method for rendering radiance fields in a point-based manner. This representation is highly optimised for GPU rendering and leverages more traditional graphics techniques to achieve high frame rates.
17 |
18 | I recommend watching [this YouTube video](https://www.youtube.com/watch?v=YX5AoaWrowY) by **Corridor Crew** for a thrilling investigation on a few use cases and future potential applications of NeRFs.
19 |
20 |
21 | ## Motivation
22 |
23 | Rendering is an expensive computation. Photorealistic scenes can take seconds to hours to render depending on the scene complexity, hardware and available software resources.
24 |
25 | NeRFs and Gaussian splats can speed up this process, but require camera information typically extracted via cumbersome code. This plugin enables anyone to get renders and cameras with a single click in Blender.
26 |
27 |
28 |
29 |
30 |
31 |
32 | ## Installation
33 |
34 | 1. Download this repository as a **ZIP** file
35 | 2. Open Blender (4.0.0 or above)
36 | 3. In Blender, head to **Edit > Preferences > Add-ons**, and select **Install From Disk** under the dropdown icon
37 | 4. Select the downloaded **ZIP** file
38 |
39 | Although release versions of **BlenderNeRF** are available for download, they are primarily intended for tracking major code changes and for citation purposes. I recommend downloading the current repository directly, since minor changes or bug fixes might not be included in a release right away.
40 |
41 |
42 | ## Setting
43 |
44 | **BlenderNeRF** consists of 3 methods, discussed in the sub-sections below. Each method can create **training** and **testing** data for NeRF, in the form of training images and a `transforms_train.json` (respectively `transforms_test.json`) file with the corresponding camera information. The data is archived into a single **ZIP** file containing the training and testing folders. The training data can then be used by a NeRF model to learn the 3D scene representation. Once trained, the model may be evaluated (or tested) on the testing data (camera information only) to obtain novel renders.
45 |
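Each `transforms_*.json` file pairs global camera intrinsics with one entry per registered frame. As a minimal sketch (the `dataset/` path is hypothetical, and `numpy` is only used for convenience), an unzipped dataset can be inspected as follows:

```python
import json
import numpy as np

# load the training transforms of an unzipped dataset (hypothetical path)
with open('dataset/transforms_train.json') as file:
    transforms = json.load(file)

# shared intrinsics : horizontal field of view in radians
# (the NGP format additionally stores fl_x, fl_y, cx, cy, w, h and aabb_scale)
print(transforms['camera_angle_x'])

# one entry per training frame : image path and 4x4 camera-to-world matrix
for frame in transforms['frames']:
    matrix = np.array(frame['transform_matrix'])
    print(frame['file_path'], matrix.shape)  # e.g. train/0001.png (4, 4)
```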
46 | ### Subset of Frames
47 |
48 | **Subset of Frames (SOF)** renders every **N** frames from a camera animation, and utilises the rendered subset of frames as NeRF training data. The registered testing data spans all frames of the same camera animation, including the training frames. Once trained, the NeRF model can render the full camera animation, and is consequently well suited for interpolating or rendering large animations of static scenes.
49 |
50 |
51 |
52 |
53 |
54 | ### Train and Test Cameras
55 |
56 | **Train and Test Cameras (TTC)** registers training and testing data from two separate user defined cameras. A NeRF model can then be fitted with the data extracted from the training camera, and be evaluated on the testing data.
57 |
58 |
59 |
60 |
61 |
62 | ### Camera on Sphere
63 |
64 | **Camera on Sphere (COS)** renders training frames by uniformly sampling random camera views on a user controlled sphere, with each view directed at the sphere center. Testing data is extracted from a selected camera.
65 |
66 |
67 |
68 |
69 |
70 |
71 | ## How to use the Methods
72 |
73 | The add-on properties panel is available under `3D View > N panel > BlenderNeRF` (the **N panel** opens in the 3D viewport when pressing `N`). All 3 methods (**SOF**, **TTC** and **COS**) share a common tab called `BlenderNeRF shared UI` with the controllable properties listed below.
74 |
75 | * `Train` (activated by default) : whether to register training data (renderings + camera information)
76 | * `Test` (activated by default) : whether to register testing data (camera information only)
77 | * `AABB` (by default set to **4**) : aabb scale parameter as described in Instant NGP (more details below)
78 | * `Render Frames` (activated by default) : whether to render the frames
79 | * `Save Log File` (deactivated by default) : whether to save a log file containing reproducibility information on the **BlenderNeRF** run
80 | * `File Format` (**NGP** by default) : whether to export the camera files in the Instant NGP or default NeRF file format convention
81 | * `Gaussian Points` (deactivated by default) : whether to export a `points3d.ply` file for Gaussian Splatting
82 | * `Gaussian Test Camera Poses` (**Dummy** by default) : whether to export a dummy test camera file or the full set of test camera poses (only with `Gaussian Points`)
83 | * `Save Path` (empty by default) : path to the output directory in which the dataset will be created
84 |
85 | If the `Gaussian Points` property is active, **BlenderNeRF** will create an additional `points3d.ply` file from all meshes visible at render time, where each vertex is used as an initialization point. Vertex colors are stored if available, and set to black otherwise.
86 |
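To sanity check the exported initialization points, the file can be parsed with any PLY reader. Below is a minimal sketch assuming the third-party `plyfile` package (not part of the add-on):

```python
from plyfile import PlyData  # pip install plyfile

# parse the exported initialization points (one vertex per point)
ply = PlyData.read('dataset/points3d.ply')  # hypothetical path to an unzipped dataset
vertices = ply['vertex'].data  # numpy structured array

print(len(vertices), 'initialization points')
print(vertices[0])  # attributes of the first point (position, normal, color when available)
```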
87 | The [**Gaussian Splatting**](https://github.com/graphdeco-inria/gaussian-splatting) repository natively supports **NeRF** datasets, but requires both train and test data. The `Dummy` option for the `Gaussian Test Camera Poses` property creates an empty test camera pose file (a `transforms_test.json` with an empty `frames` list), in case no test images are needed. The `Full` option exports the default test camera poses, but requires separately rendering a `test` folder containing all the test renders.
88 |
89 | `AABB` is restricted to be an integer power of 2; it defines the side length of the bounding box volume in which NeRF will trace rays. The property was introduced with **NVIDIA's [Instant NGP](https://github.com/NVlabs/instant-ngp)** version of NeRF.
90 |
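The add-on enforces this constraint with a logarithm-based check, essentially the `is_power_of_two` helper in `blender_nerf_operator.py` (sketched below with an added guard against non positive values):

```python
import math

def is_power_of_two(x):
    # valid AABB values are 1, 2, 4, 8, ... up to the soft maximum of 128 in the UI
    return x > 0 and math.log2(x).is_integer()

assert is_power_of_two(4)      # default AABB value
assert not is_power_of_two(6)  # rejected by the add-on with an error message
```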
91 | The `File Format` property can either be **NGP** or **NeRF**. The **NGP** file format convention is the same as the **NeRF** one, with a few additional parameters accessible to Instant NGP (per-axis focal lengths, distortion coefficients, resolution and the `aabb_scale`).
92 |
93 | Notice that each method has its distinctive `Name` property (by default set to `dataset`) corresponding to the dataset name and created **ZIP** filename for the respective method. Please note that unsupported characters, such as spaces, `#` or `/`, will automatically be replaced by an underscore.
94 |
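Internally, the name is sanitized with Blender's `bpy.path.clean_name` utility, which can be tried from Blender's Python console (the example name is arbitrary):

```python
import bpy

# characters unsupported in file names are each replaced by an underscore
print(bpy.path.clean_name('my dataset'))  # my_dataset
```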
95 | The properties specific to each method are described below (the `Name` property is left out, since it was already discussed above).
96 |
97 | ### How to SOF
98 |
99 | * `Frame Step` (by default set to **3**) : **N** (as defined in the [Setting](#setting) section) = frequency at which the training frames are registered
100 | * `Camera` (always set to the active camera) : camera used for registering training and testing data
101 | * `PLAY SOF` : play the **Subset of Frames** method operator to export NeRF data
102 |
103 | ### How to TTC
104 |
105 | * `Frames` (by default set to **100**) : number of training frames used from the training camera
106 | * `Train Cam` (empty by default) : camera used for registering the training data
107 | * `Test Cam` (empty by default) : camera used for registering the testing data
108 | * `PLAY TTC` : play the **Train and Test Cameras** method operator to export NeRF data
109 |
110 | `Frames` training frames will be captured with the `Train Cam` object, starting from the scene start frame.
111 |
112 | ### How to COS
113 |
114 | * `Camera` (always set to the active camera) : camera used for registering the testing data
115 | * `Location` (by default set to **0 m** vector) : center position of the training sphere from which camera views are sampled
116 | * `Rotation` (by default set to **0°** vector) : rotation of the training sphere from which camera views are sampled
117 | * `Scale` (by default set to **1** vector) : scale vector of the training sphere in xyz axes
118 | * `Radius` (by default set to **4 m**) : radius scalar of the training sphere
119 | * `Lens` (by default set to **50 mm**) : focal length of the training camera
120 | * `Seed` (by default set to **0**) : seed to initialize the random camera view sampling procedure
121 | * `Frames` (by default set to **100**) : number of training frames sampled and rendered from the training sphere
122 | * `Sphere` (deactivated by default) : whether to show the training sphere from which random views will be sampled
123 | * `Camera` (deactivated by default) : whether to show the camera used for registering the training data
124 | * `Upper Views` (deactivated by default) : whether to sample views from the upper training hemisphere only (rotation variant)
125 | * `Outwards` (deactivated by default) : whether to point the camera outwards of the training sphere
126 | * `PLAY COS` : play the **Camera on Sphere** method operator to export NeRF data
127 |
128 | Note that activating the `Sphere` and `Camera` properties creates a `BlenderNeRF Sphere` empty object and a `BlenderNeRF Camera` camera object respectively. Please do not create any objects with these names manually, since this might break the add-on functionalities.
129 |
130 | `Frames` training frames will be captured with the `BlenderNeRF Camera` object, starting from the scene start frame. Finally, keep in mind that the training camera is locked in place and cannot be moved manually.
131 |
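For reference, the random views are drawn uniformly on the unit sphere before being scaled, rotated and translated into place. The sketch below mirrors the add-on's `sample_from_sphere` helper in simplified form:

```python
import math
import random

def sample_unit_sphere(rng):
    theta = rng.random() * 2 * math.pi     # uniform azimuth
    phi = math.acos(1 - 2 * rng.random())  # inclination, area-uniform over the sphere
    return (math.cos(theta) * math.sin(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(phi))

print(sample_unit_sphere(random.Random(0)))  # deterministic for a fixed seed
```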
132 |
133 | ## Tips for Optimal Results
134 |
135 | NVIDIA provides a few helpful tips on how to train a NeRF model using [Instant NGP](https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md). Feel free to visit their repository for further help. Below are some quick tips for optimal **nerfing** gained from personal experience.
136 |
137 | * NeRF trains best with 50 to 150 images
138 | * Testing views should not deviate too much from training views
139 | * Scene movement, motion blur or blurring artefacts can degrade the reconstruction quality
140 | * The captured scene should be at least one Blender unit away from the camera
141 | * Keep `AABB` as tight as possible to the scene scale; higher values will slow down training
142 | * If the reconstruction quality appears blurry, start by adjusting `AABB` while keeping it a power of 2
143 | * Avoid adjusting the camera focal length during the animation; the vanilla NeRF methods do not support multiple focal lengths
144 | * Avoid extreme focal lengths, values between 30 mm and 70 mm work well in practice
145 | * A `Vertical` camera sensor fit sometimes leads to distorted NeRF volumes; avoid it if possible
146 |
147 |
148 | ## How to NeRF
149 |
150 | If you have access to an NVIDIA GPU, you might want to install [Instant NGP](https://github.com/NVlabs/instant-ngp#installation) on your own device for an optimal user experience, by following the instructions provided on their repository. Otherwise, you can run NeRF in a COLAB notebook on Google GPUs for free with a Google account.
151 |
152 | Open this [COLAB notebook](https://colab.research.google.com/drive/1dQInHx0Eg5LZUpnhEfoHDP77bCMwAPab?usp=sharing) (also downloadable [here](https://gist.github.com/maximeraafat/122a63c81affd6d574c67d187b82b0b0)) and follow the instructions.
153 |
154 |
155 | ## Remarks
156 |
157 | This add-on has been developed as a fun side project over the course of multiple months and Blender versions, mainly on macOS. If you encounter any issues with the plugin functionalities, feel free to open a GitHub issue with a clear description of the problem, the **BlenderNeRF** version with which the issue occurred, and any further relevant information.
158 |
159 | ### Real World Data
160 |
161 | While this extension is intended for synthetic dataset creation, tools exist for importing motion tracking data from real world cameras. One such example is **[Tracky](https://github.com/Shopify/tracky)** by **Shopify**, an open source iOS app with an accompanying Blender plugin for recording motion tracking data from an ARKit session on iPhone. Keep in mind however that tracking data can be subject to drift and inaccuracies, which might affect the resulting NeRF reconstruction quality.
162 |
163 |
164 | ## Citation
165 |
166 | If you find this repository useful in your research, please consider citing **BlenderNeRF** using the dedicated GitHub button above. If you made use of this extension for your artistic projects, feel free to share some of your work using the `#blendernerf` hashtag on social media! :)
167 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
1 | import bpy
2 | from . import helper, blender_nerf_ui, sof_ui, ttc_ui, cos_ui, sof_operator, ttc_operator, cos_operator
3 |
4 |
5 | # blender info
6 | bl_info = {
7 | 'name': 'BlenderNeRF',
8 | 'description': 'Easy NeRF synthetic dataset creation within Blender',
9 | 'author': 'Maxime Raafat',
10 | 'version': (6, 0, 0),
11 | 'blender': (4, 0, 0),
12 | 'location': '3D View > N panel > BlenderNeRF',
13 | 'doc_url': 'https://github.com/maximeraafat/BlenderNeRF',
14 | 'category': 'Object',
15 | }
16 |
17 | # global addon script variables
18 | TRAIN_CAM = 'Train Cam'
19 | TEST_CAM = 'Test Cam'
20 | VERSION = '.'.join(str(x) for x in bl_info['version'])
21 |
22 | # addon blender properties
23 | PROPS = [
24 | # global controllable properties
25 | ('train_data', bpy.props.BoolProperty(name='Train', description='Construct the training data', default=True) ),
26 | ('test_data', bpy.props.BoolProperty(name='Test', description='Construct the testing data', default=True) ),
27 | ('aabb', bpy.props.IntProperty(name='AABB', description='AABB scale as defined in Instant NGP', default=4, soft_min=1, soft_max=128) ),
28 | ('render_frames', bpy.props.BoolProperty(name='Render Frames', description='Whether training frames should be rendered. If not selected, only the transforms.json files will be generated', default=True) ),
29 | ('logs', bpy.props.BoolProperty(name='Save Log File', description='Whether to create a log file containing information on the BlenderNeRF run', default=False) ),
30 | ('splats', bpy.props.BoolProperty(name='Gaussian Points', description='Whether to export a points3d.ply file for Gaussian Splatting', default=False) ),
31 | ('splats_test_dummy', bpy.props.BoolProperty(name='Dummy Test Camera', description='Whether to export a dummy test transforms.json file or the full set of test camera poses', default=True) ),
32 | ('nerf', bpy.props.BoolProperty(name='NeRF', description='Whether to export the camera transforms.json files in the default NeRF file format convention', default=False) ),
33 | ('save_path', bpy.props.StringProperty(name='Save Path', description='Path to the output directory in which the synthetic dataset will be stored', subtype='DIR_PATH') ),
34 |
35 | # global automatic properties
36 | ('init_frame_step', bpy.props.IntProperty(name='Initial Frame Step') ),
37 | ('init_output_path', bpy.props.StringProperty(name='Initial Output Path', subtype='DIR_PATH') ),
38 | ('rendering', bpy.props.BoolVectorProperty(name='Rendering', description='Whether one of the SOF, TTC or COS methods is rendering', default=(False, False, False), size=3) ),
39 | ('blendernerf_version', bpy.props.StringProperty(name='BlenderNeRF Version', default=VERSION) ),
40 |
41 | # sof properties
42 | ('sof_dataset_name', bpy.props.StringProperty(name='Name', description='Name of the SOF dataset : the data will be stored under <save path>/<name>', default='dataset') ),
43 | ('train_frame_steps', bpy.props.IntProperty(name='Frame Step', description='Frame step N for the captured training frames. Every N-th frame will be used for training NeRF', default=3, soft_min=1) ),
44 |
45 | # ttc properties
46 | ('ttc_dataset_name', bpy.props.StringProperty(name='Name', description='Name of the TTC dataset : the data will be stored under <save path>/<name>', default='dataset') ),
47 | ('ttc_nb_frames', bpy.props.IntProperty(name='Frames', description='Number of training frames from the training camera', default=100, soft_min=1) ),
48 | ('camera_train_target', bpy.props.PointerProperty(type=bpy.types.Object, name=TRAIN_CAM, description='Pointer to the training camera', poll=helper.poll_is_camera) ),
49 | ('camera_test_target', bpy.props.PointerProperty(type=bpy.types.Object, name=TEST_CAM, description='Pointer to the testing camera', poll=helper.poll_is_camera) ),
50 |
51 | # cos controllable properties
52 | ('cos_dataset_name', bpy.props.StringProperty(name='Name', description='Name of the COS dataset : the data will be stored under <save path>/<name>', default='dataset') ),
53 | ('sphere_location', bpy.props.FloatVectorProperty(name='Location', description='Center position of the training sphere', unit='LENGTH', update=helper.properties_ui_upd) ),
54 | ('sphere_rotation', bpy.props.FloatVectorProperty(name='Rotation', description='Rotation of the training sphere', unit='ROTATION', update=helper.properties_ui_upd) ),
55 | ('sphere_scale', bpy.props.FloatVectorProperty(name='Scale', description='Scale of the training sphere in xyz axes', default=(1.0, 1.0, 1.0), update=helper.properties_ui_upd) ),
56 | ('sphere_radius', bpy.props.FloatProperty(name='Radius', description='Radius scale of the training sphere', default=4.0, soft_min=0.01, unit='LENGTH', update=helper.properties_ui_upd) ),
57 | ('focal', bpy.props.FloatProperty(name='Lens', description='Focal length of the training camera', default=50, soft_min=1, soft_max=5000, unit='CAMERA', update=helper.properties_ui_upd) ),
58 | ('seed', bpy.props.IntProperty(name='Seed', description='Random seed for sampling views on the training sphere', default=0) ),
59 | ('cos_nb_frames', bpy.props.IntProperty(name='Frames', description='Number of training frames randomly sampled from the training sphere', default=100, soft_min=1) ),
60 | ('show_sphere', bpy.props.BoolProperty(name='Sphere', description='Whether to show the training sphere from which random views will be sampled', default=False, update=helper.visualize_sphere) ),
61 | ('show_camera', bpy.props.BoolProperty(name='Camera', description='Whether to show the training camera', default=False, update=helper.visualize_camera) ),
62 | ('upper_views', bpy.props.BoolProperty(name='Upper Views', description='Whether to sample views from the upper hemisphere of the training sphere only', default=False) ),
63 | ('outwards', bpy.props.BoolProperty(name='Outwards', description='Whether to point the camera outwards of the training sphere', default=False, update=helper.properties_ui_upd) ),
64 |
65 | # cos automatic properties
66 | ('sphere_exists', bpy.props.BoolProperty(name='Sphere Exists', description='Whether the sphere exists', default=False) ),
67 | ('init_sphere_exists', bpy.props.BoolProperty(name='Init sphere exists', description='Whether the sphere initially exists', default=False) ),
68 | ('camera_exists', bpy.props.BoolProperty(name='Camera Exists', description='Whether the camera exists', default=False) ),
69 | ('init_camera_exists', bpy.props.BoolProperty(name='Init camera exists', description='Whether the camera initially exists', default=False) ),
70 | ('init_active_camera', bpy.props.PointerProperty(type=bpy.types.Object, name='Init active camera', description='Pointer to initial active camera', poll=helper.poll_is_camera) ),
71 | ('init_frame_end', bpy.props.IntProperty(name='Initial Frame End') ),
72 | ]
73 |
74 | # classes to register / unregister
75 | CLASSES = [
76 | blender_nerf_ui.BlenderNeRF_UI,
77 | sof_ui.SOF_UI,
78 | ttc_ui.TTC_UI,
79 | cos_ui.COS_UI,
80 | sof_operator.SubsetOfFrames,
81 | ttc_operator.TrainTestCameras,
82 | cos_operator.CameraOnSphere
83 | ]
84 |
85 | # load addon
86 | def register():
87 | for (prop_name, prop_value) in PROPS:
88 | setattr(bpy.types.Scene, prop_name, prop_value)
89 |
90 | for cls in CLASSES:
91 | bpy.utils.register_class(cls)
92 |
93 | bpy.app.handlers.render_complete.append(helper.post_render)
94 | bpy.app.handlers.render_cancel.append(helper.post_render)
95 | bpy.app.handlers.frame_change_post.append(helper.cos_camera_update)
96 | bpy.app.handlers.depsgraph_update_post.append(helper.properties_desgraph_upd)
97 | bpy.app.handlers.depsgraph_update_post.append(helper.set_init_props)
98 |
99 | # deregister addon
100 | def unregister():
101 | for (prop_name, _) in PROPS:
102 | delattr(bpy.types.Scene, prop_name)
103 |
104 | bpy.app.handlers.render_complete.remove(helper.post_render)
105 | bpy.app.handlers.render_cancel.remove(helper.post_render)
106 | bpy.app.handlers.frame_change_post.remove(helper.cos_camera_update)
107 | bpy.app.handlers.depsgraph_update_post.remove(helper.properties_desgraph_upd)
108 | # bpy.app.handlers.depsgraph_update_post.remove(helper.set_init_props) # set_init_props removes itself once executed
109 |
110 | for cls in CLASSES:
111 | bpy.utils.unregister_class(cls)
112 |
113 |
114 | if __name__ == '__main__':
115 | register()
--------------------------------------------------------------------------------
/blender_manifest.toml:
--------------------------------------------------------------------------------
1 | id = "BlenderNeRF"
2 | name = "BlenderNeRF"
3 | version = "6.0.1"
4 | type = "add-on"
5 |
6 | tagline = "Easy NeRF synthetic dataset creation within Blender"
7 | maintainer = "Maxime Raafat "
8 | website = "https://github.com/maximeraafat/BlenderNeRF"
9 |
10 | schema_version = "1.0.0"
11 | blender_version_min = "4.2.0"
12 | # blender_version_max = "5.1.0"
13 |
14 | # license conforming to https://spdx.org/licenses/ (use the "SPDX:" prefix)
15 | # https://docs.blender.org/manual/en/dev/advanced/extensions/licenses.html
16 | license = ["SPDX:MIT",]
--------------------------------------------------------------------------------
/blender_nerf_operator.py:
--------------------------------------------------------------------------------
1 | import os
2 | import math
3 | import json
4 | import datetime
5 | import bpy
6 |
7 |
8 | # global addon script variables
9 | OUTPUT_TRAIN = 'train'
10 | OUTPUT_TEST = 'test'
11 | CAMERA_NAME = 'BlenderNeRF Camera'
12 | TMP_VERTEX_COLORS = 'blendernerf_vertex_colors_tmp'
13 |
14 |
15 | # blender nerf operator parent class
16 | class BlenderNeRF_Operator(bpy.types.Operator):
17 |
18 | # camera intrinsics
19 | def get_camera_intrinsics(self, scene, camera):
20 | camera_angle_x = camera.data.angle_x
21 | camera_angle_y = camera.data.angle_y
22 |
23 | # camera properties
24 | f_in_mm = camera.data.lens # focal length in mm
25 | scale = scene.render.resolution_percentage / 100
26 | width_res_in_px = scene.render.resolution_x * scale # width
27 | height_res_in_px = scene.render.resolution_y * scale # height
28 | optical_center_x = width_res_in_px / 2
29 | optical_center_y = height_res_in_px / 2
30 |
31 | # pixel aspect ratios
32 | size_x = scene.render.pixel_aspect_x * width_res_in_px
33 | size_y = scene.render.pixel_aspect_y * height_res_in_px
34 | pixel_aspect_ratio = scene.render.pixel_aspect_x / scene.render.pixel_aspect_y
35 |
36 | # sensor fit and sensor size (and camera angle swap in specific cases)
37 | if camera.data.sensor_fit == 'AUTO':
38 | sensor_size_in_mm = camera.data.sensor_height if width_res_in_px < height_res_in_px else camera.data.sensor_width
39 | if width_res_in_px < height_res_in_px:
40 | sensor_fit = 'VERTICAL'
41 | camera_angle_x, camera_angle_y = camera_angle_y, camera_angle_x
42 | elif width_res_in_px > height_res_in_px:
43 | sensor_fit = 'HORIZONTAL'
44 | else:
45 | sensor_fit = 'VERTICAL' if size_x <= size_y else 'HORIZONTAL'
46 |
47 | else:
48 | sensor_fit = camera.data.sensor_fit
49 | if sensor_fit == 'VERTICAL':
50 | sensor_size_in_mm = camera.data.sensor_height if width_res_in_px <= height_res_in_px else camera.data.sensor_width
51 | if width_res_in_px <= height_res_in_px:
52 | camera_angle_x, camera_angle_y = camera_angle_y, camera_angle_x
53 |
54 | # focal length for horizontal sensor fit
55 | if sensor_fit == 'HORIZONTAL':
56 | sensor_size_in_mm = camera.data.sensor_width
57 | s_u = f_in_mm / sensor_size_in_mm * width_res_in_px
58 | s_v = f_in_mm / sensor_size_in_mm * width_res_in_px * pixel_aspect_ratio
59 |
60 | # focal length for vertical sensor fit
61 | if sensor_fit == 'VERTICAL':
62 | s_u = f_in_mm / sensor_size_in_mm * width_res_in_px / pixel_aspect_ratio
63 | s_v = f_in_mm / sensor_size_in_mm * width_res_in_px
64 |
65 | camera_intr_dict = {
66 | 'camera_angle_x': camera_angle_x,
67 | 'camera_angle_y': camera_angle_y,
68 | 'fl_x': s_u,
69 | 'fl_y': s_v,
70 | 'k1': 0.0,
71 | 'k2': 0.0,
72 | 'p1': 0.0,
73 | 'p2': 0.0,
74 | 'cx': optical_center_x,
75 | 'cy': optical_center_y,
76 | 'w': width_res_in_px,
77 | 'h': height_res_in_px,
78 | 'aabb_scale': scene.aabb
79 | }
80 |
81 | return {'camera_angle_x': camera_angle_x} if scene.nerf else camera_intr_dict
82 |
83 | # camera extrinsics (transform matrices)
84 | def get_camera_extrinsics(self, scene, camera, mode='TRAIN', method='SOF'):
85 | assert mode == 'TRAIN' or mode == 'TEST'
86 | assert method == 'SOF' or method == 'TTC' or method == 'COS'
87 |
88 | if scene.splats and scene.splats_test_dummy and mode == 'TEST':
89 | return []
90 |
91 | initFrame = scene.frame_current
92 | step = scene.train_frame_steps if (mode == 'TRAIN' and method == 'SOF') else scene.frame_step
93 | if (mode == 'TRAIN' and method == 'COS'):
94 | end = scene.frame_start + scene.cos_nb_frames - 1
95 | elif (mode == 'TRAIN' and method == 'TTC'):
96 | end = scene.frame_start + scene.ttc_nb_frames - 1
97 | else:
98 | end = scene.frame_end
99 |
100 | camera_extr_dict = []
101 | for frame in range(scene.frame_start, end + 1, step):
102 | scene.frame_set(frame)
103 | filename = os.path.basename( scene.render.frame_path(frame=frame) )
104 | filedir = OUTPUT_TRAIN * (mode == 'TRAIN') + OUTPUT_TEST * (mode == 'TEST') # 'train' or 'test' subfolder depending on mode
105 |
106 | frame_data = {
107 | 'file_path': os.path.join(filedir, os.path.splitext(filename)[0] if scene.splats else filename),
108 | 'transform_matrix': self.listify_matrix(camera.matrix_world)
109 | }
110 |
111 | camera_extr_dict.append(frame_data)
112 |
113 | scene.frame_set(initFrame) # set back to initial frame
114 |
115 | return camera_extr_dict
116 |
117 | # export vertex colors for each visible mesh
118 | def save_splats_ply(self, scene, directory):
119 | # create temporary vertex colors
120 | for obj in scene.objects:
121 | if obj.type == 'MESH':
122 | if not obj.data.vertex_colors:
123 | obj.data.vertex_colors.new(name=TMP_VERTEX_COLORS)
124 |
125 | if bpy.context.object is None or bpy.context.active_object is None:
126 | self.report({'INFO'}, 'No object active. Setting first object as active.')
127 | bpy.context.view_layer.objects.active = bpy.data.objects[0]
128 |
129 | init_mode = bpy.context.object.mode
130 | bpy.ops.object.mode_set(mode='OBJECT')
131 |
132 | init_active_object = bpy.context.active_object
133 | init_selected_objects = bpy.context.selected_objects
134 | bpy.ops.object.select_all(action='DESELECT')
135 |
136 | # select only visible meshes
137 | for obj in scene.objects:
138 | if obj.type == 'MESH' and self.is_object_visible(obj):
139 | obj.select_set(True)
140 |
141 | # save ply file
142 | bpy.ops.wm.ply_export(filepath=os.path.join(directory, 'points3d.ply'), export_normals=True, export_attributes=False, ascii_format=True)
143 |
144 | # remove temporary vertex colors
145 | for obj in scene.objects:
146 | if obj.type == 'MESH' and self.is_object_visible(obj):
147 | if obj.data.vertex_colors and TMP_VERTEX_COLORS in obj.data.vertex_colors:
148 | obj.data.vertex_colors.remove(obj.data.vertex_colors[TMP_VERTEX_COLORS])
149 |
150 | bpy.context.view_layer.objects.active = init_active_object
151 | bpy.ops.object.select_all(action='DESELECT')
152 |
153 | # reselect previously selected objects
154 | for obj in init_selected_objects:
155 | obj.select_set(True)
156 |
157 | bpy.ops.object.mode_set(mode=init_mode)
158 |
159 | def save_json(self, directory, filename, data, indent=4):
160 | filepath = os.path.join(directory, filename)
161 | with open(filepath, 'w') as file:
162 | json.dump(data, file, indent=indent)
163 |
164 | def is_power_of_two(self, x):
165 | return x > 0 and math.log2(x).is_integer() # guard against non positive values, for which log2 is undefined
166 |
167 | # function from original nerf 360_view.py code for blender
168 | def listify_matrix(self, matrix):
169 | matrix_list = []
170 | for row in matrix:
171 | matrix_list.append(list(row))
172 | return matrix_list
173 |
174 | # check whether an object is visible in render
175 | def is_object_visible(self, obj):
176 | if obj.hide_render:
177 | return False
178 |
179 | for collection in obj.users_collection:
180 | if collection.hide_render:
181 | return False
182 |
183 | return True
184 |
185 | # assert messages
186 | def asserts(self, scene, method='SOF'):
187 | assert method == 'SOF' or method == 'TTC' or method == 'COS'
188 |
189 | camera = scene.camera
190 | train_camera = scene.camera_train_target
191 | test_camera = scene.camera_test_target
192 |
193 | sof_name = scene.sof_dataset_name
194 | ttc_name = scene.ttc_dataset_name
195 | cos_name = scene.cos_dataset_name
196 |
197 | error_messages = []
198 |
199 | if (method == 'SOF' or method == 'COS') and not camera.data.type == 'PERSP':
200 | error_messages.append('Only perspective cameras are supported!')
201 |
202 | if method == 'TTC' and not (train_camera.data.type == 'PERSP' and test_camera.data.type == 'PERSP'):
203 | error_messages.append('Only perspective cameras are supported!')
204 |
205 | if method == 'COS' and CAMERA_NAME in scene.objects.keys():
206 | sphere_camera = scene.objects[CAMERA_NAME]
207 | if not sphere_camera.data.type == 'PERSP':
208 | error_messages.append('BlenderNeRF Camera must remain a perspective camera!')
209 |
210 | if (method == 'SOF' and sof_name == '') or (method == 'TTC' and ttc_name == '') or (method == 'COS' and cos_name == ''):
211 | error_messages.append('Dataset name cannot be empty!')
212 |
213 | if method == 'COS' and any(x == 0 for x in scene.sphere_scale):
214 | error_messages.append('The BlenderNeRF Sphere cannot be flat! Change its scale to be non zero in all axes.')
215 |
216 | if not scene.nerf and not self.is_power_of_two(scene.aabb):
217 | error_messages.append('AABB scale needs to be a power of two!')
218 |
219 | if scene.save_path == '':
220 | error_messages.append('Save path cannot be empty!')
221 |
222 | if scene.splats and not scene.test_data:
223 | error_messages.append('Gaussian Splatting requires test data!')
224 |
225 | if scene.splats and scene.render.image_settings.file_format != 'PNG':
226 | error_messages.append('Gaussian Splatting requires PNG file extensions!')
227 |
228 | return error_messages
229 |
230 | def save_log_file(self, scene, directory, method='SOF'):
231 | assert method == 'SOF' or method == 'TTC' or method == 'COS'
232 | now = datetime.datetime.now()
233 |
234 | logdata = {
235 | 'BlenderNeRF Version': scene.blendernerf_version,
236 | 'Date and Time' : now.strftime("%d/%m/%Y %H:%M:%S"),
237 | 'Train': scene.train_data,
238 | 'Test': scene.test_data,
239 | 'AABB': scene.aabb,
240 | 'Render Frames': scene.render_frames,
241 | 'File Format': 'NeRF' if scene.nerf else 'NGP',
242 | 'Save Path': scene.save_path,
243 | 'Method': method
244 | }
245 |
246 | if method == 'SOF':
247 | logdata['Frame Step'] = scene.train_frame_steps
248 | logdata['Camera'] = scene.camera.name
249 | logdata['Dataset Name'] = scene.sof_dataset_name
250 |
251 | elif method == 'TTC':
252 | logdata['Train Camera Name'] = scene.camera_train_target.name
253 | logdata['Test Camera Name'] = scene.camera_test_target.name
254 | logdata['Frames'] = scene.ttc_nb_frames
255 | logdata['Dataset Name'] = scene.ttc_dataset_name
256 |
257 | else:
258 | logdata['Camera'] = scene.camera.name
259 | logdata['Location'] = str(list(scene.sphere_location))
260 | logdata['Rotation'] = str(list(scene.sphere_rotation))
261 | logdata['Scale'] = str(list(scene.sphere_scale))
262 | logdata['Radius'] = scene.sphere_radius
263 | logdata['Lens'] = str(scene.focal) + ' mm'
264 | logdata['Seed'] = scene.seed
265 | logdata['Frames'] = scene.cos_nb_frames
266 | logdata['Upper Views'] = scene.upper_views
267 | logdata['Outwards'] = scene.outwards
268 | logdata['Dataset Name'] = scene.cos_dataset_name
269 |
270 | self.save_json(directory, filename='log.txt', data=logdata)
--------------------------------------------------------------------------------
/blender_nerf_ui.py:
--------------------------------------------------------------------------------
1 | import bpy
2 |
3 |
4 | # blender nerf shared ui properties class
5 | class BlenderNeRF_UI(bpy.types.Panel):
6 | '''BlenderNeRF UI'''
7 | bl_idname = 'VIEW3D_PT_blender_nerf_ui'
8 | bl_label = 'BlenderNeRF shared UI'
9 | bl_space_type = 'VIEW_3D'
10 | bl_region_type = 'UI'
11 | bl_category = 'BlenderNeRF'
12 |
13 | def draw(self, context):
14 | layout = self.layout
15 | scene = context.scene
16 |
17 | layout.alignment = 'CENTER'
18 |
19 | row = layout.row(align=True)
20 | row.prop(scene, 'train_data', toggle=True)
21 | row.prop(scene, 'test_data', toggle=True)
22 |
23 | if not (scene.train_data or scene.test_data):
24 | layout.label(text='Nothing will happen!')
25 |
26 | else:
27 | layout.prop(scene, 'aabb')
28 |
29 | if scene.train_data:
30 | layout.separator()
31 | layout.prop(scene, 'render_frames')
32 |
33 | layout.prop(scene, 'logs')
34 | layout.prop(scene, 'splats', text='Gaussian Points (PLY file)')
35 |
36 | if scene.splats:
37 | layout.separator()
38 | layout.label(text='Gaussian Test Camera Poses')
39 | row = layout.row(align=True)
40 | row.prop(scene, 'splats_test_dummy', toggle=True, text='Dummy')
41 | row.prop(scene, 'splats_test_dummy', toggle=True, text='Full', invert_checkbox=True)
42 |
43 | layout.separator()
44 | layout.label(text='File Format')
45 |
46 | row = layout.row(align=True)
47 | row.prop(scene, 'nerf', toggle=True, text='NGP', invert_checkbox=True)
48 | row.prop(scene, 'nerf', toggle=True)
49 |
50 | layout.separator()
51 | layout.use_property_split = True
52 | layout.prop(scene, 'save_path')
--------------------------------------------------------------------------------
/cos_operator.py:
--------------------------------------------------------------------------------
1 | import os
2 | import shutil
3 | import bpy
4 | from . import helper, blender_nerf_operator
5 |
6 |
7 | # global addon script variables
8 | EMPTY_NAME = 'BlenderNeRF Sphere'
9 | CAMERA_NAME = 'BlenderNeRF Camera'
10 |
11 | # camera on sphere operator class
12 | class CameraOnSphere(blender_nerf_operator.BlenderNeRF_Operator):
13 | '''Camera on Sphere Operator'''
14 | bl_idname = 'object.camera_on_sphere'
15 | bl_label = 'Camera on Sphere COS'
16 |
17 | def execute(self, context):
18 | scene = context.scene
19 | camera = scene.camera
20 |
21 | # check if camera is selected : next errors depend on an existing camera
22 | if camera is None:
23 | self.report({'ERROR'}, 'Be sure to have a selected camera!')
24 | return {'FINISHED'}
25 |
26 | # if there is an error, print first error message
27 | error_messages = self.asserts(scene, method='COS')
28 | if len(error_messages) > 0:
29 | self.report({'ERROR'}, error_messages[0])
30 | return {'FINISHED'}
31 |
32 | output_data = self.get_camera_intrinsics(scene, camera)
33 |
34 | # clean directory name (unsupported characters replaced) and output path
35 | output_dir = bpy.path.clean_name(scene.cos_dataset_name)
36 | output_path = os.path.join(scene.save_path, output_dir)
37 | os.makedirs(output_path, exist_ok=True)
38 |
39 | if scene.logs: self.save_log_file(scene, output_path, method='COS')
40 | if scene.splats: self.save_splats_ply(scene, output_path)
41 |
42 | # initial property might have changed since set_init_props update
43 | scene.init_output_path = scene.render.filepath
44 |
45 | # other initial properties
46 | scene.init_sphere_exists = scene.show_sphere
47 | scene.init_camera_exists = scene.show_camera
48 | scene.init_frame_end = scene.frame_end
49 | scene.init_active_camera = camera
50 |
51 | if scene.test_data:
52 | # testing transforms
53 | output_data['frames'] = self.get_camera_extrinsics(scene, camera, mode='TEST', method='COS')
54 | self.save_json(output_path, 'transforms_test.json', output_data)
55 |
56 | if scene.train_data:
57 | if not scene.show_camera: scene.show_camera = True
58 |
59 | # train camera on sphere
60 | sphere_camera = scene.objects[CAMERA_NAME]
61 | sphere_output_data = self.get_camera_intrinsics(scene, sphere_camera)
62 | scene.camera = sphere_camera
63 |
64 | # training transforms
65 | sphere_output_data['frames'] = self.get_camera_extrinsics(scene, sphere_camera, mode='TRAIN', method='COS')
66 | self.save_json(output_path, 'transforms_train.json', sphere_output_data)
67 |
68 | # rendering
69 | if scene.render_frames:
70 | output_train = os.path.join(output_path, 'train')
71 | os.makedirs(output_train, exist_ok=True)
72 | scene.rendering = (False, False, True)
73 | scene.frame_end = scene.frame_start + scene.cos_nb_frames - 1 # update end frame
74 | scene.render.filepath = os.path.join(output_train, '') # training frames path
75 | bpy.ops.render.render('INVOKE_DEFAULT', animation=True, write_still=True) # render scene
76 |
77 | # if frames are rendered, the below code is executed by the handler function
78 | if not any(scene.rendering):
79 | # reset camera settings
80 | if not scene.init_camera_exists: helper.delete_camera(scene, CAMERA_NAME)
81 | if not scene.init_sphere_exists:
82 | objects = bpy.data.objects
83 | objects.remove(objects[EMPTY_NAME], do_unlink=True)
84 | scene.show_sphere = False
85 | scene.sphere_exists = False
86 |
87 | scene.camera = scene.init_active_camera
88 |
89 | # compress dataset and remove folder (only keep zip)
90 | shutil.make_archive(output_path, 'zip', output_path) # output filename = output_path
91 | shutil.rmtree(output_path)
92 |
93 | return {'FINISHED'}
--------------------------------------------------------------------------------
/cos_ui.py:
--------------------------------------------------------------------------------
1 | import bpy
2 |
3 |
4 | # camera on sphere ui class
5 | class COS_UI(bpy.types.Panel):
6 | '''Camera on Sphere UI'''
7 | bl_idname = 'VIEW3D_PT_cos_ui'
8 | bl_label = 'Camera on Sphere COS'
9 | bl_space_type = 'VIEW_3D'
10 | bl_region_type = 'UI'
11 | bl_category = 'BlenderNeRF'
12 | bl_options = {'DEFAULT_CLOSED'}
13 |
14 | def draw(self, context):
15 | layout = self.layout
16 | scene = context.scene
17 |
18 | layout.alignment = 'CENTER'
19 |
20 | layout.use_property_split = True
21 | layout.prop(scene, 'camera')
22 | layout.prop(scene, 'sphere_location')
23 | layout.prop(scene, 'sphere_rotation')
24 | layout.prop(scene, 'sphere_scale')
25 | layout.prop(scene, 'sphere_radius')
26 | layout.prop(scene, 'focal')
27 | layout.prop(scene, 'seed')
28 |
29 | layout.prop(scene, 'cos_nb_frames')
30 | layout.prop(scene, 'upper_views', toggle=True)
31 | layout.prop(scene, 'outwards', toggle=True)
32 |
33 | layout.use_property_split = False
34 | layout.separator()
35 | layout.label(text='Preview')
36 |
37 | row = layout.row(align=True)
38 | row.prop(scene, 'show_sphere', toggle=True)
39 | row.prop(scene, 'show_camera', toggle=True)
40 |
41 | layout.separator()
42 | layout.use_property_split = True
43 | layout.prop(scene, 'cos_dataset_name')
44 |
45 | layout.separator()
46 | layout.operator('object.camera_on_sphere', text='PLAY COS')
--------------------------------------------------------------------------------
/helper.py:
--------------------------------------------------------------------------------
1 | import os
2 | import shutil
3 | import random
4 | import math
5 | import mathutils
6 | import bpy
7 | from bpy.app.handlers import persistent
8 |
9 |
10 | # global addon script variables
11 | EMPTY_NAME = 'BlenderNeRF Sphere'
12 | CAMERA_NAME = 'BlenderNeRF Camera'
13 |
14 | ## property poll and update functions
15 |
16 | # camera pointer property poll function
17 | def poll_is_camera(self, obj):
18 | return obj.type == 'CAMERA'
19 |
20 | def visualize_sphere(self, context):
21 | scene = context.scene
22 |
23 | if EMPTY_NAME not in scene.objects.keys() and not scene.sphere_exists:
24 | # if empty sphere does not exist, create
25 | bpy.ops.object.empty_add(type='SPHERE') # non default location, rotation and scale here are sometimes not applied, so we enforce them manually below
26 | empty = context.active_object
27 | empty.name = EMPTY_NAME
28 | empty.location = scene.sphere_location
29 | empty.rotation_euler = scene.sphere_rotation
30 | empty.scale = scene.sphere_scale
31 | empty.empty_display_size = scene.sphere_radius
32 |
33 | scene.sphere_exists = True
34 |
35 | elif EMPTY_NAME in scene.objects.keys() and scene.sphere_exists:
36 | if CAMERA_NAME in scene.objects.keys() and scene.camera_exists:
37 | delete_camera(scene, CAMERA_NAME)
38 |
39 | objects = bpy.data.objects
40 | objects.remove(objects[EMPTY_NAME], do_unlink=True)
41 |
42 | scene.sphere_exists = False
43 |
44 | def visualize_camera(self, context):
45 | scene = context.scene
46 |
47 | if CAMERA_NAME not in scene.objects.keys() and not scene.camera_exists:
48 | if EMPTY_NAME not in scene.objects.keys():
49 | scene.show_sphere = True
50 |
51 | bpy.ops.object.camera_add()
52 | camera = context.active_object
53 | camera.name = CAMERA_NAME
54 | camera.data.name = CAMERA_NAME
55 | camera.location = sample_from_sphere(scene)
56 | bpy.data.cameras[CAMERA_NAME].lens = scene.focal
57 |
58 | cam_constraint = camera.constraints.new(type='TRACK_TO')
59 | cam_constraint.track_axis = 'TRACK_Z' if scene.outwards else 'TRACK_NEGATIVE_Z'
60 | cam_constraint.up_axis = 'UP_Y'
61 | cam_constraint.target = bpy.data.objects[EMPTY_NAME]
62 |
63 | scene.camera_exists = True
64 |
65 | elif CAMERA_NAME in scene.objects.keys() and scene.camera_exists:
66 | objects = bpy.data.objects
67 | objects.remove(objects[CAMERA_NAME], do_unlink=True)
68 |
69 | for block in bpy.data.cameras:
70 | if CAMERA_NAME in block.name:
71 | bpy.data.cameras.remove(block)
72 |
73 | scene.camera_exists = False
74 |
75 | def delete_camera(scene, name):
76 | objects = bpy.data.objects
77 | objects.remove(objects[name], do_unlink=True)
78 |
79 | scene.show_camera = False
80 | scene.camera_exists = False
81 |
82 | for block in bpy.data.cameras:
83 | if name in block.name:
84 | bpy.data.cameras.remove(block)
85 |
86 | # sample a point on the training sphere (non uniform when the sphere is stretched or squeezed)
87 | def sample_from_sphere(scene):
88 | seed = (2654435761 * (scene.seed + 1)) ^ (805459861 * (scene.frame_current + 1)) # mix seed and frame number with multiplicative hashing constants
89 | rng = random.Random(seed) # random number generator
90 |
91 | # sample random angles
92 | theta = rng.random() * 2 * math.pi
93 | phi = math.acos(1 - 2 * rng.random()) # ensure uniform sampling from unit sphere
94 |
95 | # uniform sample from unit sphere, given theta and phi
96 | unit_x = math.cos(theta) * math.sin(phi)
97 | unit_y = math.sin(theta) * math.sin(phi)
98 | unit_z = abs( math.cos(phi) ) if scene.upper_views else math.cos(phi)
99 | unit = mathutils.Vector((unit_x, unit_y, unit_z))
100 |
101 | # ellipsoid sample : center + rotation @ radius * unit sphere
102 | point = scene.sphere_radius * mathutils.Vector(scene.sphere_scale) * unit
103 | rotation = mathutils.Euler(scene.sphere_rotation).to_matrix()
104 | point = mathutils.Vector(scene.sphere_location) + rotation @ point
105 |
106 | return point
107 |
108 | ## two way property link between sphere and ui (property and handler functions)
109 | # https://blender.stackexchange.com/questions/261174/2-way-property-link-or-a-filtered-property-display
110 |
111 | def properties_ui_upd(self, context):
112 | can_scene_upd(self, context)
113 |
114 | @persistent
115 | def properties_desgraph_upd(scene):
116 | can_properties_upd(scene)
117 |
118 | def properties_ui(self, context):
119 | scene = context.scene
120 |
121 | if EMPTY_NAME in scene.objects.keys():
122 | upd_off()
123 | bpy.data.objects[EMPTY_NAME].location = scene.sphere_location
124 | bpy.data.objects[EMPTY_NAME].rotation_euler = scene.sphere_rotation
125 | bpy.data.objects[EMPTY_NAME].scale = scene.sphere_scale
126 | bpy.data.objects[EMPTY_NAME].empty_display_size = scene.sphere_radius
127 | upd_on()
128 |
129 | if CAMERA_NAME in scene.objects.keys():
130 | upd_off()
131 | bpy.data.cameras[CAMERA_NAME].lens = scene.focal
132 | bpy.context.scene.objects[CAMERA_NAME].constraints['Track To'].track_axis = 'TRACK_Z' if scene.outwards else 'TRACK_NEGATIVE_Z'
133 | upd_on()
134 |
135 | # if empty sphere modified outside of ui panel, edit panel properties
136 | def properties_desgraph(scene):
137 | if scene.show_sphere and EMPTY_NAME in scene.objects.keys():
138 | upd_off()
139 | scene.sphere_location = bpy.data.objects[EMPTY_NAME].location
140 | scene.sphere_rotation = bpy.data.objects[EMPTY_NAME].rotation_euler
141 | scene.sphere_scale = bpy.data.objects[EMPTY_NAME].scale
142 | scene.sphere_radius = bpy.data.objects[EMPTY_NAME].empty_display_size
143 | upd_on()
144 |
145 | if scene.show_camera and CAMERA_NAME in scene.objects.keys():
146 | upd_off()
147 | scene.focal = bpy.data.cameras[CAMERA_NAME].lens
148 | scene.outwards = (bpy.context.scene.objects[CAMERA_NAME].constraints['Track To'].track_axis == 'TRACK_Z')
149 | upd_on()
150 |
151 | if EMPTY_NAME not in scene.objects.keys() and scene.sphere_exists:
152 | if CAMERA_NAME in scene.objects.keys() and scene.camera_exists:
153 | delete_camera(scene, CAMERA_NAME)
154 |
155 | scene.show_sphere = False
156 | scene.sphere_exists = False
157 |
158 | if CAMERA_NAME not in scene.objects.keys() and scene.camera_exists:
159 | scene.show_camera = False
160 | scene.camera_exists = False
161 |
162 | for block in bpy.data.cameras:
163 | if CAMERA_NAME in block.name:
164 | bpy.data.cameras.remove(block)
165 |
166 | if CAMERA_NAME in scene.objects.keys():
167 | scene.objects[CAMERA_NAME].location = sample_from_sphere(scene)
168 |
169 | def empty_fn(self, context): pass
170 |
171 | can_scene_upd = properties_ui
172 | can_properties_upd = properties_desgraph
173 |
174 | def upd_off(): # make sub function to an empty function
175 | global can_scene_upd, can_properties_upd
176 | can_scene_upd = empty_fn
177 | can_properties_upd = empty_fn
178 | def upd_on():
179 | global can_scene_upd, can_properties_upd
180 | can_scene_upd = properties_ui
181 | can_properties_upd = properties_desgraph
182 |
183 |
184 | ## blender handler functions
185 |
186 | # reset properties back to intial
187 | @persistent
188 | def post_render(scene):
189 | if any(scene.rendering): # execute this function only when rendering with addon
190 | dataset_names = (scene.sof_dataset_name, scene.ttc_dataset_name, scene.cos_dataset_name)
191 | method_dataset_name = dataset_names[ list(scene.rendering).index(True) ]
192 |
193 | if scene.rendering[0]: scene.frame_step = scene.init_frame_step # sof : reset frame step
194 |
195 | if scene.rendering[1]: # ttc : reset frame end
196 | scene.frame_end = scene.init_frame_end
197 |
198 | if scene.rendering[2]: # cos : reset camera settings
199 | if not scene.init_camera_exists: delete_camera(scene, CAMERA_NAME)
200 | if not scene.init_sphere_exists:
201 | objects = bpy.data.objects
202 | objects.remove(objects[EMPTY_NAME], do_unlink=True)
203 | scene.show_sphere = False
204 | scene.sphere_exists = False
205 |
206 | scene.camera = scene.init_active_camera
207 | scene.frame_end = scene.init_frame_end
208 |
209 | scene.rendering = (False, False, False)
210 | scene.render.filepath = scene.init_output_path # reset filepath
211 |
212 | # clean directory name (unsupported characters replaced) and output path
213 | output_dir = bpy.path.clean_name(method_dataset_name)
214 | output_path = os.path.join(scene.save_path, output_dir)
215 |
216 | # compress dataset and remove folder (only keep zip)
217 | shutil.make_archive(output_path, 'zip', output_path) # output filename = output_path
218 | shutil.rmtree(output_path)
219 |
220 | # set initial property values (bpy.data and bpy.context require a loaded scene)
221 | @persistent
222 | def set_init_props(scene):
223 | filepath = bpy.data.filepath
224 | filename = bpy.path.basename(filepath)
225 | default_save_path = filepath[:-len(filename)] # remove file name from blender file path = directory path
226 |
227 | scene.save_path = default_save_path
228 | scene.init_frame_step = scene.frame_step
229 | scene.init_output_path = scene.render.filepath
230 |
231 | bpy.app.handlers.depsgraph_update_post.remove(set_init_props)
232 |
233 | # update cos camera when changing frame
234 | @persistent
235 | def cos_camera_update(scene):
236 | if CAMERA_NAME in scene.objects.keys():
237 | scene.objects[CAMERA_NAME].location = sample_from_sphere(scene)
--------------------------------------------------------------------------------
/sof_operator.py:
--------------------------------------------------------------------------------
1 | import os
2 | import shutil
3 | import bpy
4 | from . import blender_nerf_operator
5 |
6 |
7 | # subset of frames operator class
8 | class SubsetOfFrames(blender_nerf_operator.BlenderNeRF_Operator):
9 | '''Subset of Frames Operator'''
10 | bl_idname = 'object.subset_of_frames'
11 | bl_label = 'Subset of Frames SOF'
12 |
13 | def execute(self, context):
14 | scene = context.scene
15 | camera = scene.camera
16 |
17 | # check if camera is selected : next errors depend on an existing camera
18 | if camera is None:
19 | self.report({'ERROR'}, 'Be sure to have a selected camera!')
20 | return {'FINISHED'}
21 |
22 | # if there is an error, print first error message
23 | error_messages = self.asserts(scene, method='SOF')
24 | if len(error_messages) > 0:
25 | self.report({'ERROR'}, error_messages[0])
26 | return {'FINISHED'}
27 |
28 | output_data = self.get_camera_intrinsics(scene, camera)
29 |
30 | # clean directory name (unsupported characters replaced) and output path
31 | output_dir = bpy.path.clean_name(scene.sof_dataset_name)
32 | output_path = os.path.join(scene.save_path, output_dir)
33 | os.makedirs(output_path, exist_ok=True)
34 |
35 | if scene.logs: self.save_log_file(scene, output_path, method='SOF')
36 | if scene.splats: self.save_splats_ply(scene, output_path)
37 |
38 | # initial properties might have changed since set_init_props update
39 | scene.init_frame_step = scene.frame_step
40 | scene.init_output_path = scene.render.filepath
41 |
42 | if scene.test_data:
43 | # testing transforms
44 | output_data['frames'] = self.get_camera_extrinsics(scene, camera, mode='TEST', method='SOF')
45 | self.save_json(output_path, 'transforms_test.json', output_data)
46 |
47 | if scene.train_data:
48 | # training transforms
49 | output_data['frames'] = self.get_camera_extrinsics(scene, camera, mode='TRAIN', method='SOF')
50 | self.save_json(output_path, 'transforms_train.json', output_data)
51 |
52 | # rendering
53 | if scene.render_frames:
54 | output_train = os.path.join(output_path, 'train')
55 | os.makedirs(output_train, exist_ok=True)
56 | scene.rendering = (True, False, False)
57 | scene.frame_step = scene.train_frame_steps # update frame step
58 | scene.render.filepath = os.path.join(output_train, '') # training frames path
59 | bpy.ops.render.render('INVOKE_DEFAULT', animation=True, write_still=True) # render scene
60 |
61 | # if frames are rendered, the below code is executed by the handler function
62 | if not any(scene.rendering):
63 | # compress dataset and remove folder (only keep zip)
64 | shutil.make_archive(output_path, 'zip', output_path) # output filename = output_path
65 | shutil.rmtree(output_path)
66 |
67 | return {'FINISHED'}
--------------------------------------------------------------------------------
/sof_ui.py:
--------------------------------------------------------------------------------
1 | import bpy
2 |
3 |
4 | # subset of frames ui class
5 | class SOF_UI(bpy.types.Panel):
6 | '''Subset of Frames UI'''
7 | bl_idname = 'VIEW3D_PT_sof_ui'
8 | bl_label = 'Subset of Frames SOF'
9 | bl_space_type = 'VIEW_3D'
10 | bl_region_type = 'UI'
11 | bl_category = 'BlenderNeRF'
12 | bl_options = {'DEFAULT_CLOSED'}
13 |
14 | def draw(self, context):
15 | layout = self.layout
16 | scene = context.scene
17 |
18 | layout.alignment = 'CENTER'
19 |
20 | layout.prop(scene, 'train_frame_steps')
21 |
22 | layout.use_property_split = True
23 | layout.prop(scene, 'camera')
24 |
25 | layout.separator()
26 | layout.prop(scene, 'sof_dataset_name')
27 |
28 | layout.separator()
29 | layout.operator('object.subset_of_frames', text='PLAY SOF')
--------------------------------------------------------------------------------
/ttc_operator.py:
--------------------------------------------------------------------------------
1 | import os
2 | import shutil
3 | import bpy
4 | from . import blender_nerf_operator
5 |
6 |
7 | # train and test cameras operator class
8 | class TrainTestCameras(blender_nerf_operator.BlenderNeRF_Operator):
9 | '''Train and Test Cameras Operator'''
10 | bl_idname = 'object.train_test_cameras'
11 | bl_label = 'Train and Test Cameras TTC'
12 |
13 | def execute(self, context):
14 | scene = context.scene
15 | train_camera = scene.camera_train_target
16 | test_camera = scene.camera_test_target
17 |
18 | # check if cameras are selected : next errors depend on existing cameras
19 | if train_camera is None or test_camera is None:
20 | self.report({'ERROR'}, 'Be sure to have selected a train and test camera!')
21 | return {'FINISHED'}
22 |
23 | # if there is an error, print first error message
24 | error_messages = self.asserts(scene, method='TTC')
25 | if len(error_messages) > 0:
26 | self.report({'ERROR'}, error_messages[0])
27 | return {'FINISHED'}
28 |
29 | output_train_data = self.get_camera_intrinsics(scene, train_camera)
30 | output_test_data = self.get_camera_intrinsics(scene, test_camera)
31 |
32 | # clean directory name (unsupported characters replaced) and output path
33 | output_dir = bpy.path.clean_name(scene.ttc_dataset_name)
34 | output_path = os.path.join(scene.save_path, output_dir)
35 | os.makedirs(output_path, exist_ok=True)
36 |
37 | if scene.logs: self.save_log_file(scene, output_path, method='TTC')
38 | if scene.splats: self.save_splats_ply(scene, output_path)
39 |
40 | # initial properties might have changed since set_init_props update
41 | scene.init_output_path = scene.render.filepath
42 | scene.init_frame_end = scene.frame_end
43 |
44 | if scene.test_data:
45 | # testing transforms
46 | output_test_data['frames'] = self.get_camera_extrinsics(scene, test_camera, mode='TEST', method='TTC')
47 | self.save_json(output_path, 'transforms_test.json', output_test_data)
48 |
49 | if scene.train_data:
50 | # training transforms
51 | output_train_data['frames'] = self.get_camera_extrinsics(scene, train_camera, mode='TRAIN', method='TTC')
52 | self.save_json(output_path, 'transforms_train.json', output_train_data)
53 |
54 | # rendering
55 | if scene.render_frames:
56 | output_train = os.path.join(output_path, 'train')
57 | os.makedirs(output_train, exist_ok=True)
58 | scene.rendering = (False, True, False)
59 | scene.frame_end = scene.frame_start + scene.ttc_nb_frames - 1 # update end frame
60 | scene.render.filepath = os.path.join(output_train, '') # training frames path
61 | bpy.ops.render.render('INVOKE_DEFAULT', animation=True, write_still=True) # render scene
62 |
63 | # if frames are rendered, the below code is executed by the handler function
64 | if not any(scene.rendering):
65 | # compress dataset and remove folder (only keep zip)
66 | shutil.make_archive(output_path, 'zip', output_path) # output filename = output_path
67 | shutil.rmtree(output_path)
68 |
69 | return {'FINISHED'}
--------------------------------------------------------------------------------
/ttc_ui.py:
--------------------------------------------------------------------------------
1 | import bpy
2 |
3 |
4 | # train and test cameras ui class
5 | class TTC_UI(bpy.types.Panel):
6 | '''Train and Test Cameras UI'''
7 | bl_idname = 'VIEW3D_PT_ttc_ui'
8 | bl_label = 'Train and Test Cameras TTC'
9 | bl_space_type = 'VIEW_3D'
10 | bl_region_type = 'UI'
11 | bl_category = 'BlenderNeRF'
12 | bl_options = {'DEFAULT_CLOSED'}
13 |
14 | def draw(self, context):
15 | layout = self.layout
16 | scene = context.scene
17 |
18 | layout.alignment = 'CENTER'
19 |
20 | layout.use_property_split = True
21 | layout.prop(scene, 'ttc_nb_frames')
22 | layout.prop_search(scene, 'camera_train_target', scene, 'objects')
23 | layout.prop_search(scene, 'camera_test_target', scene, 'objects')
24 |
25 | layout.separator()
26 | layout.prop(scene, 'ttc_dataset_name')
27 |
28 | layout.separator()
29 | layout.operator('object.train_test_cameras', text='PLAY TTC')
--------------------------------------------------------------------------------