├── .gitignore
├── README.md
├── assets
│   ├── 0_teaser.gif
│   ├── 1_installation_addon.gif
│   ├── 1_installation_code.gif
│   ├── 2_locating_panel.gif
│   ├── 3_load_data.gif
│   ├── 4_inspect_data.gif
│   ├── 5_crop_point_cloud.gif
│   └── 6_generate_bounding_sphere.gif
├── neuralangelo_addon.py
├── start_blender_with_addon.py
└── toy_example
    ├── images
    │   ├── 000001.jpg
    │   ├── 000002.jpg
    │   ├── 000003.jpg
    │   ├── 000004.jpg
    │   ├── 000005.jpg
    │   ├── 000006.jpg
    │   ├── 000007.jpg
    │   ├── 000008.jpg
    │   ├── 000009.jpg
    │   ├── 000010.jpg
    │   ├── 000011.jpg
    │   ├── 000012.jpg
    │   ├── 000013.jpg
    │   ├── 000020.jpg
    │   ├── 000021.jpg
    │   ├── 000023.jpg
    │   └── 000024.jpg
    ├── run-colmap-geometric.sh
    ├── run-colmap-photometric.sh
    ├── sparse
    │   ├── cameras.bin
    │   ├── images.bin
    │   └── points3D.bin
    └── stereo
        ├── fusion.cfg
        └── patch-match.cfg
/.gitignore:
--------------------------------------------------------------------------------
1 | toy_example.MOV
2 | toy_example/images/
3 | *.blend1
4 | *.blend
5 | .idea
6 | __pycache__
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # BlenderNeuralangelo
4 |
5 |
6 |
7 | 
8 |
9 | [Blender](https://www.blender.org/) addon to inspect and preprocess COLMAP data
10 | for [Neuralangelo (CVPR 2023)](https://research.nvidia.com/labs/dir/neuralangelo/).
11 | The addon allows you to inspect the COLMAP poses, check the sparse reconstruction overlaid with RGB images, and export
12 | the bounding regions for Neuralangelo.
13 |
14 | ## Getting Started
15 |
16 | First clone the code to a local directory.
17 |
18 | ```bash
19 | cd /path/to/dir
20 | git clone git@github.com:mli0603/BlenderNeuralangelo.git
21 | ```
22 |
23 | ### 0: Running COLMAP
24 |
25 | Please follow the [instructions](https://github.com/NVlabs/neuralangelo/blob/main/DATA_PROCESSING.md) to obtain COLMAP results. Alternatively, an example
26 | is provided in the folder [toy_example](toy_example).
27 |
28 | If the COLMAP instructions are followed correctly, the following folder structure should be expected:
29 |
30 | ```
31 | DATA_PATH
32 | ├─ database.db (COLMAP database)
33 | ├─ images (undistorted input images)
34 | ├─ images_raw (raw input images)
35 | ├─ sparse (COLMAP data from SfM)
36 | │ ├─ cameras.bin (camera parameters)
37 | │ ├─ images.bin (images and camera poses)
38 | │ ├─ points3D.bin (sparse point clouds)
39 | │ ├─ 0 (a directory containing individual SfM models. There could also be 1, 2... etc.)
40 | │ ...
41 | ├─ stereo (COLMAP data for MVS, not used here)
42 | ...
43 | ```
44 |
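For reference, the standard COLMAP stages look roughly like the following. This is a sketch of the usual CLI calls, not a substitute for the Neuralangelo instructions above; `DATA_PATH` is a placeholder, and the flags may need adjusting for your capture:

```bash
colmap feature_extractor --database_path DATA_PATH/database.db --image_path DATA_PATH/images_raw
colmap exhaustive_matcher --database_path DATA_PATH/database.db
colmap mapper --database_path DATA_PATH/database.db --image_path DATA_PATH/images_raw --output_path DATA_PATH/sparse
colmap image_undistorter --image_path DATA_PATH/images_raw --input_path DATA_PATH/sparse/0 --output_path DATA_PATH --output_type COLMAP
```
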
45 | ### 1.1: Run Blender from command line
46 | You can run Blender directly, without installing the addon, by:
47 |
48 | ```bash
49 | ./blender --python PATH_TO_BlenderNeuralangelo/start_blender_with_addon.py
50 | ```
51 |
52 | Replace `PATH_TO_BlenderNeuralangelo` with your path to `BlenderNeuralangelo`.
53 |
54 | ### 1.2: Install as Addon
55 |
56 | Installing as a Blender addon avoids re-installation after quitting Blender. To install as an addon:
57 |
58 | 
59 |
60 | ### 1.3: Run as code
61 |
62 | Alternatively, the code can be run directly as follows:
63 |
64 | 
65 |
66 | After quitting Blender, this step must be repeated.
67 |
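If you prefer working from Blender's Python console, the addon file can also be executed in place. This is a minimal sketch, assuming the script registers its classes when executed as a main script; the path is a placeholder:

```python
# Run inside Blender's Python console or Text Editor.
filepath = "/path/to/BlenderNeuralangelo/neuralangelo_addon.py"
exec(compile(open(filepath).read(), filepath, "exec"))
```
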
68 | ### 2: Locating the control panel
69 |
70 | After installation, the BlenderNeuralangelo panel can be found in the sidebar on the right side of the 3D viewport; press `N` if the sidebar is hidden.
71 |
72 | 
73 |
74 | ### 3: Load COLMAP data
75 |
76 | COLMAP data can be loaded by providing the path to the COLMAP working directory. The images, camera parameters and sparse
77 | reconstruction results are then loaded.
78 |
79 | 
80 |
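Under the hood, the addon reads `sparse/cameras.bin`, `sparse/images.bin`, and `sparse/points3D.bin` with its bundled COLMAP reader. A minimal sketch for sanity-checking a workspace with the same reader (run inside Blender, since the module imports `bpy`; the paths are placeholders):

```python
import sys
sys.path.append("/path/to/BlenderNeuralangelo")

from neuralangelo_addon import read_model

# Point this at the sparse reconstruction inside the COLMAP work directory.
cameras, images, points3D = read_model("/path/to/DATA_PATH/sparse/", ext=".bin")
print(len(cameras), "cameras,", len(images), "images,", len(points3D), "points")
```
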
81 | ### 4: Inspect COLMAP data
82 |
83 | COLMAP data can be inspected qualitatively by examining the RGB images, the sparse reconstruction, and the camera poses.
84 |
85 | 
86 |
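Scrubbing the timeline steps the camera and its image plane through the registered views, one frame per image. To jump to a specific view programmatically, a one-line sketch using Blender's standard frame API (frames are one-indexed here):

```python
import bpy

bpy.context.scene.frame_set(7)  # show the 7th registered image and its camera pose
```
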
87 | ### 5: Defining region of interest
88 |
89 | A graphical interface is provided to define the region of interest for Neuralangelo in the form of a bounding box. Points
90 | outside the region of interest can be cropped.
91 |
92 | 
93 |
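The crop itself is a simple axis-aligned containment test: a point is kept only if it lies between all six slider-adjusted planes. A minimal NumPy sketch of the mask the addon computes (names are illustrative):

```python
import numpy as np

def crop_mask(verts, x_min, x_max, y_min, y_max, z_min, z_max):
    """Boolean mask of the points inside the axis-aligned bounding box."""
    return ((verts[:, 0] >= x_min) & (verts[:, 0] <= x_max) &
            (verts[:, 1] >= y_min) & (verts[:, 1] <= y_max) &
            (verts[:, 2] >= z_min) & (verts[:, 2] <= z_max))

verts = np.random.rand(1000, 3) * 10              # stand-in point cloud
kept = verts[crop_mask(verts, 2, 8, 2, 8, 2, 8)]  # points inside the box
```
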
94 | ### 6: Defining bounding sphere
95 |
96 | [Neuralangelo](https://research.nvidia.com/labs/dir/neuralangelo/) and prior
97 | works [1](https://arxiv.org/abs/2003.09852)[2](https://arxiv.org/abs/2106.10689)[3](https://arxiv.org/abs/2106.12052)
98 | on neural surface reconstruction often assume a spherical region of interest. This addon generates a bounding sphere
99 | and exports the scene parameters (intrinsics, poses, bounding sphere) following
100 | the [Instant NGP](https://github.com/NVlabs/instant-ngp) format.
101 |
102 | 
103 |
--------------------------------------------------------------------------------
/assets/0_teaser.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/assets/0_teaser.gif
--------------------------------------------------------------------------------
/assets/1_installation_addon.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/assets/1_installation_addon.gif
--------------------------------------------------------------------------------
/assets/1_installation_code.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/assets/1_installation_code.gif
--------------------------------------------------------------------------------
/assets/2_locating_panel.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/assets/2_locating_panel.gif
--------------------------------------------------------------------------------
/assets/3_load_data.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/assets/3_load_data.gif
--------------------------------------------------------------------------------
/assets/4_inspect_data.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/assets/4_inspect_data.gif
--------------------------------------------------------------------------------
/assets/5_crop_point_cloud.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/assets/5_crop_point_cloud.gif
--------------------------------------------------------------------------------
/assets/6_generate_bounding_sphere.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/assets/6_generate_bounding_sphere.gif
--------------------------------------------------------------------------------
/neuralangelo_addon.py:
--------------------------------------------------------------------------------
1 | import collections
2 | import json
3 | import os
4 | import shutil
5 | import struct
6 |
7 | import bmesh
8 | import bpy
9 | import math
10 | import numpy as np
11 | from bpy.props import (StringProperty,
12 | BoolProperty,
13 | FloatProperty,
14 | FloatVectorProperty,
15 | PointerProperty,
16 | )
17 | from bpy.types import (Operator,
18 | PropertyGroup,
19 | )
20 | from mathutils import Matrix
21 | from typing import Union
22 |
23 | # ------------------------------------------------------------------------
24 | # COLMAP code: https://github.com/colmap/colmap/blob/dev/scripts/python/read_write_model.py
25 | # ------------------------------------------------------------------------
26 |
27 |
28 | CameraModel = collections.namedtuple(
29 | "CameraModel", ["model_id", "model_name", "num_params"])
30 | Camera = collections.namedtuple(
31 | "Camera", ["id", "model", "width", "height", "params"])
32 | BaseImage = collections.namedtuple(
33 | "Image", ["id", "qvec", "tvec", "camera_id", "name", "xys", "point3D_ids"])
34 | Point3D = collections.namedtuple(
35 | "Point3D", ["id", "xyz", "rgb", "error", "image_ids", "point2D_idxs"])
36 |
37 |
38 | class Image(BaseImage):
39 | def qvec2rotmat(self):
40 | return qvec2rotmat(self.qvec)
41 |
42 |
43 | CAMERA_MODELS = {
44 | CameraModel(model_id=0, model_name="SIMPLE_PINHOLE", num_params=3),
45 | CameraModel(model_id=1, model_name="PINHOLE", num_params=4),
46 | CameraModel(model_id=2, model_name="SIMPLE_RADIAL", num_params=4),
47 | CameraModel(model_id=3, model_name="RADIAL", num_params=5),
48 | CameraModel(model_id=4, model_name="OPENCV", num_params=8),
49 | CameraModel(model_id=5, model_name="OPENCV_FISHEYE", num_params=8),
50 | CameraModel(model_id=6, model_name="FULL_OPENCV", num_params=12),
51 | CameraModel(model_id=7, model_name="FOV", num_params=5),
52 | CameraModel(model_id=8, model_name="SIMPLE_RADIAL_FISHEYE", num_params=4),
53 | CameraModel(model_id=9, model_name="RADIAL_FISHEYE", num_params=5),
54 | CameraModel(model_id=10, model_name="THIN_PRISM_FISHEYE", num_params=12)
55 | }
56 | CAMERA_MODEL_IDS = dict([(camera_model.model_id, camera_model)
57 | for camera_model in CAMERA_MODELS])
58 | CAMERA_MODEL_NAMES = dict([(camera_model.model_name, camera_model)
59 | for camera_model in CAMERA_MODELS])
60 |
61 |
62 | def read_next_bytes(fid, num_bytes, format_char_sequence, endian_character="<"):
63 | """Read and unpack the next bytes from a binary file.
64 | :param fid: Open binary file object to read from.
65 | :param num_bytes: Sum of combination of {2, 4, 8}, e.g. 2, 6, 16, 30, etc.
66 | :param format_char_sequence: List of {c, e, f, d, h, H, i, I, l, L, q, Q}.
67 | :param endian_character: Any of {@, =, <, >, !}
68 | :return: Tuple of read and unpacked values.
69 | """
70 | data = fid.read(num_bytes)
71 | return struct.unpack(endian_character + format_char_sequence, data)
72 |
73 |
74 | def read_cameras_text(path):
75 | """
76 | see: src/base/reconstruction.cc
77 | void Reconstruction::WriteCamerasText(const std::string& path)
78 | void Reconstruction::ReadCamerasText(const std::string& path)
79 | """
80 | cameras = {}
81 | with open(path, "r") as fid:
82 | while True:
83 | line = fid.readline()
84 | if not line:
85 | break
86 | line = line.strip()
87 | if len(line) > 0 and line[0] != "#":
88 | elems = line.split()
89 | camera_id = int(elems[0])
90 | model = elems[1]
91 | width = int(elems[2])
92 | height = int(elems[3])
93 | params = np.array(tuple(map(float, elems[4:])))
94 | cameras[camera_id] = Camera(id=camera_id, model=model,
95 | width=width, height=height,
96 | params=params)
97 | return cameras
98 |
99 |
100 | def read_cameras_binary(path_to_model_file):
101 | """
102 | see: src/base/reconstruction.cc
103 | void Reconstruction::WriteCamerasBinary(const std::string& path)
104 | void Reconstruction::ReadCamerasBinary(const std::string& path)
105 | """
106 | cameras = {}
107 | with open(path_to_model_file, "rb") as fid:
108 | num_cameras = read_next_bytes(fid, 8, "Q")[0]
109 | for _ in range(num_cameras):
110 | camera_properties = read_next_bytes(
111 | fid, num_bytes=24, format_char_sequence="iiQQ")
112 | camera_id = camera_properties[0]
113 | model_id = camera_properties[1]
114 | model_name = CAMERA_MODEL_IDS[camera_properties[1]].model_name
115 | width = camera_properties[2]
116 | height = camera_properties[3]
117 | num_params = CAMERA_MODEL_IDS[model_id].num_params
118 | params = read_next_bytes(fid, num_bytes=8 * num_params,
119 | format_char_sequence="d" * num_params)
120 | cameras[camera_id] = Camera(id=camera_id,
121 | model=model_name,
122 | width=width,
123 | height=height,
124 | params=np.array(params))
125 | assert len(cameras) == num_cameras
126 | return cameras
127 |
128 |
129 | def read_images_text(path):
130 | """
131 | see: src/base/reconstruction.cc
132 | void Reconstruction::ReadImagesText(const std::string& path)
133 | void Reconstruction::WriteImagesText(const std::string& path)
134 | """
135 | images = {}
136 | with open(path, "r") as fid:
137 | while True:
138 | line = fid.readline()
139 | if not line:
140 | break
141 | line = line.strip()
142 | if len(line) > 0 and line[0] != "#":
143 | elems = line.split()
144 | image_id = int(elems[0])
145 | qvec = np.array(tuple(map(float, elems[1:5])))
146 | tvec = np.array(tuple(map(float, elems[5:8])))
147 | camera_id = int(elems[8])
148 | image_name = elems[9]
149 | elems = fid.readline().split()
150 | xys = np.column_stack([tuple(map(float, elems[0::3])),
151 | tuple(map(float, elems[1::3]))])
152 | point3D_ids = np.array(tuple(map(int, elems[2::3])))
153 | images[image_id] = Image(
154 | id=image_id, qvec=qvec, tvec=tvec,
155 | camera_id=camera_id, name=image_name,
156 | xys=xys, point3D_ids=point3D_ids)
157 | return images
158 |
159 |
160 | def read_images_binary(path_to_model_file):
161 | """
162 | see: src/base/reconstruction.cc
163 | void Reconstruction::ReadImagesBinary(const std::string& path)
164 | void Reconstruction::WriteImagesBinary(const std::string& path)
165 | """
166 | images = {}
167 | with open(path_to_model_file, "rb") as fid:
168 | num_reg_images = read_next_bytes(fid, 8, "Q")[0]
169 | for _ in range(num_reg_images):
170 | binary_image_properties = read_next_bytes(
171 | fid, num_bytes=64, format_char_sequence="idddddddi")
172 | image_id = binary_image_properties[0]
173 | qvec = np.array(binary_image_properties[1:5])
174 | tvec = np.array(binary_image_properties[5:8])
175 | camera_id = binary_image_properties[8]
176 | image_name = ""
177 | current_char = read_next_bytes(fid, 1, "c")[0]
178 | while current_char != b"\x00": # look for the ASCII 0 entry
179 | image_name += current_char.decode("utf-8")
180 | current_char = read_next_bytes(fid, 1, "c")[0]
181 | num_points2D = read_next_bytes(fid, num_bytes=8,
182 | format_char_sequence="Q")[0]
183 | x_y_id_s = read_next_bytes(fid, num_bytes=24 * num_points2D,
184 | format_char_sequence="ddq" * num_points2D)
185 | xys = np.column_stack([tuple(map(float, x_y_id_s[0::3])),
186 | tuple(map(float, x_y_id_s[1::3]))])
187 | point3D_ids = np.array(tuple(map(int, x_y_id_s[2::3])))
188 | images[image_id] = Image(
189 | id=image_id, qvec=qvec, tvec=tvec,
190 | camera_id=camera_id, name=image_name,
191 | xys=xys, point3D_ids=point3D_ids)
192 | return images
193 |
194 |
195 | def read_points3D_text(path):
196 | """
197 | see: src/base/reconstruction.cc
198 | void Reconstruction::ReadPoints3DText(const std::string& path)
199 | void Reconstruction::WritePoints3DText(const std::string& path)
200 | """
201 | points3D = {}
202 | with open(path, "r") as fid:
203 | while True:
204 | line = fid.readline()
205 | if not line:
206 | break
207 | line = line.strip()
208 | if len(line) > 0 and line[0] != "#":
209 | elems = line.split()
210 | point3D_id = int(elems[0])
211 | xyz = np.array(tuple(map(float, elems[1:4])))
212 | rgb = np.array(tuple(map(int, elems[4:7])))
213 | error = float(elems[7])
214 | image_ids = np.array(tuple(map(int, elems[8::2])))
215 | point2D_idxs = np.array(tuple(map(int, elems[9::2])))
216 | points3D[point3D_id] = Point3D(id=point3D_id, xyz=xyz, rgb=rgb,
217 | error=error, image_ids=image_ids,
218 | point2D_idxs=point2D_idxs)
219 | return points3D
220 |
221 |
222 | def read_points3D_binary(path_to_model_file):
223 | """
224 | see: src/base/reconstruction.cc
225 | void Reconstruction::ReadPoints3DBinary(const std::string& path)
226 | void Reconstruction::WritePoints3DBinary(const std::string& path)
227 | """
228 | points3D = {}
229 | with open(path_to_model_file, "rb") as fid:
230 | num_points = read_next_bytes(fid, 8, "Q")[0]
231 | for _ in range(num_points):
232 | binary_point_line_properties = read_next_bytes(
233 | fid, num_bytes=43, format_char_sequence="QdddBBBd")
234 | point3D_id = binary_point_line_properties[0]
235 | xyz = np.array(binary_point_line_properties[1:4])
236 | rgb = np.array(binary_point_line_properties[4:7])
237 | error = np.array(binary_point_line_properties[7])
238 | track_length = read_next_bytes(
239 | fid, num_bytes=8, format_char_sequence="Q")[0]
240 | track_elems = read_next_bytes(
241 | fid, num_bytes=8 * track_length,
242 | format_char_sequence="ii" * track_length)
243 | image_ids = np.array(tuple(map(int, track_elems[0::2])))
244 | point2D_idxs = np.array(tuple(map(int, track_elems[1::2])))
245 | points3D[point3D_id] = Point3D(
246 | id=point3D_id, xyz=xyz, rgb=rgb,
247 | error=error, image_ids=image_ids,
248 | point2D_idxs=point2D_idxs)
249 | return points3D
250 |
251 |
252 | def detect_model_format(path, ext):
253 | if os.path.isfile(os.path.join(path, "cameras" + ext)) and \
254 | os.path.isfile(os.path.join(path, "images" + ext)) and \
255 | os.path.isfile(os.path.join(path, "points3D" + ext)):
256 | print("Detected model format: '" + ext + "'")
257 | return True
258 |
259 | return False
260 |
261 |
262 | def read_model(path, ext=""):
263 | # try to detect the extension automatically
264 | if ext == "":
265 | if detect_model_format(path, ".bin"):
266 | ext = ".bin"
267 | elif detect_model_format(path, ".txt"):
268 | ext = ".txt"
269 | else:
270 | print("Provide model format: '.bin' or '.txt'")
271 | return
272 |
273 | if ext == ".txt":
274 | cameras = read_cameras_text(os.path.join(path, "cameras" + ext))
275 | images = read_images_text(os.path.join(path, "images" + ext))
276 | points3D = read_points3D_text(os.path.join(path, "points3D" + ext))
277 | else:
278 | cameras = read_cameras_binary(os.path.join(path, "cameras" + ext))
279 | images = read_images_binary(os.path.join(path, "images" + ext))
280 | points3D = read_points3D_binary(os.path.join(path, "points3D" + ext))
281 | return cameras, images, points3D
282 |
283 |
284 | def qvec2rotmat(qvec):
285 | return np.array([
286 | [1 - 2 * qvec[2] ** 2 - 2 * qvec[3] ** 2,
287 | 2 * qvec[1] * qvec[2] - 2 * qvec[0] * qvec[3],
288 | 2 * qvec[3] * qvec[1] + 2 * qvec[0] * qvec[2]],
289 | [2 * qvec[1] * qvec[2] + 2 * qvec[0] * qvec[3],
290 | 1 - 2 * qvec[1] ** 2 - 2 * qvec[3] ** 2,
291 | 2 * qvec[2] * qvec[3] - 2 * qvec[0] * qvec[1]],
292 | [2 * qvec[3] * qvec[1] - 2 * qvec[0] * qvec[2],
293 | 2 * qvec[2] * qvec[3] + 2 * qvec[0] * qvec[1],
294 | 1 - 2 * qvec[1] ** 2 - 2 * qvec[2] ** 2]])
295 |
296 |
297 | def rotmat2qvec(R):
298 | Rxx, Ryx, Rzx, Rxy, Ryy, Rzy, Rxz, Ryz, Rzz = R.flat
299 | K = np.array([
300 | [Rxx - Ryy - Rzz, 0, 0, 0],
301 | [Ryx + Rxy, Ryy - Rxx - Rzz, 0, 0],
302 | [Rzx + Rxz, Rzy + Ryz, Rzz - Rxx - Ryy, 0],
303 | [Ryz - Rzy, Rzx - Rxz, Rxy - Ryx, Rxx + Ryy + Rzz]]) / 3.0
304 | eigvals, eigvecs = np.linalg.eigh(K)
305 | qvec = eigvecs[[3, 0, 1, 2], np.argmax(eigvals)]
306 | if qvec[0] < 0:
307 | qvec *= -1
308 | return qvec
309 |
310 |
311 | def convert_to_blender_coord(tvec_w2c, qvec_w2c):
312 | cv2blender = np.array([[1, 0, 0],
313 | [0, -1, 0],
314 | [0, 0, -1]])
315 | R = qvec2rotmat(qvec_w2c)
316 | tvec_blender = -np.dot(R.T, tvec_w2c)
317 | rotation = np.dot(R.T, cv2blender)
318 | qvec_blender = rotmat2qvec(rotation)
319 | return tvec_blender, qvec_blender
320 |
321 |
322 | # ------------------------------------------------------------------------
323 | # Borrowed from BlenderProc:
324 | # https://github.com/DLR-RM/BlenderProc
325 | # ------------------------------------------------------------------------
326 | def set_intrinsics_from_K_matrix(K: Union[np.ndarray, Matrix], image_width: int, image_height: int,
327 | clip_start: float = None, clip_end: float = None):
328 | """ Set the camera intrinsics via a K matrix.
329 | The K matrix should have the format:
330 | [[fx, 0, cx],
331 | [0, fy, cy],
332 | [0, 0, 1]]
333 | This method is based on https://blender.stackexchange.com/a/120063.
334 | :param K: The 3x3 K matrix.
335 | :param image_width: The image width in pixels.
336 | :param image_height: The image height in pixels.
337 | :param clip_start: Clipping start.
338 | :param clip_end: Clipping end.
339 | """
340 |
341 | K = Matrix(K)
342 |
343 | cam = bpy.context.scene.objects['Input Camera'].data
344 |
345 | if abs(K[0][1]) > 1e-7:
346 | raise ValueError(f"Skew is not supported by blender and therefore "
347 | f"not by BlenderProc, set this to zero: {K[0][1]} and recalibrate")
348 |
349 | fx, fy = K[0][0], K[1][1]
350 | cx, cy = K[0][2], K[1][2]
351 |
352 | # If fx!=fy change pixel aspect ratio
353 | pixel_aspect_x = pixel_aspect_y = 1
354 | if fx > fy:
355 | pixel_aspect_y = fx / fy
356 | elif fx < fy:
357 | pixel_aspect_x = fy / fx
358 |
359 | # Compute sensor size in mm and view in px
360 | pixel_aspect_ratio = pixel_aspect_y / pixel_aspect_x
361 | view_fac_in_px = get_view_fac_in_px(cam, pixel_aspect_x, pixel_aspect_y, image_width, image_height)
362 | sensor_size_in_mm = get_sensor_size(cam)
363 |
364 | # Convert focal length in px to focal length in mm
365 | f_in_mm = fx * sensor_size_in_mm / view_fac_in_px
366 |
367 | # Convert principal point in px to Blender's internal format
368 | shift_x = (cx - (image_width - 1) / 2) / -view_fac_in_px
369 | shift_y = (cy - (image_height - 1) / 2) / view_fac_in_px * pixel_aspect_ratio
370 |
371 | # Finally set all intrinsics
372 | set_intrinsics_from_blender_params(f_in_mm, image_width, image_height, clip_start, clip_end, pixel_aspect_x,
373 | pixel_aspect_y, shift_x, shift_y, "MILLIMETERS")
374 |
375 |
376 | def get_sensor_size(cam: bpy.types.Camera) -> float:
377 | """ Returns the sensor size in millimeters based on the configured sensor_fit.
378 | :param cam: The camera object.
379 | :return: The sensor size in millimeters.
380 | """
381 | if cam.sensor_fit == 'VERTICAL':
382 | sensor_size_in_mm = cam.sensor_height
383 | else:
384 | sensor_size_in_mm = cam.sensor_width
385 | return sensor_size_in_mm
386 |
387 |
388 | def get_view_fac_in_px(cam: bpy.types.Camera, pixel_aspect_x: float, pixel_aspect_y: float,
389 | resolution_x_in_px: int, resolution_y_in_px: int) -> int:
390 | """ Returns the camera view in pixels.
391 | :param cam: The camera object.
392 | :param pixel_aspect_x: The pixel aspect ratio along x.
393 | :param pixel_aspect_y: The pixel aspect ratio along y.
394 | :param resolution_x_in_px: The image width in pixels.
395 | :param resolution_y_in_px: The image height in pixels.
396 | :return: The camera view in pixels.
397 | """
398 | # Determine the sensor fit mode to use
399 | if cam.sensor_fit == 'AUTO':
400 | if pixel_aspect_x * resolution_x_in_px >= pixel_aspect_y * resolution_y_in_px:
401 | sensor_fit = 'HORIZONTAL'
402 | else:
403 | sensor_fit = 'VERTICAL'
404 | else:
405 | sensor_fit = cam.sensor_fit
406 |
407 | # Based on the sensor fit mode, determine the view in pixels
408 | pixel_aspect_ratio = pixel_aspect_y / pixel_aspect_x
409 | if sensor_fit == 'HORIZONTAL':
410 | view_fac_in_px = resolution_x_in_px
411 | else:
412 | view_fac_in_px = pixel_aspect_ratio * resolution_y_in_px
413 |
414 | return view_fac_in_px
415 |
416 |
417 | def set_intrinsics_from_blender_params(lens: float = None, image_width: int = None, image_height: int = None,
418 | clip_start: float = None, clip_end: float = None,
419 | pixel_aspect_x: float = None, pixel_aspect_y: float = None, shift_x: int = None,
420 | shift_y: int = None, lens_unit: str = None):
421 | """ Sets the camera intrinsics using blenders represenation.
422 | :param lens: Either the focal length in millimeters or the FOV in radians, depending on the given lens_unit.
423 | :param image_width: The image width in pixels.
424 | :param image_height: The image height in pixels.
425 | :param clip_start: Clipping start.
426 | :param clip_end: Clipping end.
427 | :param pixel_aspect_x: The pixel aspect ratio along x.
428 | :param pixel_aspect_y: The pixel aspect ratio along y.
429 | :param shift_x: The shift in x direction.
430 | :param shift_y: The shift in y direction.
431 | :param lens_unit: Either FOV or MILLIMETERS depending on whether the lens is defined as focal length in
432 | millimeters or as FOV in radians.
433 | """
434 |
435 | cam = bpy.context.scene.objects['Input Camera'].data
436 |
437 | if lens_unit is not None:
438 | cam.lens_unit = lens_unit
439 |
440 | if lens is not None:
441 | # Set focal length
442 | if cam.lens_unit == 'MILLIMETERS':
443 | if lens < 1:
444 | raise Exception("The focal length is smaller than 1mm which is not allowed in blender: " + str(lens))
445 | cam.lens = lens
446 | elif cam.lens_unit == "FOV":
447 | cam.angle = lens
448 | else:
449 | raise Exception("No such lens unit: " + lens_unit)
450 |
451 | # Set resolution
452 | if image_width is not None:
453 | bpy.context.scene.render.resolution_x = image_width
454 | if image_height is not None:
455 | bpy.context.scene.render.resolution_y = image_height
456 |
457 | # Set clipping
458 | if clip_start is not None:
459 | cam.clip_start = clip_start
460 | if clip_end is not None:
461 | cam.clip_end = clip_end
462 |
463 | # Set aspect ratio
464 | if pixel_aspect_x is not None:
465 | bpy.context.scene.render.pixel_aspect_x = pixel_aspect_x
466 | if pixel_aspect_y is not None:
467 | bpy.context.scene.render.pixel_aspect_y = pixel_aspect_y
468 |
469 | # Set shift
470 | if shift_x is not None:
471 | cam.shift_x = shift_x
472 | if shift_y is not None:
473 | cam.shift_y = shift_y
474 |
475 |
476 | # ------------------------------------------------------------------------
477 | # AddOn code:
478 | # useful tutorial: https://blender.stackexchange.com/questions/57306/how-to-create-a-custom-ui
479 | # ------------------------------------------------------------------------
480 |
481 | # bl_info
482 | bl_info = {
483 | "name": "BlenderNeuralangelo",
484 | "version": (1, 0),
485 | "blender": (3, 3, 1),
486 | "location": "PROPERTIES",
487 | "warning": "", # used for warning icon and text in addons panel
488 | "support": "COMMUNITY",
489 | "category": "Interface"
490 | }
491 |
492 | # global variables for easier access
493 | colmap_data = None
494 | old_box_offset = [0, 0, 0, 0, 0, 0]
495 | view_port = None
496 | point_cloud_vertices = None
497 | select_point_index = []
498 | radius = 0
499 | center = (0, 0, 0)
500 | bounding_box = []
501 |
502 |
503 | # ------------------------------------------------------------------------
504 | # Utility scripts
505 | # ------------------------------------------------------------------------
506 |
507 | def display_pointcloud(points3D):
508 | '''
509 | load and display point cloud
510 | borrowed from https://github.com/TombstoneTumbleweedArt/import-ply-as-verts
511 | '''
512 |
513 | xyzs = np.stack([point.xyz for point in points3D.values()])
514 | rgbs = np.stack([point.rgb for point in points3D.values()]) # / 255.0
515 |
516 | # Copy the positions
517 | ply_name = 'Point Cloud'
518 | mesh = bpy.data.meshes.new(name=ply_name)
519 | mesh.vertices.add(xyzs.shape[0])
520 | mesh.vertices.foreach_set("co", [a for v in xyzs for a in v])
521 | obj = bpy.data.objects.new(ply_name, mesh)
522 | bpy.context.scene.collection.objects.link(obj)
523 |
524 |
525 | def generate_cropping_planes():
526 | global point_cloud_vertices
527 |
528 | max_coordinate = np.max(point_cloud_vertices, axis=0)
529 | min_coordinate = np.min(point_cloud_vertices, axis=0)
530 |
531 | x_min = min_coordinate[0]
532 | x_max = max_coordinate[0]
533 | y_min = min_coordinate[1]
534 | y_max = max_coordinate[1]
535 | z_min = min_coordinate[2]
536 | z_max = max_coordinate[2]
537 |
538 | verts = [[x_max, y_max, z_min],
539 | [x_max, y_min, z_min],
540 | [x_min, y_min, z_min],
541 | [x_min, y_max, z_min],
542 | [x_max, y_max, z_max],
543 | [x_max, y_min, z_max],
544 | [x_min, y_min, z_max],
545 | [x_min, y_max, z_max]]
546 |
547 | faces = [[0, 1, 5, 4],
548 | [3, 2, 6, 7],
549 | [0, 3, 7, 4],
550 | [1, 2, 6, 5],
551 | [0, 1, 2, 3],
552 | [4, 5, 6, 7]]
553 |
554 | msh = bpy.data.meshes.new('Bounding Box')
555 | msh.from_pydata(verts, [], faces)
556 | obj = bpy.data.objects.new('Bounding Box', msh)
557 | bpy.context.scene.collection.objects.link(obj)
558 | bpy.context.scene.objects['Bounding Box'].hide_set(True)
559 |
560 | # Add plane text
561 | text_object_xmin = bpy.data.objects.new("x_min_label", bpy.data.curves.new(type="FONT", name="x_min"))
562 | text_object_xmin.data.body = "x min"
563 | text_object_xmin.data.size *= 2
564 | bpy.context.scene.collection.objects.link(text_object_xmin)
565 | bpy.context.scene.objects['x_min_label'].hide_set(True)
566 |
567 | text_object_xmax = bpy.data.objects.new("x_max_label", bpy.data.curves.new(type="FONT", name="x_max"))
568 | text_object_xmax.data.body = "x max"
569 | text_object_xmax.data.size *= 2
570 | bpy.context.scene.collection.objects.link(text_object_xmax)
571 | bpy.context.scene.objects['x_max_label'].hide_set(True)
572 |
573 | text_object_ymin = bpy.data.objects.new("y_min_label", bpy.data.curves.new(type="FONT", name="y_min"))
574 | text_object_ymin.data.body = "y min"
575 | text_object_ymin.data.size *= 2
576 | bpy.context.scene.collection.objects.link(text_object_ymin)
577 | bpy.context.scene.objects['y_min_label'].hide_set(True)
578 |
579 | text_object_ymax = bpy.data.objects.new("y_max_label", bpy.data.curves.new(type="FONT", name="y_max"))
580 | text_object_ymax.data.body = "y max"
581 | text_object_ymax.data.size *= 2
582 | bpy.context.scene.collection.objects.link(text_object_ymax)
583 | bpy.context.scene.objects['y_max_label'].hide_set(True)
584 |
585 | text_object_zmin = bpy.data.objects.new("z_min_label", bpy.data.curves.new(type="FONT", name="z_min"))
586 | text_object_zmin.data.body = "z min"
587 | text_object_zmin.data.size *= 2
588 | bpy.context.scene.collection.objects.link(text_object_zmin)
589 | bpy.context.scene.objects['z_min_label'].hide_set(True)
590 |
591 | text_object_zmax = bpy.data.objects.new("z_max_label", bpy.data.curves.new(type="FONT", name="z_max"))
592 | text_object_zmax.data.body = "z max"
593 | text_object_zmax.data.size *= 2
594 | bpy.context.scene.collection.objects.link(text_object_zmax)
595 | bpy.context.scene.objects['z_max_label'].hide_set(True)
596 |
597 | text_object_xmin.rotation_euler = (-math.radians(90), -math.radians(90), math.radians(90))
598 | text_object_xmax.rotation_euler = (-math.radians(90), -math.radians(90), -math.radians(90))
599 |
600 | text_object_ymin.rotation_euler = (-math.radians(90), 0, math.radians(180))
601 | text_object_ymax.rotation_euler = (-math.radians(90), 0, 0)
602 |
603 | text_object_zmin.rotation_euler = (-math.radians(180), 0, 0)
604 | text_object_zmax.rotation_euler = (0, 0, 0)
605 |
606 | text_object_xmin.location = (x_min - 1, (y_max + y_min) / 2, (z_max + z_min) / 2)
607 | text_object_xmax.location = (x_max + 0.5, (y_max + y_min) / 2, (z_max + z_min) / 2)
608 |
609 | text_object_ymin.location = ((x_max + x_min) / 2, y_min - 1, (z_max + z_min) / 2)
610 | text_object_ymax.location = ((x_max + x_min) / 2, y_max + 0.5, (z_max + z_min) / 2)
611 |
612 | text_object_zmin.location = ((x_max + x_min) / 2, (y_max + y_min) / 2, z_min - 1)
613 | text_object_zmax.location = ((x_max + x_min) / 2, (y_max + y_min) / 2, z_max + 0.5)
614 |
615 | return
616 |
617 |
618 | def set_plane_location(x_max, x_min, y_max, y_min, z_max, z_min):
619 | crop_plane = bpy.data.objects['Bounding Box']
620 | text_object_xmin = bpy.data.objects['x_min_label']
621 | text_object_xmax = bpy.data.objects['x_max_label']
622 | text_object_ymin = bpy.data.objects['y_min_label']
623 | text_object_ymax = bpy.data.objects['y_max_label']
624 | text_object_zmin = bpy.data.objects['z_min_label']
625 | text_object_zmax = bpy.data.objects['z_max_label']
626 |
627 | crop_plane.data.vertices[0].co.x = x_max
628 | crop_plane.data.vertices[0].co.y = y_max
629 | crop_plane.data.vertices[0].co.z = z_min
630 |
631 | crop_plane.data.vertices[1].co.x = x_max
632 | crop_plane.data.vertices[1].co.y = y_min
633 | crop_plane.data.vertices[1].co.z = z_min
634 |
635 | crop_plane.data.vertices[2].co.x = x_min
636 | crop_plane.data.vertices[2].co.y = y_min
637 | crop_plane.data.vertices[2].co.z = z_min
638 |
639 | crop_plane.data.vertices[3].co.x = x_min
640 | crop_plane.data.vertices[3].co.y = y_max
641 | crop_plane.data.vertices[3].co.z = z_min
642 |
643 | crop_plane.data.vertices[4].co.x = x_max
644 | crop_plane.data.vertices[4].co.y = y_max
645 | crop_plane.data.vertices[4].co.z = z_max
646 |
647 | crop_plane.data.vertices[5].co.x = x_max
648 | crop_plane.data.vertices[5].co.y = y_min
649 | crop_plane.data.vertices[5].co.z = z_max
650 |
651 | crop_plane.data.vertices[6].co.x = x_min
652 | crop_plane.data.vertices[6].co.y = y_min
653 | crop_plane.data.vertices[6].co.z = z_max
654 |
655 | crop_plane.data.vertices[7].co.x = x_min
656 | crop_plane.data.vertices[7].co.y = y_max
657 | crop_plane.data.vertices[7].co.z = z_max
658 |
659 | # update text location and rotation
660 | text_width = bpy.context.scene.objects['x_min_label'].dimensions[0] / 2
661 |
662 | text_object_xmin.location = (x_min - 1, (y_max + y_min) / 2, (z_max + z_min) / 2 - text_width)
663 | text_object_xmax.location = (x_max + 0.5, (y_max + y_min) / 2, (z_max + z_min) / 2 - text_width)
664 |
665 | text_object_ymin.location = ((x_max + x_min) / 2 + text_width, y_min - 1, (z_max + z_min) / 2)
666 | text_object_ymax.location = ((x_max + x_min) / 2 - text_width, y_max + 0.5, (z_max + z_min) / 2)
667 |
668 | text_object_zmin.location = ((x_max + x_min) / 2 - text_width, (y_max + y_min) / 2, z_min - 1)
669 | text_object_zmax.location = ((x_max + x_min) / 2 - text_width, (y_max + y_min) / 2, z_max + 1)
670 |
671 |
672 | def update_cropping_plane(self, context):
673 | global old_box_offset
674 | global point_cloud_vertices
675 |
676 | if point_cloud_vertices is None: # stop if point cloud vertices are not yet loaded
677 | return
678 |
679 | max_coordinate = np.max(point_cloud_vertices, axis=0)
680 | min_coordinate = np.min(point_cloud_vertices, axis=0)
681 |
682 | x_min = min_coordinate[0]
683 | x_max = max_coordinate[0]
684 | y_min = min_coordinate[1]
685 | y_max = max_coordinate[1]
686 | z_min = min_coordinate[2]
687 | z_max = max_coordinate[2]
688 |
689 | slider = bpy.context.scene.my_tool.box_slider
690 |
691 | x_min_change = -slider[0]
692 | x_max_change = -slider[1]
693 | y_min_change = -slider[2]
694 | y_max_change = -slider[3]
695 | z_min_change = -slider[4]
696 | z_max_change = -slider[5]
697 |
698 | if -x_min_change != old_box_offset[0] and x_max + x_max_change < x_min - x_min_change:
699 | x_min_change = x_min - (x_max + x_max_change)
700 | slider[0] = old_box_offset[0]
701 |
702 | elif -x_max_change != old_box_offset[1] and x_max + x_max_change < x_min - x_min_change:
703 | x_max_change = x_min - x_min_change - x_max
704 | slider[1] = old_box_offset[1]
705 |
706 | elif -y_min_change != old_box_offset[2] and y_max + y_max_change < y_min - y_min_change:
707 | y_min_change = y_min - (y_max + y_max_change)
708 | slider[2] = old_box_offset[2]
709 |
710 | elif -y_max_change != old_box_offset[3] and y_max + y_max_change < y_min - y_min_change:
711 | y_max_change = y_min - y_min_change - y_max
712 | slider[3] = old_box_offset[3]
713 |
714 | elif -z_min_change != old_box_offset[4] and z_max + z_max_change < z_min - z_min_change:
715 | z_min_change = z_min - (z_max + z_max_change)
716 | slider[4] = old_box_offset[4]
717 |
718 | elif -z_max_change != old_box_offset[5] and z_max + z_max_change < z_min - z_min_change:
719 | z_max_change = z_min - z_min_change - z_max
720 | slider[5] = old_box_offset[5]
721 |
722 | old_box_offset = [n for n in slider]
723 |
724 | set_plane_location(x_max + x_max_change, x_min - x_min_change, y_max + y_max_change, y_min - y_min_change,
725 | z_max + z_max_change, z_min - z_min_change)
726 |
727 |
728 | def reset_my_slider_to_default():
729 | bpy.context.scene.my_tool.box_slider[0] = 0
730 | bpy.context.scene.my_tool.box_slider[1] = 0
731 | bpy.context.scene.my_tool.box_slider[2] = 0
732 | bpy.context.scene.my_tool.box_slider[3] = 0
733 | bpy.context.scene.my_tool.box_slider[4] = 0
734 | bpy.context.scene.my_tool.box_slider[5] = 0
735 |
736 |
737 | def delete_bounding_sphere():
738 | if 'Bounding Sphere' in bpy.data.objects:
739 | obj = bpy.context.scene.objects['Bounding Sphere']
740 | bpy.data.meshes.remove(obj.data, do_unlink=True)
741 |
742 |
743 | # TODO: can this be cleaned up??
744 | # TODO: when loading, not set to solid mode??
745 | def switch_viewport_to_solid(self, context):
746 | toggle = context.scene.my_tool.transparency_toggle
747 | for area in bpy.context.screen.areas:
748 | if area.type == 'VIEW_3D':
749 | for space in area.spaces:
750 | if space.type == 'VIEW_3D':
751 | space.shading.type = 'SOLID'
752 | space.shading.show_xray = toggle
753 |
754 |
755 | def enable_texture_mode():
756 | # change color mode
757 | for area in bpy.context.screen.areas:
758 | if area.type == 'VIEW_3D':
759 | for space in area.spaces:
760 | if space.type == 'VIEW_3D':
761 | space.shading.color_type = 'TEXTURE'
762 |
763 |
764 | def update_transparency(self, context):
765 | for area in bpy.context.screen.areas:
766 | if area.type == 'VIEW_3D':
767 | for space in area.spaces:
768 | if space.type == 'VIEW_3D':
769 | alpha = context.scene.my_tool.transparency_slider
770 | space.shading.xray_alpha = alpha
771 |
772 |
773 | def set_keyframe_camera(camera, qvec_w2c, tvec_w2c, idx, inter_frames=1):
774 | # Set rotation and translation of Camera in each frame
775 |
776 | tvec, qvec = convert_to_blender_coord(tvec_w2c, qvec_w2c)
777 |
778 | camera.rotation_quaternion = qvec
779 | camera.location = tvec
780 |
781 | camera.keyframe_insert(data_path='location', frame=idx * inter_frames)
782 | camera.keyframe_insert(data_path='rotation_quaternion', frame=idx * inter_frames)
783 |
784 |
785 | def set_keyframe_image(idx, plane, inter_frames=1):
786 | # Set vertices of image plane in each frame
787 | bpy.context.view_layer.update()
788 |
789 | # Set image texture of image plane in each frame
790 | material = plane.material_slots[0].material
791 | texture = material.node_tree.nodes.get("Image Texture")
792 | texture.image_user.frame_offset = idx - 1
793 | texture.image_user.keyframe_insert(data_path="frame_offset", frame=idx * inter_frames)
794 |
795 |
796 | def select_all_vert(obj_name):
797 | if obj_name in bpy.data.objects:
798 | obj = bpy.context.scene.objects[obj_name]
799 | bpy.context.view_layer.objects.active = obj
800 | bpy.ops.object.mode_set(mode='EDIT')
801 | bpy.ops.mesh.select_mode(type="VERT")
802 | bpy.ops.mesh.select_all(action='SELECT')
803 |
804 |
805 | def generate_camera_plane(camera, image_width, image_height, intrinsic_matrix):
806 | if 'Image Plane' in bpy.data.objects:
807 | obj = bpy.context.scene.objects['Image Plane']
808 | bpy.data.meshes.remove(obj.data, do_unlink=True)
809 |
810 | bpy.context.view_layer.update()
811 |
812 | # create a plane with 4 corners
813 | verts = camera.data.view_frame()
814 | faces = [[0, 1, 2, 3]]
815 | msh = bpy.data.meshes.new('Image Plane')
816 | msh.from_pydata(verts, [], faces)
817 | obj = bpy.data.objects.new('Image Plane', msh)
818 | bpy.context.scene.collection.objects.link(obj)
819 |
820 | plane = bpy.context.scene.objects['Image Plane']
821 | bpy.context.view_layer.objects.active = plane
822 | bpy.ops.object.mode_set(mode='EDIT')
823 | bpy.ops.mesh.select_all(action='SELECT')
824 | bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0)
825 |
826 | # change each uv vertex
827 | bm = bmesh.from_edit_mesh(plane.data)
828 | uv_layer = bm.loops.layers.uv.active
829 | for idx, v in enumerate(bm.verts):
830 | for l in v.link_loops:
831 | uv_data = l[uv_layer]
832 | if idx == 0:
833 | uv_data.uv[0] = 0.0
834 | uv_data.uv[1] = 0.0
835 | elif idx == 1:
836 | uv_data.uv[0] = 0.0
837 | uv_data.uv[1] = 1.0
838 | elif idx == 2:
839 | uv_data.uv[0] = 1.0
840 | uv_data.uv[1] = 1.0
841 | elif idx == 3:
842 | uv_data.uv[0] = 1.0
843 | uv_data.uv[1] = 0.0
844 | break
845 |
846 | bpy.ops.object.mode_set(mode='OBJECT')
847 |
848 | plane.parent = camera
849 | # set plane vertex location
850 | camera_vert_origin = camera.data.view_frame()
851 | corners = np.array([
852 | [0, 0, 1],
853 | [0, image_height, 1],
854 | [image_width, image_height, 1],
855 | [image_width, 0, 1]
856 | ])
857 | corners_3D = corners @ (np.linalg.inv(intrinsic_matrix).transpose(-1, -2))
858 | for vert, corner in zip(camera_vert_origin, corners_3D):
859 | vert[0] = corner[0]
860 | vert[1] = corner[1]
861 | vert[2] = -1.0 # blender coord
862 |
863 | for i in range(4):
864 | plane.data.vertices[i].co = camera_vert_origin[i]
865 |
866 |
867 | def copy_all_images(blender_img_path, image_folder_path, image_names, sort_image_id):
868 | file_names = []
869 | for idx, sorted_idx in enumerate(sort_image_id):
870 | shutil.copyfile(image_folder_path + image_names[sorted_idx],
871 | blender_img_path + '%05d.' % (idx + 1) + image_names[sorted_idx].split('.')[-1])
872 | file_names.append('%05d.' % (idx + 1) + image_names[sorted_idx].split('.')[-1])
873 | return file_names
874 |
875 |
876 | def generate_camera_plane_texture(image_sequence):
877 | plane = bpy.context.scene.objects['Image Plane']
878 | if 'Image Material' not in bpy.data.materials:
879 | material = bpy.data.materials.new(name="Image Material")
880 | else:
881 | material = bpy.data.materials["Image Material"]
882 |
883 | if len(plane.material_slots) == 0:
884 | plane.data.materials.append(material)
885 |
886 | material = plane.active_material
887 | material.use_nodes = True
888 |
889 | image_texture = material.node_tree.nodes.new(type='ShaderNodeTexImage')
890 | principled_bsdf = material.node_tree.nodes.get('Principled BSDF')
891 | material.node_tree.links.new(image_texture.outputs['Color'], principled_bsdf.inputs['Base Color'])
892 |
893 | image_texture.image = image_sequence
894 | image_texture.image_user.use_cyclic = True
895 | image_texture.image_user.use_auto_refresh = True
896 | image_texture.image_user.frame_duration = 1
897 | image_texture.image_user.frame_start = 0
898 | image_texture.image_user.frame_offset = 0
899 |
900 |
901 | def load_camera(colmap_data, context):
902 | if 'Input Camera' in bpy.data.cameras:
903 | camera = bpy.data.cameras['Input Camera']
904 | bpy.data.cameras.remove(camera)
905 |
906 | if 'Image Material' in bpy.data.materials:
907 | material = bpy.data.materials['Image Material']
908 | bpy.data.materials.remove(material, do_unlink=True)
909 |
910 | # Load colmap data
911 | intrinsic_param = np.array([camera.params for camera in colmap_data['cameras'].values()])
912 | intrinsic_matrix = np.array([[intrinsic_param[0][0], 0, intrinsic_param[0][2]],
913 | [0, intrinsic_param[0][1], intrinsic_param[0][3]],
914 | [0, 0, 1]]) # TODO: only supports single camera for now
915 |
916 | image_width = np.array([camera.width for camera in colmap_data['cameras'].values()])
917 | image_height = np.array([camera.height for camera in colmap_data['cameras'].values()])
918 | image_quaternion = np.stack([img.qvec for img in colmap_data['images'].values()])
919 | image_translation = np.stack([img.tvec for img in colmap_data['images'].values()])
920 | camera_id = np.stack([img.camera_id for img in colmap_data['images'].values()]) - 1 # make it zero-indexed
921 | image_names = np.stack([img.name for img in colmap_data['images'].values()])
922 | num_image = image_names.shape[0]
923 |
924 | # set start and end frame
925 | context.scene.frame_start = 1
926 | context.scene.frame_end = num_image
927 |
928 | # Load image file
929 | sort_image_id = np.argsort(image_names)
930 | image_folder_path = bpy.path.abspath(bpy.context.scene.my_tool.colmap_path + 'images/')
931 |
932 | ## make a copy of the images to comply with the continuous-numbering requirement of Blender image sequences
933 | blender_img_path = bpy.context.scene.my_tool.colmap_path + 'blender_images/'
934 | if os.path.isdir(blender_img_path):
935 | if os.listdir(blender_img_path):
936 | blender_file_names = sorted(os.listdir(blender_img_path))
937 | else:
938 | blender_file_names = copy_all_images(blender_img_path, image_folder_path, image_names, sort_image_id)
939 | else:
940 | os.mkdir(blender_img_path)
941 | blender_file_names = copy_all_images(blender_img_path, image_folder_path, image_names, sort_image_id)
942 |
943 | blender_file_names_formatted = [{"name": file_name} for file_name in blender_file_names]
944 | bpy.ops.image.open(filepath=blender_img_path, directory=blender_img_path, files=blender_file_names_formatted,
945 | relative_path=True, show_multiview=False)
946 |
947 | ## sequence named after the first image filename
948 | image_sequence = bpy.data.images[blender_file_names_formatted[0]['name']]
949 | image_sequence.source = 'SEQUENCE'
950 |
951 | # Camera initialization
952 | camera_data = bpy.data.cameras.new(name="Input Camera")
953 | camera_object = bpy.data.objects.new(name="Input Camera", object_data=camera_data)
954 | bpy.context.scene.collection.objects.link(camera_object)
955 | bpy.data.objects['Input Camera'].rotation_mode = 'QUATERNION'
956 |
957 | set_intrinsics_from_K_matrix(intrinsic_matrix, int(image_width[0]),
958 | int(image_height[0])) # set intrinsic matrix
959 | camera = bpy.context.scene.objects['Input Camera']
960 |
961 | # Image Plane Setting
962 | generate_camera_plane(camera, int(image_width[0]), int(image_height[0]), intrinsic_matrix) # create plane
963 | generate_camera_plane_texture(image_sequence)
964 |
965 | # Setting Camera & Image Plane frame data
966 | plane = bpy.context.scene.objects['Image Plane']
967 | for idx, (i_id, c_id) in enumerate(zip(sort_image_id, camera_id)):
968 | frame_id = idx + 1 # one-indexed
969 | set_keyframe_camera(camera, image_quaternion[i_id], image_translation[i_id], frame_id)
970 | set_keyframe_image(frame_id, plane)
971 |
972 | # enable texture mode to visualize images
973 | enable_texture_mode()
974 |
975 | # keep point cloud highlighted
976 | select_all_vert('Point Cloud')
977 |
978 | return
979 |
980 |
981 | def update_depth(self, context):
982 | depth_coef = bpy.context.scene.my_tool.imagedepth_slider
983 | scale_coef = 1 + depth_coef
984 | camera = bpy.context.scene.objects['Input Camera']
985 | camera.scale = (scale_coef, scale_coef, scale_coef)
986 |
987 |
988 | # ------------------------------------------------------------------------
989 | # Scene Properties
990 | # ------------------------------------------------------------------------
991 |
992 | class MyProperties(PropertyGroup):
993 | '''
994 | Properties shown in the panel: the COLMAP path, the cropping-plane sliders, and the display toggles.
995 | '''
996 | colmap_path: StringProperty(
997 | name="Directory",
998 | description="Choose a directory:",
999 | default="",
1000 | maxlen=1024,
1001 | subtype='DIR_PATH'
1002 | )
1003 | box_slider: FloatVectorProperty(
1004 | name="Plane offset",
1005 | subtype='TRANSLATION',
1006 | description="X_min, X_max, Y_min, Y_max, Z_min, Z_max",
1007 | size=6,
1008 | min=0,
1009 | max=50,
1010 | default=(0, 0, 0, 0, 0, 0),
1011 | update=update_cropping_plane
1012 | )
1013 | transparency_slider: FloatProperty(
1014 | name="Transparency",
1015 | description="Transparency",
1016 | min=0,
1017 | max=1,
1018 | default=0.1,
1019 | update=update_transparency
1020 | )
1021 | transparency_toggle: BoolProperty(
1022 | name="",
1023 | description="Toggle transparency",
1024 | default=False,
1025 | update=switch_viewport_to_solid
1026 | )
1027 | imagedepth_slider: FloatProperty(
1028 | name="Image Depth",
1029 | description="Depth",
1030 | min=0,
1031 | max=15,
1032 | default=0,
1033 | update=update_depth
1034 | )
1035 |
1036 |
1037 | # ------------------------------------------------------------------------
1038 | # Operators, i.e, buttons + callback
1039 | # ------------------------------------------------------------------------
1040 |
1041 |
1042 | class LoadCOLMAP(Operator):
1043 | '''
1044 | Load COLMAP data from the given directory, set up the bounding box, set the camera parameters, and load the images
1045 | '''
1046 | bl_label = "Load COLMAP Data"
1047 | bl_idname = "addon.load_colmap"
1048 |
1049 | @classmethod
1050 | def poll(cls, context):
1051 | return context.scene.my_tool.colmap_path != ''
1052 |
1053 | def execute(self, context):
1054 | scene = context.scene
1055 | mytool = scene.my_tool
1056 |
1057 | # remove all objects
1058 | for mesh in bpy.data.meshes:
1059 | bpy.data.meshes.remove(mesh, do_unlink=True)
1060 | for camera in bpy.data.cameras:
1061 | bpy.data.cameras.remove(camera)
1062 | for light in bpy.data.lights:
1063 | bpy.data.lights.remove(light)
1064 | for material in bpy.data.materials:
1065 | bpy.data.materials.remove(material, do_unlink=True)
1066 | for image in bpy.data.images:
1067 | bpy.data.images.remove(image, do_unlink=True)
1068 | for curve in bpy.data.curves:
1069 | bpy.data.curves.remove(curve, do_unlink=True)
1070 |
1071 | # load data
1072 | cameras, images, points3D = read_model(bpy.path.abspath(mytool.colmap_path + 'sparse/'), ext='.bin')
1073 | display_pointcloud(points3D)
1074 |
1075 | global colmap_data, point_cloud_vertices
1076 |
1077 | colmap_data = {}
1078 | colmap_data['cameras'] = cameras
1079 | colmap_data['images'] = images
1080 | colmap_data['points3D'] = points3D
1081 |
1082 | point_cloud_vertices = np.stack([point.xyz for point in points3D.values()])
1083 |
1084 | # generate bounding boxes for cropping
1085 | generate_cropping_planes()
1086 |
1087 | if os.path.isfile(bpy.path.abspath(mytool.colmap_path + 'transforms.json')):
1088 | # if previously cropped, load cropped results
1089 | with open(bpy.path.abspath(mytool.colmap_path + 'transforms.json'), 'r') as file:
1090 | transforms = json.load(file)
1091 |
1092 | max_coordinate = np.max(point_cloud_vertices, axis=0)
1093 | min_coordinate = np.min(point_cloud_vertices, axis=0)
1094 | if 'aabb_range' in transforms.keys():
1095 | bbox = transforms['aabb_range']
1096 |
1097 | slider = bpy.context.scene.my_tool.box_slider
1098 |
1099 | slider[0] = - min_coordinate[0] + bbox[0][0]
1100 | slider[1] = max_coordinate[0] - bbox[0][1]
1101 | slider[2] = - min_coordinate[1] + bbox[1][0]
1102 | slider[3] = max_coordinate[1] - bbox[1][1]
1103 | slider[4] = -min_coordinate[2] + bbox[2][0]
1104 | slider[5] = max_coordinate[2] - bbox[2][1]
1105 |
1106 | set_plane_location(bbox[0][0], bbox[0][1], bbox[1][0], bbox[1][1], bbox[2][0], bbox[2][1])
1107 | else:
1108 | reset_my_slider_to_default()
1109 | else:
1110 | reset_my_slider_to_default()
1111 |
1112 | # load camera info
1113 | load_camera(colmap_data, context)
1114 |
1115 | return {'FINISHED'}
1116 |
1117 |
1118 | class Crop(Operator):
1119 | '''
1120 | crop points outside the bounding box
1121 | note: to see the result of point cropping, follow these steps:
1122 | 1. click the "Crop Pointcloud" button
1123 | 2. enter Edit Mode
1124 | 3. hide the cropping plane
1125 | '''
1126 |
1127 | bl_label = "Crop Pointcloud"
1128 | bl_idname = "addon.crop"
1129 |
1130 | @classmethod
1131 | def poll(cls, context):
1132 | global point_cloud_vertices
1133 | return point_cloud_vertices is not None
1134 |
1135 | def execute(self, context):
1136 | if 'Point Cloud' in bpy.data.objects:
1137 | obj = bpy.context.scene.objects['Point Cloud']
1138 | bpy.context.view_layer.objects.active = obj
1139 | bpy.ops.object.mode_set(mode='OBJECT')
1140 |
1141 | global point_cloud_vertices
1142 | global select_point_index
1143 |
1144 | box_verts = np.array([v.co for v in bpy.data.objects['Bounding Box'].data.vertices])
1145 |
1146 | max_coordinate = np.max(box_verts, axis=0)
1147 | min_coordinate = np.min(box_verts, axis=0)
1148 |
1149 | x_min = min_coordinate[0]
1150 | x_max = max_coordinate[0]
1151 | y_min = min_coordinate[1]
1152 | y_max = max_coordinate[1]
1153 | z_min = min_coordinate[2]
1154 | z_max = max_coordinate[2]
1155 |
1156 | # initialization
1157 | mesh = bpy.data.objects['Point Cloud'].data
1158 | mesh.vertices.foreach_set("hide", [True] * len(mesh.vertices))
1159 | select_point_index = np.where((point_cloud_vertices[:, 0] >= x_min) &
1160 | (point_cloud_vertices[:, 0] <= x_max) &
1161 | (point_cloud_vertices[:, 1] >= y_min) &
1162 | (point_cloud_vertices[:, 1] <= y_max) &
1163 | (point_cloud_vertices[:, 2] >= z_min) &
1164 | (point_cloud_vertices[:, 2] <= z_max))
1165 |
1166 | for index in select_point_index[0]:
1167 | bpy.data.objects['Point Cloud'].data.vertices[index].hide = False
1168 |
1169 | select_all_vert('Point Cloud')
1170 |
1171 | return {'FINISHED'}
1172 |
1173 |
1174 | class BoundSphere(Operator):
1175 | '''
1176 | generate a bounding sphere enclosing the points kept after cropping
1177 | '''
1178 |
1179 | bl_label = "Create Bounding Sphere"
1180 | bl_idname = "addon.add_bound_sphere"
1181 |
1182 | @classmethod
1183 | def poll(cls, context):
1184 | global select_point_index
1185 | if select_point_index:
1186 | return True
1187 | else:
1188 | return False
1189 |
1190 | def execute(self, context):
1191 | global point_cloud_vertices
1192 | global select_point_index
1193 | global radius
1194 | global center
1195 | global bounding_box
1196 |
1197 | delete_bounding_sphere()
1198 |
1199 | unhide_verts = point_cloud_vertices[select_point_index]
1200 |
1201 | max_coordinate = np.max(unhide_verts, axis=0)
1202 | min_coordinate = np.min(unhide_verts, axis=0)
1203 |
1204 | x_min = min_coordinate[0]
1205 | x_max = max_coordinate[0]
1206 | y_min = min_coordinate[1]
1207 | y_max = max_coordinate[1]
1208 | z_min = min_coordinate[2]
1209 | z_max = max_coordinate[2]
1210 | bounding_box = [(x_min, x_max), (y_min, y_max), (z_min, z_max)]
1211 |
1212 | center_x = (x_min + x_max) / 2
1213 | center_y = (y_min + y_max) / 2
1214 | center_z = (z_min + z_max) / 2
1215 |
1216 | radius = np.max(np.sqrt((unhide_verts[:, 0] - center_x) ** 2 + (unhide_verts[:, 1] - center_y) ** 2 + (
1217 | unhide_verts[:, 2] - center_z) ** 2))
1218 | center = (center_x, center_y, center_z)
1219 |
1220 | num_segments = 128
1221 | sphere_verts = []
1222 | sphere_faces = []
1223 |
1224 | for i in range(num_segments):
1225 | theta1 = i * 2 * np.pi / num_segments
1226 | z = radius * np.sin(theta1)
1227 | xy = radius * np.cos(theta1)
1228 | for j in range(num_segments):
1229 | theta2 = j * 2 * np.pi / num_segments
1230 | x = xy * np.sin(theta2)
1231 | y = xy * np.cos(theta2)
1232 | sphere_verts.append([center[0] + x, center[1] + y, center[2] + z])
1233 |
1234 | for i in range(num_segments - 1):
1235 | for j in range(num_segments):
1236 | idx1 = i * num_segments + j
1237 | idx2 = (i + 1) * num_segments + j
1238 | idx3 = (i + 1) * num_segments + (j + 1) % num_segments
1239 | idx4 = i * num_segments + (j + 1) % num_segments
1240 | sphere_faces.append([idx1, idx2, idx3])
1241 | sphere_faces.append([idx1, idx3, idx4])
1242 |
1243 | sphere_mesh = bpy.data.meshes.new('Bounding Sphere')
1244 | sphere_mesh.from_pydata(sphere_verts, [], sphere_faces)
1245 | sphere_mesh.update()
1246 |
1247 | sphere_obj = bpy.data.objects.new("Bounding Sphere", sphere_mesh)
1248 | bpy.context.scene.collection.objects.link(sphere_obj)
1249 |
1250 | select_all_vert('Point Cloud')
1251 |
1252 | return {'FINISHED'}
1253 |
1254 |
1255 | class HideShowBox(Operator):
1256 | bl_label = "Hide/Show Bounding Box"
1257 | bl_idname = "addon.hide_show_box"
1258 |
1259 | @classmethod
1260 | def poll(cls, context):
1261 | return point_cloud_vertices is not None
1262 |
1263 | def execute(self, context):
1264 | status = bpy.context.scene.objects['Bounding Box'].hide_get()
1265 | bpy.context.scene.objects['Bounding Box'].hide_set(not status)
1266 | bpy.context.scene.objects['x_max_label'].hide_set(not status)
1267 | bpy.context.scene.objects['x_min_label'].hide_set(not status)
1268 | bpy.context.scene.objects['y_max_label'].hide_set(not status)
1269 | bpy.context.scene.objects['y_min_label'].hide_set(not status)
1270 | bpy.context.scene.objects['z_max_label'].hide_set(not status)
1271 | bpy.context.scene.objects['z_min_label'].hide_set(not status)
1272 | return {'FINISHED'}
1273 |
1274 |
1275 | class HideShowSphere(Operator):
1276 | bl_label = "Hide/Show Bounding Sphere"
1277 | bl_idname = "addon.hide_show_sphere"
1278 |
1279 | @classmethod
1280 | def poll(cls, context):
1281 | return 'Bounding Sphere' in context.scene.collection.objects
1282 |
1283 | def execute(self, context):
1284 | status = bpy.context.scene.objects['Bounding Sphere'].hide_get()
1285 | bpy.context.scene.objects['Bounding Sphere'].hide_set(not status)
1286 | return {'FINISHED'}
1287 |
1288 |
1289 | class HideShowCroppedPoints(Operator):
1290 | bl_label = "Hide/Show Cropped Points"
1291 | bl_idname = "addon.hide_show_cropped"
1292 |
1293 | @classmethod
1294 | def poll(cls, context):
1295 | global select_point_index
1296 | if select_point_index:
1297 | return True
1298 | else:
1299 | return False
1300 |
1301 |     def execute(self, context):
1302 |         if 'Point Cloud' in bpy.data.objects:
1303 |             obj = bpy.context.scene.objects['Point Cloud']
1304 |             bpy.context.view_layer.objects.active = obj
1305 |             if obj.mode == 'EDIT':  # cropped points are hidden in Edit Mode only; toggling the mode shows/hides them
1306 |                 bpy.ops.object.mode_set(mode='OBJECT')
1307 |             else:
1308 |                 bpy.ops.object.mode_set(mode='EDIT')
1309 |         return {'FINISHED'}
1310 |
1311 |
1312 | class ExportSceneParameters(Operator):
1313 | bl_label = "Export Scene Parameters"
1314 | bl_idname = "addon.export_scene_param"
1315 |
1316 | @classmethod
1317 | def poll(cls, context):
1318 | return 'Bounding Sphere' in context.scene.collection.objects
1319 |
1320 | def execute(self, context):
1321 | global radius, center, colmap_data, bounding_box
1322 | intrinsic_param = np.array([camera.params for camera in colmap_data['cameras'].values()])
1323 |         fl_x = intrinsic_param[0][0]  # TODO: only supports a single camera for now;
1324 |         fl_y = intrinsic_param[0][1]  # params assumed to be COLMAP PINHOLE [fx, fy, cx, cy]
1325 |         cx = intrinsic_param[0][2]
1326 |         cy = intrinsic_param[0][3]
1327 | image_width = np.array([camera.width for camera in colmap_data['cameras'].values()])
1328 | image_height = np.array([camera.height for camera in colmap_data['cameras'].values()])
1329 | w = image_width[0]
1330 | h = image_height[0]
1331 |
1332 |         angle_x = math.atan(w / (fl_x * 2)) * 2  # full horizontal field of view (pinhole model)
1333 |         angle_y = math.atan(h / (fl_y * 2)) * 2  # full vertical field of view
1334 |
1335 | out = {
1336 | "camera_angle_x": angle_x,
1337 | "camera_angle_y": angle_y,
1338 | "fl_x": fl_x,
1339 | "fl_y": fl_y,
1340 | "sk_x": 0.0, # TODO: check if colmap has skew
1341 | "sk_y": 0.0,
1342 | "k1": 0.0, # take undistorted images only
1343 | "k2": 0.0,
1344 | "k3": 0.0,
1345 | "k4": 0.0,
1346 | "p1": 0.0,
1347 | "p2": 0.0,
1348 | "is_fisheye": False, # TODO: not supporting fish eye camera
1349 | "cx": cx,
1350 | "cy": cy,
1351 | "w": int(w),
1352 | "h": int(h),
1353 |             "aabb_scale": np.exp2(np.rint(np.log2(radius))),  # radius rounded to the nearest power of two, for iNGP resolution computation
1354 | "aabb_range": bounding_box,
1355 | "sphere_center": center,
1356 | "sphere_radius": radius,
1357 | "frames": []
1358 | }
1359 |
1360 | flip_mat = np.array([
1361 | [1, 0, 0, 0],
1362 | [0, -1, 0, 0],
1363 | [0, 0, -1, 0],
1364 | [0, 0, 0, 1]
1365 | ])
1366 |
1367 | path = bpy.context.scene.my_tool.colmap_path
1368 |
1369 | # read poses
1370 |         for img in colmap_data['images'].values():
1371 |             rotation = qvec2rotmat(img.qvec)
1372 |             translation = img.tvec.reshape(3, 1)
1373 |             w2c = np.concatenate([rotation, translation], 1)
1374 |             w2c = np.concatenate([w2c, np.array([0, 0, 0, 1])[None]], 0)  # homogeneous 4x4 world-to-camera
1375 |             c2w = np.linalg.inv(w2c)  # invert to camera-to-world
1376 |             c2w = c2w @ flip_mat  # convert to GL convention used in iNGP
1377 |
1378 |             frame = {"file_path": 'images/' + img.name, "transform_matrix": c2w.tolist()}
1379 |             out["frames"].append(frame)
1381 |
1382 | file_path = bpy.path.abspath(bpy.context.scene.my_tool.colmap_path + 'transforms.json')
1383 | with open(file_path, "w") as outputfile:
1384 | json.dump(out, outputfile, indent=2)
1385 | return {'FINISHED'}
1386 |
1387 | def invoke(self, context, event):
1388 | wm = context.window_manager
1389 | return wm.invoke_props_dialog(self)
1390 |
1391 | def draw(self, context):
1392 | layout = self.layout
1393 | file_path = bpy.path.abspath(bpy.context.scene.my_tool.colmap_path + 'transforms.json')
1394 |         layout.row().label(text="Parameters will be exported to " + file_path)
1395 |
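# Note: COLMAP stores world-to-camera extrinsics (qvec, tvec), while
# transforms.json expects camera-to-world matrices in the OpenGL convention;
# hence the inversion followed by the y/z axis flip in the export above.
# A minimal sketch of the same conversion for a single pose (hypothetical
# `qvec` and `tvec` inputs):
#
#     w2c = np.eye(4)
#     w2c[:3, :3] = qvec2rotmat(qvec)
#     w2c[:3, 3] = tvec
#     c2w = np.linalg.inv(w2c) @ flip_mat  # flip_mat negates the y and z axes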
1396 |
1397 | class HideShowImagePlane(Operator):
1398 | bl_label = "Hide/Show Image Plane"
1399 | bl_idname = "addon.hide_show_cam_plane"
1400 |
1401 | @classmethod
1402 | def poll(cls, context):
1403 | return 'Image Plane' in context.scene.collection.objects and colmap_data is not None
1404 |
1405 | def execute(self, context):
1406 | status = bpy.context.scene.objects['Image Plane'].hide_get()
1407 | bpy.context.scene.objects['Image Plane'].hide_set(not status)
1408 | return {'FINISHED'}
1409 |
1410 |
1411 | class HighlightPointcloud(Operator):
1412 | bl_label = "Highlight Pointcloud"
1413 | bl_idname = "addon.highlight_pointcloud"
1414 |
1415 | @classmethod
1416 | def poll(cls, context):
1417 | # do not enable when point cloud is not loaded
1418 | return 'Point Cloud' in context.scene.collection.objects and colmap_data is not None
1419 |
1420 | def execute(self, context):
1421 | select_all_vert('Point Cloud')
1422 | return {'FINISHED'}
1423 |
1424 |
1425 | # ------------------------------------------------------------------------
1426 | # Panel
1427 | # ------------------------------------------------------------------------
1428 |
1429 | class NeuralangeloCustomPanel(bpy.types.Panel):
1430 | bl_category = "Neuralangelo"
1431 | bl_space_type = "VIEW_3D"
1432 | bl_region_type = "UI"
1433 |
1434 |
1435 | class MainPanel(NeuralangeloCustomPanel, bpy.types.Panel):
1436 | bl_idname = "BN_PT_main"
1437 | bl_label = "Neuralangelo Addon"
1438 |
1439 |     def draw(self, context):
1440 |         # parent panel draws nothing itself; the sub-panels below provide the UI
1441 |         pass
1443 |
1444 |
1445 | class LoadingPanel(NeuralangeloCustomPanel, bpy.types.Panel):
1446 | bl_parent_id = "BN_PT_main"
1447 | bl_idname = "BN_PT_loading"
1448 | bl_label = "Load Data"
1449 |
1450 | def draw(self, context):
1451 | scene = context.scene
1452 | layout = self.layout
1453 | mytool = scene.my_tool
1454 |
1455 | layout.prop(mytool, "colmap_path")
1456 | layout.operator("addon.load_colmap")
1457 | layout.separator()
1458 |
1459 |
1460 | class InspectionPanel(NeuralangeloCustomPanel, bpy.types.Panel):
1461 | bl_parent_id = "BN_PT_main"
1462 | bl_idname = "BN_PT_inspection"
1463 | bl_label = "Inspect COLMAP Results"
1464 |
1465 | def draw(self, context):
1466 | scene = context.scene
1467 | layout = self.layout
1468 | mytool = scene.my_tool
1469 |
1470 | # visualization
1471 | box = layout.box()
1472 | row = box.row()
1473 | row.alignment = 'CENTER'
1474 | row.label(text="Visualization")
1475 |
1476 | row = box.row(align=True)
1477 | row.prop(mytool, "transparency_toggle")
1478 | sub = row.row()
1479 | sub.prop(mytool, "transparency_slider", slider=True, text='Transparency of Objects')
1480 | sub.enabled = mytool.transparency_toggle
1481 |
1482 | box.row().operator("addon.hide_show_cam_plane")
1483 | box.row().operator("addon.hide_show_box")
1484 | box.row().operator("addon.highlight_pointcloud")
1485 | row = box.row()
1486 | row.alignment = 'CENTER'
1487 | row.label(text="Slide Image Along View")
1488 | box.row().prop(mytool, "imagedepth_slider", slider=True, text='Image plane depth')
1489 |
1490 |
1491 | class BoundingPanel(NeuralangeloCustomPanel, bpy.types.Panel):
1492 | bl_parent_id = "BN_PT_main"
1493 | bl_idname = "BN_PT_bounding"
1494 | bl_label = "Define Bounding Region"
1495 |
1496 | def draw(self, context):
1497 | scene = context.scene
1498 | layout = self.layout
1499 | mytool = scene.my_tool
1500 |
1501 | # bounding box
1502 | box = layout.box()
1503 | row = box.row()
1504 | row.alignment = 'CENTER'
1505 | row.label(text="Edit Bounding Box")
1506 |
1507 | x_row = box.row()
1508 | x_row.prop(mytool, "box_slider", index=0, slider=True, text='X min')
1509 | x_row.prop(mytool, "box_slider", index=1, slider=True, text='X max')
1510 |
1511 | y_row = box.row()
1512 | y_row.prop(mytool, "box_slider", index=2, slider=True, text='Y min')
1513 | y_row.prop(mytool, "box_slider", index=3, slider=True, text='Y max')
1514 |
1515 | z_row = box.row()
1516 | z_row.prop(mytool, "box_slider", index=4, slider=True, text='Z min')
1517 | z_row.prop(mytool, "box_slider", index=5, slider=True, text='Z max')
1518 |
1519 | box.separator()
1520 | row = box.row()
1521 | row.operator("addon.crop")
1522 | row.operator("addon.hide_show_cropped")
1523 |
1524 | layout.separator()
1525 |
1526 | # bounding sphere
1527 | box = layout.box()
1528 | row = box.row()
1529 | row.alignment = 'CENTER'
1530 | row.label(text="Create Bounding Sphere")
1531 | box.row().operator("addon.add_bound_sphere")
1532 | box.row().operator("addon.hide_show_sphere")
1533 | box.row().operator('addon.export_scene_param')
1534 |
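# Note: `bl_parent_id = "BN_PT_main"` nests each panel above as a collapsible
# sub-section of the main Neuralangelo panel in the 3D-viewport sidebar
# (bl_space_type VIEW_3D, bl_region_type UI, category "Neuralangelo").
# A minimal sketch of an additional sub-panel, should one be needed
# (hypothetical idname and label):
#
#     class ExtraPanel(NeuralangeloCustomPanel, bpy.types.Panel):
#         bl_parent_id = "BN_PT_main"
#         bl_idname = "BN_PT_extra"
#         bl_label = "Extra Tools"
#
#         def draw(self, context):
#             self.layout.label(text="...")
#
# Any new panel must also be added to the `classes` tuple below so that
# register() picks it up.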
1535 |
1536 | # ------------------------------------------------------------------------
1537 | # Registration
1538 | # ------------------------------------------------------------------------
1539 |
1540 | classes = (
1541 | MyProperties,
1542 | MainPanel,
1543 | LoadingPanel,
1544 | InspectionPanel,
1545 | BoundingPanel,
1546 | LoadCOLMAP,
1547 | Crop,
1548 | BoundSphere,
1549 | HideShowBox,
1550 | HideShowSphere,
1551 | HideShowCroppedPoints,
1552 | HideShowImagePlane,
1553 | ExportSceneParameters,
1554 | HighlightPointcloud
1555 | )
1556 |
1557 |
1558 | def register():
1559 | from bpy.utils import register_class
1560 | for cls in classes:
1561 | register_class(cls)
1562 |
1563 | bpy.types.Scene.my_tool = PointerProperty(type=MyProperties)
1564 |
1565 |
1566 | def unregister():
1567 | from bpy.utils import unregister_class
1568 | for cls in reversed(classes):
1569 | unregister_class(cls)
1570 | del bpy.types.Scene.my_tool
1571 |
1572 |
1573 | if __name__ == "__main__":
1574 | register()
1575 |
--------------------------------------------------------------------------------
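A quick way to sanity-check the exported `transforms.json` outside Blender is to reload it and verify that the camera-to-world matrices are well-formed rigid transforms. A minimal sketch, assuming the file was exported to the COLMAP work directory as above and NumPy is available:

```python
import json

import numpy as np

with open('transforms.json') as f:
    meta = json.load(f)

print(meta['w'], meta['h'], meta['sphere_radius'])
for frame in meta['frames']:
    c2w = np.array(frame['transform_matrix'])
    assert c2w.shape == (4, 4)
    R = c2w[:3, :3]
    # the rotation block of a rigid transform must be orthonormal
    assert np.allclose(R @ R.T, np.eye(3), atol=1e-5), frame['file_path']
```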
/start_blender_with_addon.py:
--------------------------------------------------------------------------------
1 | import bpy
2 | import os
3 |
4 | dir_path = os.path.dirname(os.path.realpath(__file__))
5 |
6 | # install the addon into Blender's user addon directory
7 | bpy.ops.preferences.addon_install(filepath=os.path.join(dir_path, "neuralangelo_addon.py"))
8 | # enable it for the current session
9 | bpy.ops.preferences.addon_enable(module='neuralangelo_addon')
--------------------------------------------------------------------------------
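Because this script installs and enables the addon on every launch, the setup does not persist after Blender quits. If persistence is desired, saving the user preferences after enabling should be enough; a minimal sketch of the extra line (appended to the script above):

```python
import bpy

# ...after addon_install / addon_enable as above:
bpy.ops.wm.save_userpref()  # keep the addon enabled across Blender sessions
```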
/toy_example/images/000001.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000001.jpg
--------------------------------------------------------------------------------
/toy_example/images/000002.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000002.jpg
--------------------------------------------------------------------------------
/toy_example/images/000003.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000003.jpg
--------------------------------------------------------------------------------
/toy_example/images/000004.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000004.jpg
--------------------------------------------------------------------------------
/toy_example/images/000005.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000005.jpg
--------------------------------------------------------------------------------
/toy_example/images/000006.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000006.jpg
--------------------------------------------------------------------------------
/toy_example/images/000007.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000007.jpg
--------------------------------------------------------------------------------
/toy_example/images/000008.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000008.jpg
--------------------------------------------------------------------------------
/toy_example/images/000009.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000009.jpg
--------------------------------------------------------------------------------
/toy_example/images/000010.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000010.jpg
--------------------------------------------------------------------------------
/toy_example/images/000011.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000011.jpg
--------------------------------------------------------------------------------
/toy_example/images/000012.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000012.jpg
--------------------------------------------------------------------------------
/toy_example/images/000013.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000013.jpg
--------------------------------------------------------------------------------
/toy_example/images/000020.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000020.jpg
--------------------------------------------------------------------------------
/toy_example/images/000021.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000021.jpg
--------------------------------------------------------------------------------
/toy_example/images/000023.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000023.jpg
--------------------------------------------------------------------------------
/toy_example/images/000024.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/images/000024.jpg
--------------------------------------------------------------------------------
/toy_example/run-colmap-geometric.sh:
--------------------------------------------------------------------------------
1 | # You must set $COLMAP_EXE_PATH to
2 | # the directory containing the COLMAP executables.
3 | $COLMAP_EXE_PATH/patch_match_stereo \
4 | --workspace_path . \
5 | --workspace_format COLMAP \
6 | --PatchMatchStereo.max_image_size 2000 \
7 | --PatchMatchStereo.geom_consistency true
8 | $COLMAP_EXE_PATH/stereo_fusion \
9 | --workspace_path . \
10 | --workspace_format COLMAP \
11 | --input_type geometric \
12 | --output_path ./fused.ply
13 | $COLMAP_EXE_PATH/poisson_mesher \
14 | --input_path ./fused.ply \
15 | --output_path ./meshed-poisson.ply
16 | $COLMAP_EXE_PATH/delaunay_mesher \
17 | --input_path . \
18 |     --input_type dense \
19 | --output_path ./meshed-delaunay.ply
20 |
--------------------------------------------------------------------------------
/toy_example/run-colmap-photometric.sh:
--------------------------------------------------------------------------------
1 | # You must set $COLMAP_EXE_PATH to
2 | # the directory containing the COLMAP executables.
3 | $COLMAP_EXE_PATH/patch_match_stereo \
4 | --workspace_path . \
5 | --workspace_format COLMAP \
6 | --PatchMatchStereo.max_image_size 2000 \
7 | --PatchMatchStereo.geom_consistency false
8 | $COLMAP_EXE_PATH/stereo_fusion \
9 | --workspace_path . \
10 | --workspace_format COLMAP \
11 | --input_type photometric \
12 | --output_path ./fused.ply
13 | $COLMAP_EXE_PATH/poisson_mesher \
14 | --input_path ./fused.ply \
15 | --output_path ./meshed-poisson.ply
16 | $COLMAP_EXE_PATH/delaunay_mesher \
17 | --input_path . \
18 |     --input_type dense \
19 | --output_path ./meshed-delaunay.ply
20 |
--------------------------------------------------------------------------------
/toy_example/sparse/cameras.bin:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/sparse/cameras.bin
--------------------------------------------------------------------------------
/toy_example/sparse/images.bin:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/sparse/images.bin
--------------------------------------------------------------------------------
/toy_example/sparse/points3D.bin:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mli0603/BlenderNeuralangelo/3d87fb75d6f4a73dcd9a6ba49386922fa048f100/toy_example/sparse/points3D.bin
--------------------------------------------------------------------------------
/toy_example/stereo/fusion.cfg:
--------------------------------------------------------------------------------
1 | 000001.jpg
2 | 000002.jpg
3 | 000003.jpg
4 | 000004.jpg
5 | 000005.jpg
6 | 000006.jpg
7 | 000007.jpg
8 | 000008.jpg
9 | 000009.jpg
10 | 000010.jpg
11 | 000011.jpg
12 | 000012.jpg
13 | 000013.jpg
14 | 000020.jpg
15 | 000021.jpg
16 | 000023.jpg
17 | 000024.jpg
18 |
--------------------------------------------------------------------------------
/toy_example/stereo/patch-match.cfg:
--------------------------------------------------------------------------------
1 | 000001.jpg
2 | __auto__, 20
3 | 000002.jpg
4 | __auto__, 20
5 | 000003.jpg
6 | __auto__, 20
7 | 000004.jpg
8 | __auto__, 20
9 | 000005.jpg
10 | __auto__, 20
11 | 000006.jpg
12 | __auto__, 20
13 | 000007.jpg
14 | __auto__, 20
15 | 000008.jpg
16 | __auto__, 20
17 | 000009.jpg
18 | __auto__, 20
19 | 000010.jpg
20 | __auto__, 20
21 | 000011.jpg
22 | __auto__, 20
23 | 000012.jpg
24 | __auto__, 20
25 | 000013.jpg
26 | __auto__, 20
27 | 000020.jpg
28 | __auto__, 20
29 | 000021.jpg
30 | __auto__, 20
31 | 000023.jpg
32 | __auto__, 20
33 | 000024.jpg
34 | __auto__, 20
35 |
--------------------------------------------------------------------------------