├── README.md
├── create_calibration_targets.sh
├── depth_camera_array
│   ├── __init__.py
│   ├── camera.py
│   ├── create_calibration_targets.py
│   ├── perform_aruco_detection.py
│   ├── perform_calibration.py
│   ├── perform_measurement.py
│   └── utilities.py
├── perform_aruco_detection.sh
├── perform_calibration.sh
├── perform_measurement.sh
├── requirements.txt
├── run_tests.sh
├── setup.py
└── tests
    ├── __init__.py
    ├── test_perform_aruco_detection.py
    └── test_perform_calibration.py

/README.md:
--------------------------------------------------------------------------------
# Depth Camera Array
## Overview
This software calibrates multiple RealSense devices to a common world coordinate system. Objects can then be measured
and exported in `.ply` format.

## Installation Instructions
### Prerequisites
Install the [Intel RealSense SDK 2.0](https://www.intelrealsense.com/developers/) for your platform.

### Install manually
1. Clone the repository:
   ```bash
   git clone https://github.com/matthias-hirzle/depth_camera_array.git && cd depth_camera_array
   ```
2. Create a new virtual environment:
   ```bash
   python3 -m venv venv
   ```
3. Activate the virtual environment:
   ```bash
   source venv/bin/activate
   ```
4. Install the requirements:
   ```bash
   pip install -r requirements.txt
   ```

### Install via pip
```bash
pip install git+https://github.com/matthias-hirzle/depth_camera_array.git@master#egg=depth_camera_array
```

## Usage
### Calibration Prerequisites
#### Create the Aruco calibration targets:
Activate your virtual environment and run the script `./create_calibration_targets.sh` to create the calibration
targets as PDF files. You can pass two optional arguments:
- `--target_count=`: Number of targets used to calculate the relative extrinsic parameters between cameras. If not
  set, `5` targets will be created.
- `--data_dir=`: Path to the directory where the created target files should be stored. If the directory does not
  exist, it will be created. If not set, `./data/` will be used.

The script produces one `bottom_target.pdf` and several `relative_target_..._front.pdf` files, each with a matching
`...back.pdf`. The PDF files should look similar to the images below.
![bottom_target](https://user-images.githubusercontent.com/44577643/75158186-e2d8a500-5715-11ea-8d8b-ccb845796f17.png)
![relative_target](https://user-images.githubusercontent.com/44577643/75158326-292e0400-5716-11ea-9479-fc4c3a662982.png)

Print the files and glue each relative-target front sheet to its matching back sheet. Make sure that the corners of
corresponding Aruco markers align exactly.

### Calibration
#### Place the calibration targets:
Each camera should see at least three relative Aruco targets. One camera must be able to see every relative target as
well as the bottom target; this camera will be detected and used as the base camera. Place the bottom target according
to the desired orientation of the world coordinate system. The more Aruco targets each camera detects, the better the
calibration.

![camera_array](https://user-images.githubusercontent.com/44577643/75285967-e18fa100-5817-11ea-9cc0-15a448225066.png)
> Use the RealSense Viewer tool to check the view of each camera.
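If you prefer to check the setup from Python instead of the RealSense Viewer, a minimal sketch like the following
lists the connected devices together with their reported USB type. This snippet is illustrative and not part of the
package; `usb_type_descriptor` may not be reported by every device, hence the `try`/`except`:
```python
# Sketch: list connected RealSense devices and their USB connection type.
from pyrealsense2 import pyrealsense2 as rs

context = rs.context()
for device in context.devices:
    name = device.get_info(rs.camera_info.name)
    serial = device.get_info(rs.camera_info.serial_number)
    try:
        usb_type = device.get_info(rs.camera_info.usb_type_descriptor)
    except RuntimeError:
        usb_type = 'unknown'
    print(f'{name} (serial {serial}): USB {usb_type}')
```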

#### Detect calibration targets:
Connect your RealSense devices to your computer. Make sure that you use USB 3.0 connections.
> Use the RealSense Viewer tool to check the type of USB connection.

Not all RealSense devices need to be connected at the same time. It is also possible to run the target detection for
separate camera groups one after another, or even for each camera separately. This makes sense if you don't have
enough USB 3.0 ports or if your USB cables are too short to connect all devices at once. Run the following script for
each camera group connected to your computer:
```bash
./perform_aruco_detection.sh
```
This creates a `<device_id>_reference_points.json` file for each connected device, containing information about the
detected Aruco markers, and stores it in the `./data/` folder. Pass the argument `--remove_previous_data` to remove
obsolete files created by a previous calibration with another camera setup. You can also choose the destination
directory for the `..._reference_points.json` files by passing the argument `--data_dir=`.
> Do not move the calibration targets until you have run the detection for every device in your RealSense array.

#### Calibrate extrinsics:
After detecting the positions of the calibration targets, run the following script:
```bash
./perform_calibration.sh
```
This loads the previously created `..._reference_points.json` files and uses them to determine the extrinsic
parameters of each device. If these files are not located in the default `./data/` directory, pass the argument
`--data_dir=` to define their location. The script creates a file `camera_array.json` that contains the extrinsic
parameters as 4x4 homogeneous transformation matrices for each device.
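The resulting `camera_array.json` maps each device serial number to its 4x4 matrix. As a minimal sketch (mirroring
what `perform_measurement.py` does internally; the serial number below is a placeholder), world coordinates of a
camera-space point are obtained by a homogeneous matrix multiplication:
```python
# Sketch: load the calibration result and map one camera-space point
# into the world coordinate system. The serial number is hypothetical.
import json

import numpy as np

with open('data/camera_array.json') as f:
    camera_array = json.load(f)

extrinsic = np.array(camera_array['012345678901'])  # 4x4 matrix of one device
point_camera = np.array([0.1, 0.2, 1.5, 1.0])  # homogeneous [x, y, z, 1]
point_world = extrinsic @ point_camera
print(point_world[:3])
```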

### Measurement
Run `./perform_measurement.sh` after a successful calibration. The script loads `camera_array.json`, captures a depth
frame from each connected device, transforms the resulting points into the world coordinate system and keeps only the
points inside a measurement cylinder defined by the optional arguments `--bottom` (default: `0.0`), `--height`
(default: `1.8`) and `--radius` (default: `0.5`), all in meters. For each device, the filtered points are written to
`--data_dir` as a `<device_id>.ply` file and a `<device_id>_object_points.json` file.
--------------------------------------------------------------------------------
/create_calibration_targets.sh:
--------------------------------------------------------------------------------
python -m depth_camera_array.create_calibration_targets "$@"
--------------------------------------------------------------------------------
/depth_camera_array/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/matthias-hirzle/depth_camera_array/ecf1fdb4066f062b27939227e0348507c3872f5e/depth_camera_array/__init__.py
--------------------------------------------------------------------------------
/depth_camera_array/camera.py:
--------------------------------------------------------------------------------
from typing import List, Tuple

import numpy as np
from pyrealsense2 import pyrealsense2 as rs


class Camera:
    def __init__(self, device_id: str, context: rs.context):
        resolution_width = 1280
        resolution_height = 720
        frame_rate = 30

        self.device_id = device_id
        self._context = context

        self._pipeline = rs.pipeline()
        self._config = rs.config()
        self._config.enable_device(self.device_id)
        self._config.enable_stream(rs.stream.depth, resolution_width, resolution_height, rs.format.z16, frame_rate)
        self._config.enable_stream(rs.stream.color, resolution_width, resolution_height, rs.format.bgr8, frame_rate)

        self._pipeline_profile: rs.pipeline_profile = self._pipeline.start(self._config)
        self._depth_scale = self._pipeline_profile.get_device().first_depth_sensor().get_depth_scale()

    def poll_frames(self) -> rs.composite_frame:
        """Returns a frames object with each available frame type."""
        frames = self._pipeline.wait_for_frames()
        return frames

    def close(self):
        self._pipeline.stop()

    def image_points_to_object_points(self, color_pixels: np.ndarray,
                                      frames: rs.composite_frame) -> List[Tuple[float, float, float]]:
        """Calculates the object points for given pixel coordinates of RGB data."""
        color_frame: rs.video_frame = frames.get_color_frame()
        depth_frame: rs.depth_frame = frames.get_depth_frame()

        color_profile = color_frame.get_profile().as_video_stream_profile()
        depth_profile = depth_frame.get_profile().as_video_stream_profile()

        color_intrinsics = color_profile.get_intrinsics()
        depth_intrinsics = depth_profile.get_intrinsics()

        color_to_depth_extrinsics = color_profile.get_extrinsics_to(depth_profile)
        depth_to_color_extrinsics = depth_profile.get_extrinsics_to(color_profile)

        # Map each color pixel to the corresponding pixel in the depth frame.
        depth_pixels = [
            rs.rs2_project_color_pixel_to_depth_pixel(
                data=depth_frame.get_data(),
                depth_scale=self._depth_scale,
                depth_min=0.1,
                depth_max=10.0,
                depth_intrin=depth_intrinsics,
                color_intrin=color_intrinsics,
                depth_to_color=depth_to_color_extrinsics,
                color_to_depth=color_to_depth_extrinsics,
                from_pixel=color_pixel
            )
            for color_pixel in color_pixels
        ]

        # Deproject each depth pixel to a 3D point in the camera coordinate system.
        object_points = []
        for pixel in depth_pixels:
            depth = depth_frame.get_distance(int(round(pixel[0])), int(round(pixel[1])))
            object_points.append(rs.rs2_deproject_pixel_to_point(intrin=depth_intrinsics, pixel=pixel, depth=depth))

        return object_points

    def depth_frame_to_object_points(self, frames: rs.composite_frame) -> np.ndarray:
        """Deprojects every pixel of the depth frame to a 3D point in the camera coordinate system."""
        depth_frame: rs.depth_frame = frames.get_depth_frame()
        depth_profile = depth_frame.get_profile().as_video_stream_profile()
        depth_intrinsics = depth_profile.get_intrinsics()
        object_points = []
        for x in range(depth_intrinsics.width):
            for y in range(depth_intrinsics.height):
                pixel = [x, y]
                depth = depth_frame.get_distance(x, y)
                object_points.append(
                    rs.rs2_deproject_pixel_to_point(intrin=depth_intrinsics, pixel=pixel, depth=depth))
        return np.array(object_points)


def extract_color_image(frames: rs.composite_frame) -> np.ndarray:
    return np.asanyarray(frames.get_color_frame().get_data())


def _find_connected_devices(context):
    devices = []
    for device in context.devices:
        if device.get_info(rs.camera_info.name).lower() != 'platform camera':
            devices.append(device.get_info(rs.camera_info.serial_number))
    return devices


def initialize_connected_cameras() -> List[Camera]:
    context = rs.context()
    device_ids = _find_connected_devices(context)

    devices = [Camera(device_id, context) for device_id in device_ids]
    return devices


def close_connected_cameras(cameras: List[Camera]):
    for camera in cameras:
        camera.close()
--------------------------------------------------------------------------------
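For orientation, a minimal sketch of how this `Camera` class is typically driven, mirroring the scripts in this
package (it assumes at least one device is connected):
```python
# Sketch: capture one frameset per connected device and convert the
# depth frame to 3D object points, mirroring perform_measurement.py.
from depth_camera_array.camera import (
    close_connected_cameras, extract_color_image, initialize_connected_cameras)

cameras = initialize_connected_cameras()
for cam in cameras:
    frames = cam.poll_frames()
    color_image = extract_color_image(frames)  # (720, 1280, 3) BGR array
    object_points = cam.depth_frame_to_object_points(frames)  # (N, 3) points in meters
    print(cam.device_id, object_points.shape)
close_connected_cameras(cameras)
```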
/depth_camera_array/create_calibration_targets.py:
--------------------------------------------------------------------------------
import argparse
import os

import matplotlib as mpl
import matplotlib.pyplot as plt
from cv2 import aruco

from depth_camera_array.utilities import DEFAULT_DATA_DIR, create_if_not_exists


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser('Creates Aruco calibration targets for extrinsic calibration between multiple '
                                     'RealSense devices and a world coordinate system.')
    parser.add_argument('--data_dir', type=lambda item: create_if_not_exists(item), default=DEFAULT_DATA_DIR,
                        help='A path to the directory where the created target files will be stored. If the directory '
                             'does not exist, it will be created. If not set, ./data/ will be used as directory.')
    parser.add_argument('--target_count', type=int, default=5,
                        help='Number of targets used to calculate the relative extrinsics between cameras. If not '
                             'set, 5 targets will be created.')
    return parser.parse_args()


def create_bottom_target(data_dir):
    shape = (3, 4,)
    fig = plt.figure()
    fig.suptitle('Bottom Targets (1-3)', color='gray')
    for index, aruco_id in [(1, 2,), (4, 1,), (9, 3,)]:
        ax = fig.add_subplot(*shape, index)
        # Draw the marker with its Aruco id; index only selects the subplot position.
        img = aruco.drawMarker(aruco.Dictionary_get(aruco.DICT_5X5_250), aruco_id, 700)
        plt.imshow(img, cmap=mpl.cm.gray, interpolation="nearest")
        ax.set_title(aruco_id, color='gray')
        ax.axis('off')

    # Z-Axis
    ax = fig.add_subplot(*shape, 5)
    ax.annotate('', xy=(0.5, 0), xytext=(0.5, 1), arrowprops=dict(edgecolor='gray', facecolor='gray', shrink=0.05))
    ax.axis('off')
    ax.text(0.3, 0.5, 'Z', color='gray')

    # X-Axis
    ax = fig.add_subplot(3, 3, 2)
    ax.annotate('', xy=(1, 0.5), xytext=(0, 0.5), arrowprops=dict(edgecolor='gray', facecolor='gray', shrink=0.05))
    ax.text(0.45, 0.6, 'X', color='gray')
    ax.axis('off')
    plt.savefig(os.path.join(data_dir, 'bottom_target.pdf'))


def create_relative_targets(target_count, min_aruco_id, data_dir):
    shape = (2, 3,)
    aruco_id = min_aruco_id
    for i in range(target_count):
        fig = plt.figure()
        fig.suptitle(f'Front Targets ({aruco_id}-{aruco_id + 3})', color='gray')
        for k in [1, 2, 4, 5]:
            ax = fig.add_subplot(*shape, k)
            img = aruco.drawMarker(aruco.Dictionary_get(aruco.DICT_5X5_250), aruco_id, 700)
            plt.imshow(img, cmap=mpl.cm.gray, interpolation="nearest")
            ax.axis('off')
            ax.set_title(aruco_id, color='gray')
            aruco_id += 1

        plt.savefig(os.path.join(data_dir, f'relative_target_{aruco_id - 4}_to_{aruco_id - 1}_front.pdf'))

    aruco_id = min_aruco_id
    for i in range(target_count):
        fig = plt.figure()
        fig.suptitle(f'Back Targets ({aruco_id}-{aruco_id + 3})', color='gray')
        for k in [3, 2, 6, 5]:
            ax = fig.add_subplot(*shape, k)
            img = aruco.drawMarker(aruco.Dictionary_get(aruco.DICT_5X5_250), aruco_id, 700)
            plt.imshow(img, cmap=mpl.cm.gray, interpolation="nearest")
            ax.axis('off')
            ax.set_title(aruco_id, color='gray')
            aruco_id += 1

        plt.savefig(os.path.join(data_dir, f'relative_target_{aruco_id - 4}_to_{aruco_id - 1}_back.pdf'))


def main():
    args = parse_args()
    create_bottom_target(data_dir=args.data_dir)
    create_relative_targets(args.target_count, 4, args.data_dir)


if __name__ == '__main__':
    main()
--------------------------------------------------------------------------------
/depth_camera_array/perform_aruco_detection.py:
--------------------------------------------------------------------------------
import argparse
import os
from typing import List, Tuple

import numpy as np
import rmsd
from cv2 import aruco

from depth_camera_array.camera import initialize_connected_cameras, extract_color_image, close_connected_cameras
from depth_camera_array.utilities import create_if_not_exists, dump_dict_as_json, DEFAULT_DATA_DIR


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser('Detects all available Aruco markers')
    parser.add_argument('--data_dir', type=lambda item: create_if_not_exists(item), required=False,
                        help='Data location to load and dump config files', default=DEFAULT_DATA_DIR)
    parser.add_argument('--remove_previous_data', action='store_true',
                        help='If set, each reference-point file in data_dir will be removed before performing new '
                             'detection.')
    return parser.parse_args()


def remove_previous_data(data_dir: str):
    for file in os.listdir(data_dir):
        if file.endswith('_reference_points.json'):
            os.remove(os.path.join(data_dir, file))


def main():
    args = parse_args()
    if args.remove_previous_data:
        remove_previous_data(args.data_dir)

    cameras = initialize_connected_cameras()
    for camera in cameras:
        frames = camera.poll_frames()
        color_frame = extract_color_image(frames)

        aruco_corners_image_points, aruco_ids = detect_aruco_targets(color_frame)
        aruco_centers_image_points = [determine_aruco_center(corners) for corners in aruco_corners_image_points]
        aruco_centers_object_points = camera.image_points_to_object_points(aruco_centers_image_points, frames)

        dump_reference_points(camera.device_id, aruco_ids, aruco_centers_object_points, args.data_dir)
    close_connected_cameras(cameras)


def detect_aruco_targets(rgb_image: np.ndarray) -> Tuple[np.ndarray, List[int]]:
    aruco_corners, aruco_ids, _ = aruco.detectMarkers(rgb_image, aruco.Dictionary_get(aruco.DICT_5X5_250))
    # Flatten the (1, 4, 2) corner arrays and convert the ids to plain ints for JSON serialization.
    return np.array([item[0] for item in aruco_corners]), [int(item[0]) for item in aruco_ids]


def determine_aruco_center(corners: np.ndarray) -> np.ndarray:
    assert corners.shape == (4, 2,)
    return rmsd.centroid(corners)


def dump_reference_points(device_id: str, aruco_ids: List[int], aruco_centers: List[np.ndarray], data_dir: str):
    reference_points = {
        'camera_id': device_id,
        'aruco': aruco_ids,
        'centers': aruco_centers
    }
    dump_dict_as_json(reference_points, os.path.join(data_dir, f'{device_id}_reference_points.json'))


if __name__ == '__main__':
    main()
--------------------------------------------------------------------------------
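To experiment with the detector offline, a sketch like the following runs `detect_aruco_targets` on a saved photo of
the printed targets. `targets_photo.png` is a hypothetical file name, and at least one marker must be visible
(otherwise `detectMarkers` yields `None` for the ids and this would fail):
```python
# Sketch: run the Aruco detection on a saved photo instead of a live camera stream.
import cv2

from depth_camera_array.perform_aruco_detection import (
    detect_aruco_targets, determine_aruco_center)

image = cv2.imread('targets_photo.png')  # hypothetical example file
corners, ids = detect_aruco_targets(image)
for marker_id, marker_corners in zip(ids, corners):
    print(marker_id, determine_aruco_center(marker_corners))
```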
/depth_camera_array/perform_calibration.py:
--------------------------------------------------------------------------------
import argparse
import os
from typing import Tuple, Dict, Iterable, List, Union

import numpy as np
import rmsd

from depth_camera_array.utilities import create_if_not_exists, load_json_to_dict, dump_dict_as_json, DEFAULT_DATA_DIR

ReferencePoints = Dict[str, Dict[str, List[Union[int, Tuple[float, float, float]]]]]


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser('Performs an extrinsic calibration for each available camera')
    parser.add_argument('--data_dir', type=lambda item: create_if_not_exists(item), default=DEFAULT_DATA_DIR,
                        help='Data location to load and dump config files')
    return parser.parse_args()


def read_aruco_data(data_dir: str) -> ReferencePoints:
    calibration_data = {}
    for file in os.listdir(data_dir):
        if file.endswith('_reference_points.json'):
            reference_points = load_json_to_dict(os.path.join(data_dir, file))
            calibration_data[reference_points['camera_id']] = {
                'aruco': reference_points['aruco'],
                'centers': reference_points['centers']
            }
    return calibration_data


def calculate_transformation_kabsch(src_points: np.ndarray, dst_points: np.ndarray) -> Tuple[np.ndarray, float]:
    """Calculates the optimal rigid transformation from src_points to dst_points
    (regarding the least squares error).

    Parameters:
    -----------
    src_points: array
        (3, N) matrix
    dst_points: array
        (3, N) matrix

    Returns:
    -----------
    transformation: array
        (4, 4) homogeneous transformation matrix
    rmsd_value: float
        root-mean-square deviation of the aligned point sets
    """
    assert src_points.shape == dst_points.shape
    if src_points.shape[0] != 3:
        raise ValueError('Expected (3, N) input matrices; transpose the data before calling.')

    src_points = src_points.transpose()
    dst_points = dst_points.transpose()

    src_points_centered = src_points - rmsd.centroid(src_points)
    dst_points_centered = dst_points - rmsd.centroid(dst_points)

    rotation_matrix = rmsd.kabsch(src_points_centered, dst_points_centered)
    rmsd_value = rmsd.kabsch_rmsd(src_points_centered, dst_points_centered)

    translation_vector = rmsd.centroid(dst_points) - np.matmul(rmsd.centroid(src_points), rotation_matrix)

    return create_homogenous(rotation_matrix.transpose(), translation_vector.transpose()), rmsd_value


def create_homogenous(rotation_matrix: np.ndarray, translation_vector: np.ndarray) -> np.ndarray:
    homogenous = np.append(rotation_matrix, [[vec] for vec in translation_vector], axis=1)
    homogenous = np.append(homogenous, np.array([[0, 0, 0, 1]]), axis=0)
    return homogenous
def define_base_camera(aruco_data: ReferencePoints) -> str:
    """Finds the camera that detected all bottom targets and the highest number of other targets."""
    base_camera = None
    detected_targets = 0
    bottom_arucos = {1, 2, 3}
    for k, v in aruco_data.items():
        if bottom_arucos <= set(v['aruco']) and len(v['aruco']) > detected_targets:
            base_camera = k
            detected_targets = len(v['aruco'])

    assert base_camera is not None, 'No camera detected all three bottom targets.'
    return base_camera


def calculate_relative_transformations(aruco_data: ReferencePoints, base_camera: str) -> Dict[str, np.ndarray]:
    transformations = {
        base_camera: np.array(
            [[1, 0, 0, 0],
             [0, 1, 0, 0],
             [0, 0, 1, 0],
             [0, 0, 0, 1]],
            dtype=float)
    }
    dst_arucos = aruco_data[base_camera]['aruco']
    dst_points = aruco_data[base_camera]['centers']
    for k, v in aruco_data.items():
        if not k == base_camera:
            # 1. Intersect the Aruco ids detected by both cameras.
            src_arucos = v['aruco']
            src_points = v['centers']
            intersection = set(dst_arucos).intersection(set(src_arucos))
            assert len(intersection) > 2, 'At least three common targets are required.'
            # 2. Create two equally ordered lists of points.
            dst_sorted = []
            src_sorted = []
            for aruco_id in intersection:
                dst_sorted.append(dst_points[dst_arucos.index(aruco_id)])
                src_sorted.append(src_points[src_arucos.index(aruco_id)])

            transformation, rmsd_value = calculate_transformation_kabsch(np.array(src_sorted).transpose(),
                                                                          np.array(dst_sorted).transpose())
            print('RMS error for calibration with device number', k, 'is:', rmsd_value, 'm')
            transformations[k] = transformation

    return transformations
def calculate_absolute_transformations(reference_points: Iterable[Tuple[int, Tuple[float, float, float]]]) -> np.ndarray:
    # Find the bottom markers: id 1 lies on the x-axis, id 2 in the origin, id 3 on the z-axis.
    src_x, src_center, src_z = None, None, None
    for marker_id, point in reference_points:
        if marker_id == 1:
            src_x = np.array(point)
        elif marker_id == 2:
            src_center = np.array(point)
        elif marker_id == 3:
            src_z = np.array(point)

    assert not (src_x is None or src_center is None or src_z is None)

    dst_center = np.array([0., 0., 0.])
    dst_x = np.array([np.linalg.norm(src_center - src_x), 0., 0.])
    dst_z = np.array([0., 0., np.linalg.norm(src_center - src_z)])

    src_points = np.array([src_x, src_z, src_center])
    dst_points = np.array([dst_x, dst_z, dst_center])
    transformation, rmsd_value = calculate_transformation_kabsch(src_points.transpose(), dst_points.transpose())
    print('RMS error for calibration to the real world system is:', rmsd_value, 'm')

    return transformation


def generate_extrinsics(aruco_data: ReferencePoints) -> Dict[str, np.ndarray]:
    base_camera = define_base_camera(aruco_data)
    base_camera_reference_points = zip(aruco_data[base_camera]['aruco'], aruco_data[base_camera]['centers'])
    relative_transformations = calculate_relative_transformations(aruco_data, base_camera)
    absolute_transformation = calculate_absolute_transformations(base_camera_reference_points)
    final_transformations = {}
    for k, v in relative_transformations.items():
        final_transformations[k] = np.dot(absolute_transformation, v)

    return final_transformations


def main():
    """Creates a camera setup file containing camera ids and extrinsic information as a 4 x 4 matrix."""
    args = parse_args()
    aruco_data = read_aruco_data(args.data_dir)
    final_transformations = generate_extrinsics(aruco_data)
    for k, v in final_transformations.items():
        final_transformations[k] = v.tolist()
    dump_dict_as_json(final_transformations, os.path.join(args.data_dir, 'camera_array.json'))


if __name__ == '__main__':
    main()
--------------------------------------------------------------------------------
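As a quick sanity check of the Kabsch helper above, a self-contained sketch (not part of the package; the rotation
and translation values are arbitrary) that verifies a known rigid motion is recovered:
```python
# Sketch: verify calculate_transformation_kabsch recovers a known rigid motion.
import numpy as np

from depth_camera_array.perform_calibration import calculate_transformation_kabsch

rng = np.random.default_rng(seed=0)
src = rng.random((3, 10))  # ten source points as a (3, N) matrix
angle = np.pi / 6
rotation = np.array([
    [np.cos(angle), -np.sin(angle), 0.],
    [np.sin(angle), np.cos(angle), 0.],
    [0., 0., 1.],
])
translation = np.array([0.3, 0.1, -0.2])
dst = rotation @ src + translation.reshape(3, 1)  # apply the known rigid motion

homogeneous, rmsd_value = calculate_transformation_kabsch(src, dst)
print(np.allclose(homogeneous[:3, :3], rotation))  # True
print(np.allclose(homogeneous[:3, 3], translation))  # True
print(rmsd_value)  # ~0 for noise-free points
```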
/depth_camera_array/perform_measurement.py:
--------------------------------------------------------------------------------
import argparse
import os

import numpy as np
from open3d import open3d as o3d

from depth_camera_array import camera
from depth_camera_array.utilities import load_json_to_dict, create_if_not_exists, dump_dict_as_json, DEFAULT_DATA_DIR


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser('Measures the scene')
    parser.add_argument('--bottom', type=float, default=0.0, help='Bottom of the measurement cylinder in m')
    parser.add_argument('--height', type=float, default=1.8, help='Height of the measurement cylinder in m')
    parser.add_argument('--radius', type=float, default=0.5, help='Radius of the measurement cylinder in m')
    parser.add_argument('--data_dir', type=lambda item: create_if_not_exists(item), default=DEFAULT_DATA_DIR,
                        help='Data location to load and dump config files')
    return parser.parse_args()


def is_in_measurement_cylinder(point: np.ndarray, bottom: float, height: float, radius: float) -> bool:
    # The cylinder axis is the world y-axis: x and z define the radial distance.
    is_beside_cylinder = np.linalg.norm([point[0], point[2]]) > radius
    is_above_cylinder = point[1] > (bottom + height)
    is_below_cylinder = point[1] < bottom
    return not (is_beside_cylinder or is_above_cylinder or is_below_cylinder)


def remove_unnecessary_content(object_points, bottom: float, height: float, radius: float) -> list:
    """Removes points outside the defined measurement cylinder."""
    filtered_points = []
    for item in object_points:
        if is_in_measurement_cylinder(item, bottom, height, radius):
            filtered_points.append(item.tolist())
    return filtered_points


def apply_transformation(object_points: np.ndarray, extrinsic: np.ndarray) -> np.ndarray:
    # Convert (N, 3) points to homogeneous (4, N) form, transform and convert back.
    tmp = object_points.transpose()
    points = np.ones((4, tmp.shape[1],))
    points[:-1, :] = tmp
    transformed = extrinsic.dot(points)
    return transformed[:-1, :].transpose()


def dump_to_ply(object_points: np.ndarray, data_dir: str, camera_id: str):
    points = np.array(object_points)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    o3d.io.write_point_cloud(os.path.join(data_dir, f'{camera_id}.ply'), pcd)


def main():
    args = parse_args()
    dictionary = load_json_to_dict(os.path.join(args.data_dir, 'camera_array.json'))
    all_connected_cams = camera.initialize_connected_cameras()
    for cam in all_connected_cams:
        trans_matrix = np.array(dictionary[cam.device_id])
        frames = cam.poll_frames()
        object_points = cam.depth_frame_to_object_points(frames)
        object_points = apply_transformation(object_points, trans_matrix)
        object_points = remove_unnecessary_content(object_points, args.bottom, args.height, args.radius)
        serial_points = np.array(object_points).transpose()
        dump_dict_as_json({cam.device_id: serial_points.tolist()},
                          os.path.join(args.data_dir, cam.device_id + '_object_points.json'))
        dump_to_ply(object_points, args.data_dir, cam.device_id)
    camera.close_connected_cameras(all_connected_cams)


if __name__ == '__main__':
    main()
--------------------------------------------------------------------------------
/depth_camera_array/utilities.py:
--------------------------------------------------------------------------------
import json
import os

DEFAULT_DATA_DIR = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'data'))


def create_if_not_exists(path: str) -> str:
    if not os.path.exists(path):
        os.makedirs(path)
    return path


def dump_dict_as_json(data: dict, path: str):
    if not os.path.exists(os.path.dirname(path)):
        os.makedirs(os.path.dirname(path))
    with open(path, 'w') as f:
        json.dump(data, f)


def load_json_to_dict(path) -> dict:
    with open(path, 'r') as f:
        data = json.load(f)
    return data
--------------------------------------------------------------------------------
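A small sketch of the measurement cylinder filter from `perform_measurement.py` above: points are kept when they lie
within `radius` of the world y-axis and between `bottom` and `bottom + height`:
```python
# Sketch: the measurement cylinder keeps points near the world y-axis.
import numpy as np

from depth_camera_array.perform_measurement import remove_unnecessary_content

points = np.array([
    [0.1, 0.5, 0.2],   # inside the default cylinder
    [2.0, 0.5, 0.0],   # outside: too far from the y-axis
    [0.0, 2.5, 0.0],   # outside: above bottom + height
])
print(remove_unnecessary_content(points, bottom=0.0, height=1.8, radius=0.5))
# expected: [[0.1, 0.5, 0.2]]
```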
/perform_aruco_detection.sh:
--------------------------------------------------------------------------------
python -m depth_camera_array.perform_aruco_detection "$@"
--------------------------------------------------------------------------------
/perform_calibration.sh:
--------------------------------------------------------------------------------
python -m depth_camera_array.perform_calibration "$@"
--------------------------------------------------------------------------------
/perform_measurement.sh:
--------------------------------------------------------------------------------
python -m depth_camera_array.perform_measurement "$@"
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
pyrealsense2
opencv-contrib-python
numpy
rmsd
open3d
matplotlib
pytest
--------------------------------------------------------------------------------
/run_tests.sh:
--------------------------------------------------------------------------------
python -m tests.test_perform_aruco_detection
python -m tests.test_perform_calibration
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
import setuptools

VERSION = '0.1.0'

with open('README.md', 'r') as f:
    LONG_DESCRIPTION = f.read()

with open('requirements.txt', 'r') as f:
    REQUIREMENTS = [
        req.split('#')[0] for req in list(filter(lambda item: item not in ['', '\n'], f.read().split('\n')))]

setuptools.setup(
    name='depth_camera_array',
    version=VERSION,
    author='Matthias Hirzle, Joshua Reimer, Stoyan Georgiev',
    author_email='hima1021@hs-karlsruhe.de, rejo1020@hs-karlsruhe.de, gest1019@hs-karlsruhe.de',
    description='Tool to handle depth camera arrays',
    long_description=LONG_DESCRIPTION,
    long_description_content_type='text/markdown',
    url='https://github.com/matthias-hirzle/depth_camera_array',
    packages=setuptools.find_packages(),
    install_requires=REQUIREMENTS,
    classifiers=[
        'Programming Language :: Python :: 3',
        'Operating System :: OS Independent',
    ],
)
--------------------------------------------------------------------------------
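For development, the package can also be installed in editable mode so that the scripts and the unit tests below pick
up local changes; a typical workflow sketch, assuming the repository is checked out and the virtual environment from
the installation section is active:
```bash
pip install -e .
./run_tests.sh
```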
/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/matthias-hirzle/depth_camera_array/ecf1fdb4066f062b27939227e0348507c3872f5e/tests/__init__.py
--------------------------------------------------------------------------------
/tests/test_perform_aruco_detection.py:
--------------------------------------------------------------------------------
import unittest
from unittest import mock

import numpy as np

from depth_camera_array.perform_aruco_detection import detect_aruco_targets, determine_aruco_center


def detectMarkers_patch(*args):
    # Mimics the return structure of cv2.aruco.detectMarkers: (corners, ids, rejected_candidates).
    return (
        [
            np.array([[[382., 405.], [601., 405.], [601., 624.], [382., 624.]]], dtype=np.float32),
            np.array([[[120., 405.], [338., 405.], [338., 624.], [120., 624.]]], dtype=np.float32),
            np.array([[[382., 102.], [601., 102.], [601., 321.], [382., 321.]]], dtype=np.float32),
            np.array([[[120., 102.], [338., 102.], [338., 321.], [120., 321.]]], dtype=np.float32)
        ],
        np.array([[7], [6], [5], [4]], dtype=np.int32),
        [
            np.array([[[437., 553.], [485., 555.], [483., 601.], [436., 599.]]], dtype=np.float32),
            np.array([[[244., 500.], [275., 499.], [276., 529.], [245., 529.]]], dtype=np.float32),
            np.array([[[437., 250.], [512., 249.], [515., 296.], [438., 298.]]], dtype=np.float32),
            np.array([[[532., 124.], [578., 126.], [576., 173.], [530., 172.]]], dtype=np.float32)
        ]
    )


EXPECTED_DETECT_ARUCO_TARGETS = (
    [
        np.array([[382., 405.], [601., 405.], [601., 624.], [382., 624.]], dtype=np.float32),
        np.array([[120., 405.], [338., 405.], [338., 624.], [120., 624.]], dtype=np.float32),
        np.array([[382., 102.], [601., 102.], [601., 321.], [382., 321.]], dtype=np.float32),
        np.array([[120., 102.], [338., 102.], [338., 321.], [120., 321.]], dtype=np.float32)
    ], [7, 6, 5, 4])


class MyTestCase(unittest.TestCase):
    @mock.patch('cv2.aruco.detectMarkers', side_effect=detectMarkers_patch)
    def test_detect_aruco_targets(self, *args):
        aruco_corners, aruco_ids = detect_aruco_targets(None)
        self.assertListEqual(EXPECTED_DETECT_ARUCO_TARGETS[1], aruco_ids)
        np.testing.assert_array_equal(EXPECTED_DETECT_ARUCO_TARGETS[0], aruco_corners)

    def test_determine_aruco_center(self):
        expected = np.array([10, 10])
        corners = np.array([[12, 11], [11, 8], [8, 9], [9, 12]])
        result = determine_aruco_center(corners)
        np.testing.assert_array_equal(expected, result)


if __name__ == '__main__':
    unittest.main()
--------------------------------------------------------------------------------
/tests/test_perform_calibration.py:
--------------------------------------------------------------------------------
import unittest

import numpy as np

from depth_camera_array.perform_calibration import define_base_camera, generate_extrinsics

# World coordinates per Aruco id from which the camera views in REFERENCE_POINTS were derived.
WORLD_POINTS = {
    1: [0.3, 0., 0.],
    2: [0., 0., 0.],
    3: [0., 0., 0.2],
    4: [-2.3, 1.3, 4.5],
    5: [0.5, 0.4, -1.4],
    6: [-0.2, 2.8, 3.1],
    7: [0.9, 1.45, -2.4],
    8: [0.6, 3.1, 1.2],
    9: [-0.3, 2.9, -4.6],
    10: [0.1, 1.4, 2.3]
}

REFERENCE_POINTS = {
    'cam_1': {
        'aruco': [1, 4, 5, 6, 9],
        'centers': [
            [1.061972682385508, 3.2283564821658843, -2.1353988575059306],
            [-4.236616239342009, 3.3873650594887086, -2.9097836240079937],
            [2.3262363837580824, 3.8726991118254275, -2.518099604815869],
            [-2.5122170311604655, 5.3891994114953405, -1.6295076706534375],
            [4.438154946856648, 6.704606097666394, -4.67450927248734]]
    },
    'cam_2': {
        'aruco': [1, 4, 2, 6, 9, 10, 7, 3],
        'centers': [
            [3.5576816164790634, 0.8321197757585809, -3.0822845790252007],
            [2.6957552700437475, -4.246237276732767, -1.6100845789680007],
            [3.4320312610196986, 0.6430086757115304, -3.278368692957242],
            [1.8712453917889231, -1.7796268179922174, -0.24772769601213707],
            [-0.5527918006739907, 4.131375624573358, -4.548203681099345],
            [2.9590790495652053, -1.023749636652921, -1.214820057743613],
            [1.849572798344455, 3.126616284601869, -3.282659174148778],
            [3.49167761289205, 0.487938107122034, -3.1670339211919845]]
    },
    'cam_3': {
        'aruco': [5, 9, 3, 2, 10, 1],
        'centers': [
            [-0.9460346811171982, -3.334183141656747, -0.623089955219494],
            [0.49480811807800995, -2.2370559485313435, 3.0985070035474216],
            [-0.48278414929457447, -4.003546270855325, -2.142086973346046],
            [-0.5379103614972061, -3.9998323373302718, -1.9498701493110786],
            [1.053123311618073, -3.04814283719597, -3.9050936547660444],
            [-0.7260785710409359, -3.7735812322718836, -2.008206782126227]]
    }
}

CAM_TO_REAL = {
    'cam_1': np.array([
        [0.36063609, 0.14959066, 0.92063253, 1.4],
        [-0.20461186, 0.97569952, -0.07838645, -3.1],
        [-0.90998659, -0.16010336, 0.38248048, 2.3],
        [0., 0., 0., 1.]
    ]),
    'cam_2': np.array([
        [0.41883452, 0.63037033, 0.65361372, 0.3],
        [-0.85769194, 0.03822586, 0.51274099, 4.6],
        [0.29823177, -0.77535284, 0.55667386, 1.3],
        [0., 0., 0., 1.]
    ]),
    'cam_3': np.array([
        [-0.62722736, 0.75417035, -0.19445544, 2.3],
        [0.72843212, 0.65641622, 0.19622537, 3.4],
        [0.27563106, -0.01856967, -0.96108412, -1.8],
        [0., 0., 0., 1.]
    ])
}


class MyTestCase(unittest.TestCase):
    def test_generate_extrinsics(self):
        results = generate_extrinsics(REFERENCE_POINTS)
        for cam, expected in CAM_TO_REAL.items():
            np.testing.assert_array_almost_equal(results[cam], expected)

    def test_define_base_camera(self):
        expected = 'cam_2'
        result = define_base_camera(REFERENCE_POINTS)
        self.assertEqual(expected, result)


if __name__ == '__main__':
    unittest.main()
--------------------------------------------------------------------------------