├── .gitignore ├── CMakeLists.txt ├── LICENSE ├── README.md ├── extra_window.png ├── launch └── calib.launch ├── package.xml └── scripts ├── camera_calibration_frontend ├── __init__.py └── calibrator.py ├── cameracalibrator.py ├── cameracheck.py ├── kalibr_camera_focus.py ├── publish_calib_file.py └── tarfile_calibration.py /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | 27 | # PyInstaller 28 | # Usually these files are written by a python script from a template 29 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | .hypothesis/ 47 | 48 | # Translations 49 | *.mo 50 | *.pot 51 | 52 | # Django stuff: 53 | *.log 54 | 55 | # Sphinx documentation 56 | docs/_build/ 57 | 58 | # PyBuilder 59 | target/ 60 | 61 | #Ipython Notebook 62 | .ipynb_checkpoints 63 | -------------------------------------------------------------------------------- /CMakeLists.txt: -------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 2.8) 2 | project(camera_calibration_frontend) 3 | 4 | find_package(catkin REQUIRED) 5 | catkin_package() -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2016, LIU Tianbo 2 | All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without 5 | modification, are permitted provided that the following conditions are met: 6 | 7 | * Redistributions of source code must retain the above copyright notice, this 8 | list of conditions and the following disclaimer. 9 | 10 | * Redistributions in binary form must reproduce the above copyright notice, 11 | this list of conditions and the following disclaimer in the documentation 12 | and/or other materials provided with the distribution. 13 | 14 | * Neither the name of camera_calibration_frontend nor the names of its 15 | contributors may be used to endorse or promote products derived from 16 | this software without specific prior written permission. 17 | 18 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 19 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 20 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 21 | DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
22 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
23 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
24 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
25 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
26 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
27 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
28 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## Camera Calibration Frontend ##
2 |
3 | STEREO NOT TESTED YET!
4 |
5 | Modified from [camera_calibration](http://wiki.ros.org/camera_calibration)
6 |
7 | This modified version can save `calibrationdata.tar.gz` **at any time**, whether or not a calibration has been computed yet. If you only need the raw images, you do not have to wait for the long calibration step to finish.
8 |
9 | It also pops up an extra window showing where the saved chessboard views lie in the image.
10 |
11 | For example:
12 |
13 | ![extra_window](extra_window.png?raw=true)
14 |
15 | There is also a simple, naive tool, `publish_calib_file.py`, for publishing all of the PNG files in the current folder.
16 |
17 | A copy of the Python script `kalibr_camera_focus` from https://github.com/ethz-asl/kalibr/blob/master/aslam_offline_calibration/kalibr/python/kalibr_camera_focus is also included to help adjust the camera focus.
18 |
--------------------------------------------------------------------------------
/extra_window.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/groundmelon/camera_calibration_frontend/127a6c4f631a78bbd966e9ac4e3309e4a6e16bb7/extra_window.png
--------------------------------------------------------------------------------
/launch/calib.launch:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
9 |
10 |
11 |
12 |
14 |
15 |
16 |
17 |
18 |
--------------------------------------------------------------------------------
/package.xml:
--------------------------------------------------------------------------------
1 |
2 | camera_calibration_frontend
3 | 1.0.0
4 |
5 | camera_calibration_frontend is a frontend for collecting camera calibration data.
6 | Modified from http://wiki.ros.org/camera_calibration
7 |
8 | LIU Tianbo
9 | James Bowman
10 | Patrick Mihelich
11 | Liu Tianbo
12 |
13 | catkin
14 |
15 | cv_bridge
16 | image_geometry
17 | message_filters
18 | rospy
19 | std_srvs
20 | sensor_msgs
21 |
22 | BSD
23 |
24 |
25 |
--------------------------------------------------------------------------------
/scripts/camera_calibration_frontend/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/groundmelon/camera_calibration_frontend/127a6c4f631a78bbd966e9ac4e3309e4a6e16bb7/scripts/camera_calibration_frontend/__init__.py
--------------------------------------------------------------------------------
/scripts/camera_calibration_frontend/calibrator.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #
3 | # Software License Agreement (BSD License)
4 | #
5 | # Copyright (c) 2009, Willow Garage, Inc.
6 | # All rights reserved.
7 | # 8 | # Redistribution and use in source and binary forms, with or without 9 | # modification, are permitted provided that the following conditions 10 | # are met: 11 | # 12 | # * Redistributions of source code must retain the above copyright 13 | # notice, this list of conditions and the following disclaimer. 14 | # * Redistributions in binary form must reproduce the above 15 | # copyright notice, this list of conditions and the following 16 | # disclaimer in the documentation and/or other materials provided 17 | # with the distribution. 18 | # * Neither the name of the Willow Garage nor the names of its 19 | # contributors may be used to endorse or promote products derived 20 | # from this software without specific prior written permission. 21 | # 22 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 23 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 24 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS 25 | # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE 26 | # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, 27 | # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, 28 | # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 | # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 30 | # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 31 | # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN 32 | # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 33 | # POSSIBILITY OF SUCH DAMAGE. 34 | 35 | try: 36 | from cStringIO import StringIO 37 | except ImportError: 38 | from io import StringIO 39 | import cv2 40 | import cv_bridge 41 | import image_geometry 42 | import math 43 | import numpy.linalg 44 | import pickle 45 | import random 46 | import sensor_msgs.msg 47 | import tarfile 48 | import time 49 | from distutils.version import LooseVersion 50 | 51 | import numpy 52 | 53 | # Supported calibration patterns 54 | class Patterns: 55 | Chessboard, Circles, ACircles = list(range(3)) 56 | 57 | class CalibrationException(Exception): 58 | pass 59 | 60 | # TODO: Make pattern per-board? 61 | class ChessboardInfo(object): 62 | def __init__(self, n_cols = 0, n_rows = 0, dim = 0.0): 63 | self.n_cols = n_cols 64 | self.n_rows = n_rows 65 | self.dim = dim 66 | 67 | # Make all private!!!!! 68 | def lmin(seq1, seq2): 69 | """ Pairwise minimum of two sequences """ 70 | return [min(a, b) for (a, b) in zip(seq1, seq2)] 71 | 72 | def lmax(seq1, seq2): 73 | """ Pairwise maximum of two sequences """ 74 | return [max(a, b) for (a, b) in zip(seq1, seq2)] 75 | 76 | def _pdist(p1, p2): 77 | """ 78 | Distance bwt two points. p1 = (x, y), p2 = (x, y) 79 | """ 80 | return math.sqrt(math.pow(p1[0] - p2[0], 2) + math.pow(p1[1] - p2[1], 2)) 81 | 82 | def _get_outside_corners(corners, board): 83 | """ 84 | Return the four corners of the board as a whole, as (up_left, up_right, down_right, down_left). 85 | """ 86 | xdim = board.n_cols 87 | ydim = board.n_rows 88 | 89 | if corners.shape[1] * corners.shape[0] != xdim * ydim: 90 | raise Exception("Invalid number of corners! %d corners. 
X: %d, Y: %d" % (corners.shape[1] * corners.shape[0], 91 | xdim, ydim)) 92 | 93 | up_left = corners[0,0] 94 | up_right = corners[xdim - 1,0] 95 | down_right = corners[-1,0] 96 | down_left = corners[-xdim,0] 97 | 98 | return (up_left, up_right, down_right, down_left) 99 | 100 | def _get_skew(corners, board): 101 | """ 102 | Get skew for given checkerboard detection. 103 | Scaled to [0,1], which 0 = no skew, 1 = high skew 104 | Skew is proportional to the divergence of three outside corners from 90 degrees. 105 | """ 106 | # TODO Using three nearby interior corners might be more robust, outside corners occasionally 107 | # get mis-detected 108 | up_left, up_right, down_right, _ = _get_outside_corners(corners, board) 109 | 110 | def angle(a, b, c): 111 | """ 112 | Return angle between lines ab, bc 113 | """ 114 | ab = a - b 115 | cb = c - b 116 | return math.acos(numpy.dot(ab,cb) / (numpy.linalg.norm(ab) * numpy.linalg.norm(cb))) 117 | 118 | skew = min(1.0, 2. * abs((math.pi / 2.) - angle(up_left, up_right, down_right))) 119 | return skew 120 | 121 | def _get_area(corners, board): 122 | """ 123 | Get 2d image area of the detected checkerboard. 124 | The projected checkerboard is assumed to be a convex quadrilateral, and the area computed as 125 | |p X q|/2; see http://mathworld.wolfram.com/Quadrilateral.html. 126 | """ 127 | (up_left, up_right, down_right, down_left) = _get_outside_corners(corners, board) 128 | a = up_right - up_left 129 | b = down_right - up_right 130 | c = down_left - down_right 131 | p = b + c 132 | q = a + b 133 | return abs(p[0]*q[1] - p[1]*q[0]) / 2. 134 | 135 | def _get_corners(img, board, refine = True, checkerboard_flags=0): 136 | """ 137 | Get corners for a particular chessboard for an image 138 | """ 139 | h = img.shape[0] 140 | w = img.shape[1] 141 | if len(img.shape) == 3 and img.shape[2] == 3: 142 | mono = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) 143 | else: 144 | mono = img 145 | (ok, corners) = cv2.findChessboardCorners(mono, (board.n_cols, board.n_rows), flags = cv2.CALIB_CB_ADAPTIVE_THRESH | 146 | cv2.CALIB_CB_NORMALIZE_IMAGE | checkerboard_flags) 147 | if not ok: 148 | return (ok, corners) 149 | 150 | # If any corners are within BORDER pixels of the screen edge, reject the detection by setting ok to false 151 | # NOTE: This may cause problems with very low-resolution cameras, where 8 pixels is a non-negligible fraction 152 | # of the image size. See http://answers.ros.org/question/3155/how-can-i-calibrate-low-resolution-cameras 153 | BORDER = 8 154 | if not all([(BORDER < corners[i, 0, 0] < (w - BORDER)) and (BORDER < corners[i, 0, 1] < (h - BORDER)) for i in range(corners.shape[0])]): 155 | ok = False 156 | 157 | if refine and ok: 158 | # Use a radius of half the minimum distance between corners. This should be large enough to snap to the 159 | # correct corner, but not so large as to include a wrong corner in the search window. 
160 | min_distance = float("inf") 161 | for row in range(board.n_rows): 162 | for col in range(board.n_cols - 1): 163 | index = row*board.n_rows + col 164 | min_distance = min(min_distance, _pdist(corners[index, 0], corners[index + 1, 0])) 165 | for row in range(board.n_rows - 1): 166 | for col in range(board.n_cols): 167 | index = row*board.n_rows + col 168 | min_distance = min(min_distance, _pdist(corners[index, 0], corners[index + board.n_cols, 0])) 169 | radius = int(math.ceil(min_distance * 0.5)) 170 | cv2.cornerSubPix(mono, corners, (radius,radius), (-1,-1), 171 | ( cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1 )) 172 | 173 | return (ok, corners) 174 | 175 | def _get_circles(img, board, pattern): 176 | """ 177 | Get circle centers for a symmetric or asymmetric grid 178 | """ 179 | h = img.shape[0] 180 | w = img.shape[1] 181 | if len(img.shape) == 3 and img.shape[2] == 3: 182 | mono = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) 183 | else: 184 | mono = img 185 | 186 | flag = cv2.CALIB_CB_SYMMETRIC_GRID 187 | if pattern == Patterns.ACircles: 188 | flag = cv2.CALIB_CB_ASYMMETRIC_GRID 189 | mono_arr = numpy.array(mono) 190 | (ok, corners) = cv2.findCirclesGrid(mono_arr, (board.n_cols, board.n_rows), flags=flag) 191 | 192 | # In symmetric case, findCirclesGrid does not detect the target if it's turned sideways. So we try 193 | # again with dimensions swapped - not so efficient. 194 | # TODO Better to add as second board? Corner ordering will change. 195 | if not ok and pattern == Patterns.Circles: 196 | (ok, corners) = cv2.findCirclesGrid(mono_arr, (board.n_rows, board.n_cols), flags=flag) 197 | 198 | return (ok, corners) 199 | 200 | 201 | # TODO self.size needs to come from CameraInfo, full resolution 202 | class Calibrator(object): 203 | """ 204 | Base class for calibration system 205 | """ 206 | def __init__(self, boards, flags=0, pattern=Patterns.Chessboard, name='', checkerboard_flags=cv2.CALIB_CB_FAST_CHECK): 207 | # Ordering the dimensions for the different detectors is actually a minefield... 208 | if pattern == Patterns.Chessboard: 209 | # Make sure n_cols > n_rows to agree with OpenCV CB detector output 210 | self._boards = [ChessboardInfo(max(i.n_cols, i.n_rows), min(i.n_cols, i.n_rows), i.dim) for i in boards] 211 | elif pattern == Patterns.ACircles: 212 | # 7x4 and 4x7 are actually different patterns. Assume square-ish pattern, so n_rows > n_cols. 213 | self._boards = [ChessboardInfo(min(i.n_cols, i.n_rows), max(i.n_cols, i.n_rows), i.dim) for i in boards] 214 | elif pattern == Patterns.Circles: 215 | # We end up having to check both ways anyway 216 | self._boards = boards 217 | 218 | # Set to true after we perform calibration 219 | self.calibrated = False 220 | self.calib_flags = flags 221 | self.checkerboard_flags = checkerboard_flags 222 | self.pattern = pattern 223 | self.br = cv_bridge.CvBridge() 224 | 225 | # self.db is list of (parameters, image) samples for use in calibration. parameters has form 226 | # (X, Y, size, skew) all normalized to [0,1], to keep track of what sort of samples we've taken 227 | # and ensure enough variety. 228 | self.db = [] 229 | # For each db sample, we also record the detected corners. 
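# (one entry of detected corners, together with the matching board, is appended here per sample in self.db; see handle_msg() and do_calibration() below)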
230 |         self.good_corners = []
231 |         # Set to true when we have sufficiently varied samples to calibrate
232 |         self.goodenough = False
233 |         self.param_ranges = [0.7, 0.7, 0.4, 0.5]
234 |         self.name = name
235 |
236 |         self.accu_image = None
237 |
238 |     def mkgray(self, msg):
239 |         """
240 |         Convert a message into an 8-bit, 1-channel monochrome OpenCV image
241 |         """
242 |         # as cv_bridge automatically scales, we need to remove that behavior
243 |         # TODO: get a Python API in cv_bridge to check for the image depth.
244 |         if self.br.encoding_to_dtype_with_channels(msg.encoding)[0] in ['uint16', 'int16']:
245 |             mono16 = self.br.imgmsg_to_cv2(msg, '16UC1')
246 |             mono8 = numpy.array(numpy.clip(mono16, 0, 255), dtype=numpy.uint8)
247 |             return mono8
248 |         elif 'FC1' in msg.encoding:
249 |             # floating point image handling
250 |             img = self.br.imgmsg_to_cv2(msg, "passthrough")
251 |             _, max_val, _, _ = cv2.minMaxLoc(img)
252 |             if max_val > 0:
253 |                 scale = 255.0 / max_val
254 |                 mono_img = (img * scale).astype(numpy.uint8)
255 |             else:
256 |                 mono_img = img.astype(numpy.uint8)
257 |             return mono_img
258 |         else:
259 |             return self.br.imgmsg_to_cv2(msg, "mono8")
260 |
261 |     def get_parameters(self, corners, board, size):
262 |         """
263 |         Return list of parameters [X, Y, size, skew] describing the checkerboard view.
264 |         """
265 |         (width, height) = size
266 |         Xs = corners[:,:,0]
267 |         Ys = corners[:,:,1]
268 |         area = _get_area(corners, board)
269 |         border = math.sqrt(area)
270 |         # For X and Y, we "shrink" the image all around by approx. half the board size.
271 |         # Otherwise large boards are penalized because you can't get much X/Y variation.
272 |         p_x = min(1.0, max(0.0, (numpy.mean(Xs) - border / 2) / (width - border)))
273 |         p_y = min(1.0, max(0.0, (numpy.mean(Ys) - border / 2) / (height - border)))
274 |         p_size = math.sqrt(area / (width * height))
275 |         skew = _get_skew(corners, board)
276 |         params = [p_x, p_y, p_size, skew]
277 |         return params
278 |
279 |     def is_good_sample(self, params):
280 |         """
281 |         Returns true if the checkerboard detection described by params should be added to the database.
282 |         """
283 |         if not self.db:
284 |             return True
285 |
286 |         def param_distance(p1, p2):
287 |             return sum([abs(a-b) for (a,b) in zip(p1, p2)])
288 |
289 |         db_params = [sample[0] for sample in self.db]
290 |         d = min([param_distance(params, p) for p in db_params])
291 |         #print "d = %.3f" % d #DEBUG
292 |         # TODO What's a good threshold here? Should it be configurable?
293 |         return d > 0.2
294 |
295 |     _param_names = ["X", "Y", "Size", "Skew"]
296 |
297 |     def compute_goodenough(self):
298 |         if not self.db:
299 |             return None
300 |
301 |         # Find range of checkerboard poses covered by samples in database
302 |         all_params = [sample[0] for sample in self.db]
303 |         min_params = all_params[0]
304 |         max_params = all_params[0]
305 |         for params in all_params[1:]:
306 |             min_params = lmin(min_params, params)
307 |             max_params = lmax(max_params, params)
308 |         # Don't reward small size or skew
309 |         min_params = [min_params[0], min_params[1], 0., 0.]
310 | 311 | # For each parameter, judge how much progress has been made toward adequate variation 312 | progress = [min((hi - lo) / r, 1.0) for (lo, hi, r) in zip(min_params, max_params, self.param_ranges)] 313 | # If we have lots of samples, allow calibration even if not all parameters are green 314 | # TODO Awkward that we update self.goodenough instead of returning it 315 | self.goodenough = (len(self.db) >= 40) or all([p == 1.0 for p in progress]) 316 | 317 | return list(zip(self._param_names, min_params, max_params, progress)) 318 | 319 | def mk_object_points(self, boards, use_board_size = False): 320 | opts = [] 321 | for i, b in enumerate(boards): 322 | num_pts = b.n_cols * b.n_rows 323 | opts_loc = numpy.zeros((num_pts, 1, 3), numpy.float32) 324 | for j in range(num_pts): 325 | opts_loc[j, 0, 0] = (j / b.n_cols) 326 | if self.pattern == Patterns.ACircles: 327 | opts_loc[j, 0, 1] = 2*(j % b.n_cols) + (opts_loc[j, 0, 0] % 2) 328 | else: 329 | opts_loc[j, 0, 1] = (j % b.n_cols) 330 | opts_loc[j, 0, 2] = 0 331 | if use_board_size: 332 | opts_loc[j, 0, :] = opts_loc[j, 0, :] * b.dim 333 | opts.append(opts_loc) 334 | return opts 335 | 336 | def get_corners(self, img, refine = True): 337 | """ 338 | Use cvFindChessboardCorners to find corners of chessboard in image. 339 | 340 | Check all boards. Return corners for first chessboard that it detects 341 | if given multiple size chessboards. 342 | 343 | Returns (ok, corners, board) 344 | """ 345 | 346 | for b in self._boards: 347 | if self.pattern == Patterns.Chessboard: 348 | (ok, corners) = _get_corners(img, b, refine, self.checkerboard_flags) 349 | else: 350 | (ok, corners) = _get_circles(img, b, self.pattern) 351 | if ok: 352 | return (ok, corners, b) 353 | return (False, None, None) 354 | 355 | def downsample_and_detect(self, img): 356 | """ 357 | Downsample the input image to approximately VGA resolution and detect the 358 | calibration target corners in the full-size image. 359 | 360 | Combines these apparently orthogonal duties as an optimization. Checkerboard 361 | detection is too expensive on large images, so it's better to do detection on 362 | the smaller display image and scale the corners back up to the correct size. 363 | 364 | Returns (scrib, corners, downsampled_corners, board, (x_scale, y_scale)). 365 | """ 366 | # Scale the input image down to ~VGA size 367 | height = img.shape[0] 368 | width = img.shape[1] 369 | scale = math.sqrt( (width*height) / (640.*480.) ) 370 | if scale > 1.0: 371 | scrib = cv2.resize(img, (int(width / scale), int(height / scale))) 372 | else: 373 | scrib = img 374 | # Due to rounding, actual horizontal/vertical scaling may differ slightly 375 | x_scale = float(width) / scrib.shape[1] 376 | y_scale = float(height) / scrib.shape[0] 377 | 378 | if self.pattern == Patterns.Chessboard: 379 | # Detect checkerboard 380 | (ok, downsampled_corners, board) = self.get_corners(scrib, refine = True) 381 | 382 | # Scale corners back to full size image 383 | corners = None 384 | if ok: 385 | if scale > 1.0: 386 | # Refine up-scaled corners in the original full-res image 387 | # TODO Does this really make a difference in practice? 
388 | corners_unrefined = downsampled_corners.copy() 389 | corners_unrefined[:, :, 0] *= x_scale 390 | corners_unrefined[:, :, 1] *= y_scale 391 | radius = int(math.ceil(scale)) 392 | if len(img.shape) == 3 and img.shape[2] == 3: 393 | mono = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) 394 | else: 395 | mono = img 396 | cv2.cornerSubPix(mono, corners_unrefined, (radius,radius), (-1,-1), 397 | ( cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1 )) 398 | corners = corners_unrefined 399 | else: 400 | corners = downsampled_corners 401 | else: 402 | # Circle grid detection is fast even on large images 403 | (ok, corners, board) = self.get_corners(img) 404 | # Scale corners to downsampled image for display 405 | downsampled_corners = None 406 | if ok: 407 | if scale > 1.0: 408 | downsampled_corners = corners.copy() 409 | downsampled_corners[:,:,0] /= x_scale 410 | downsampled_corners[:,:,1] /= y_scale 411 | else: 412 | downsampled_corners = corners 413 | 414 | return (scrib, corners, downsampled_corners, board, (x_scale, y_scale)) 415 | 416 | 417 | def lrmsg(self, d, k, r, p): 418 | """ Used by :meth:`as_message`. Return a CameraInfo message for the given calibration matrices """ 419 | msg = sensor_msgs.msg.CameraInfo() 420 | (msg.width, msg.height) = self.size 421 | if d.size > 5: 422 | msg.distortion_model = "rational_polynomial" 423 | else: 424 | msg.distortion_model = "plumb_bob" 425 | msg.D = numpy.ravel(d).copy().tolist() 426 | msg.K = numpy.ravel(k).copy().tolist() 427 | msg.R = numpy.ravel(r).copy().tolist() 428 | msg.P = numpy.ravel(p).copy().tolist() 429 | return msg 430 | 431 | def lrreport(self, d, k, r, p): 432 | print("D = ", numpy.ravel(d).tolist()) 433 | print("K = ", numpy.ravel(k).tolist()) 434 | print("R = ", numpy.ravel(r).tolist()) 435 | print("P = ", numpy.ravel(p).tolist()) 436 | 437 | def lrost(self, name, d, k, r, p): 438 | calmessage = ( 439 | "# oST version 5.0 parameters\n" 440 | + "\n" 441 | + "\n" 442 | + "[image]\n" 443 | + "\n" 444 | + "width\n" 445 | + str(self.size[0]) + "\n" 446 | + "\n" 447 | + "height\n" 448 | + str(self.size[1]) + "\n" 449 | + "\n" 450 | + "[%s]" % name + "\n" 451 | + "\n" 452 | + "camera matrix\n" 453 | + " ".join(["%8f" % k[0,i] for i in range(3)]) + "\n" 454 | + " ".join(["%8f" % k[1,i] for i in range(3)]) + "\n" 455 | + " ".join(["%8f" % k[2,i] for i in range(3)]) + "\n" 456 | + "\n" 457 | + "distortion\n" 458 | + " ".join(["%8f" % d[i,0] for i in range(d.shape[0])]) + "\n" 459 | + "\n" 460 | + "rectification\n" 461 | + " ".join(["%8f" % r[0,i] for i in range(3)]) + "\n" 462 | + " ".join(["%8f" % r[1,i] for i in range(3)]) + "\n" 463 | + " ".join(["%8f" % r[2,i] for i in range(3)]) + "\n" 464 | + "\n" 465 | + "projection\n" 466 | + " ".join(["%8f" % p[0,i] for i in range(4)]) + "\n" 467 | + " ".join(["%8f" % p[1,i] for i in range(4)]) + "\n" 468 | + " ".join(["%8f" % p[2,i] for i in range(4)]) + "\n" 469 | + "\n") 470 | assert len(calmessage) < 525, "Calibration info must be less than 525 bytes" 471 | return calmessage 472 | 473 | def lryaml(self, name, d, k, r, p): 474 | calmessage = ("" 475 | + "image_width: " + str(self.size[0]) + "\n" 476 | + "image_height: " + str(self.size[1]) + "\n" 477 | + "camera_name: " + name + "\n" 478 | + "camera_matrix:\n" 479 | + " rows: 3\n" 480 | + " cols: 3\n" 481 | + " data: [" + ", ".join(["%8f" % i for i in k.reshape(1,9)[0]]) + "]\n" 482 | + "distortion_model: " + ("rational_polynomial" if d.size > 5 else "plumb_bob") + "\n" 483 | + "distortion_coefficients:\n" 484 | + " rows: 1\n" 485 | + " 
cols: 5\n" 486 | + " data: [" + ", ".join(["%8f" % d[i,0] for i in range(d.shape[0])]) + "]\n" 487 | + "rectification_matrix:\n" 488 | + " rows: 3\n" 489 | + " cols: 3\n" 490 | + " data: [" + ", ".join(["%8f" % i for i in r.reshape(1,9)[0]]) + "]\n" 491 | + "projection_matrix:\n" 492 | + " rows: 3\n" 493 | + " cols: 4\n" 494 | + " data: [" + ", ".join(["%8f" % i for i in p.reshape(1,12)[0]]) + "]\n" 495 | + "") 496 | return calmessage 497 | 498 | def do_save(self): 499 | filename = '/tmp/calibrationdata.tar.gz' 500 | tf = tarfile.open(filename, 'w:gz') 501 | self.do_tarfile_save(tf) # Must be overridden in subclasses 502 | tf.close() 503 | print(("Wrote calibration data to", filename)) 504 | 505 | def image_from_archive(archive, name): 506 | """ 507 | Load image PGM file from tar archive. 508 | 509 | Used for tarfile loading and unit test. 510 | """ 511 | member = archive.getmember(name) 512 | imagefiledata = numpy.fromstring(archive.extractfile(member).read(-1), numpy.uint8) 513 | imagefiledata.resize((1, imagefiledata.size)) 514 | return cv2.imdecode(imagefiledata, cv2.IMREAD_COLOR) 515 | 516 | class ImageDrawable(object): 517 | """ 518 | Passed to CalibrationNode after image handled. Allows plotting of images 519 | with detected corner points 520 | """ 521 | def __init__(self): 522 | self.params = None 523 | 524 | class MonoDrawable(ImageDrawable): 525 | def __init__(self): 526 | ImageDrawable.__init__(self) 527 | self.scrib = None 528 | self.linear_error = -1.0 529 | self.extra_image = None 530 | 531 | 532 | class StereoDrawable(ImageDrawable): 533 | def __init__(self): 534 | ImageDrawable.__init__(self) 535 | self.lscrib = None 536 | self.rscrib = None 537 | self.epierror = -1 538 | self.dim = -1 539 | self.extra_image = None 540 | 541 | 542 | class MonoCalibrator(Calibrator): 543 | """ 544 | Calibration class for monocular cameras:: 545 | 546 | images = [cv2.imread("mono%d.png") for i in range(8)] 547 | mc = MonoCalibrator() 548 | mc.cal(images) 549 | print mc.as_message() 550 | """ 551 | 552 | is_mono = True # TODO Could get rid of is_mono 553 | 554 | def __init__(self, *args, **kwargs): 555 | if 'name' not in kwargs: 556 | kwargs['name'] = 'narrow_stereo/left' 557 | super(MonoCalibrator, self).__init__(*args, **kwargs) 558 | 559 | def cal(self, images): 560 | """ 561 | Calibrate camera from given images 562 | """ 563 | goodcorners = self.collect_corners(images) 564 | self.cal_fromcorners(goodcorners) 565 | self.calibrated = True 566 | 567 | def collect_corners(self, images): 568 | """ 569 | :param images: source images containing chessboards 570 | :type images: list of :class:`cvMat` 571 | 572 | Find chessboards in all images. 
573 | 574 | Return [ (corners, ChessboardInfo) ] 575 | """ 576 | self.size = (images[0].shape[1], images[0].shape[0]) 577 | corners = [self.get_corners(i) for i in images] 578 | 579 | goodcorners = [(co, b) for (ok, co, b) in corners if ok] 580 | if not goodcorners: 581 | raise CalibrationException("No corners found in images!") 582 | return goodcorners 583 | 584 | def cal_fromcorners(self, good): 585 | """ 586 | :param good: Good corner positions and boards 587 | :type good: [(corners, ChessboardInfo)] 588 | 589 | 590 | """ 591 | boards = [ b for (_, b) in good ] 592 | 593 | ipts = [ points for (points, _) in good ] 594 | opts = self.mk_object_points(boards) 595 | 596 | self.intrinsics = numpy.zeros((3, 3), numpy.float64) 597 | if self.calib_flags & cv2.CALIB_RATIONAL_MODEL: 598 | self.distortion = numpy.zeros((8, 1), numpy.float64) # rational polynomial 599 | else: 600 | self.distortion = numpy.zeros((5, 1), numpy.float64) # plumb bob 601 | # If FIX_ASPECT_RATIO flag set, enforce focal lengths have 1/1 ratio 602 | self.intrinsics[0,0] = 1.0 603 | self.intrinsics[1,1] = 1.0 604 | cv2.calibrateCamera( 605 | opts, ipts, 606 | self.size, self.intrinsics, 607 | self.distortion, 608 | flags = self.calib_flags) 609 | 610 | # R is identity matrix for monocular calibration 611 | self.R = numpy.eye(3, dtype=numpy.float64) 612 | self.P = numpy.zeros((3, 4), dtype=numpy.float64) 613 | 614 | self.set_alpha(0.0) 615 | 616 | def set_alpha(self, a): 617 | """ 618 | Set the alpha value for the calibrated camera solution. The alpha 619 | value is a zoom, and ranges from 0 (zoomed in, all pixels in 620 | calibrated image are valid) to 1 (zoomed out, all pixels in 621 | original image are in calibrated image). 622 | """ 623 | 624 | # NOTE: Prior to Electric, this code was broken such that we never actually saved the new 625 | # camera matrix. In effect, this enforced P = [K|0] for monocular cameras. 
626 | # TODO: Verify that OpenCV #1199 gets applied (improved GetOptimalNewCameraMatrix) 627 | ncm, _ = cv2.getOptimalNewCameraMatrix(self.intrinsics, self.distortion, self.size, a) 628 | for j in range(3): 629 | for i in range(3): 630 | self.P[j,i] = ncm[j, i] 631 | self.mapx, self.mapy = cv2.initUndistortRectifyMap(self.intrinsics, self.distortion, self.R, ncm, self.size, cv2.CV_32FC1) 632 | 633 | def remap(self, src): 634 | """ 635 | :param src: source image 636 | :type src: :class:`cvMat` 637 | 638 | Apply the post-calibration undistortion to the source image 639 | """ 640 | return cv2.remap(src, self.mapx, self.mapy, cv2.INTER_LINEAR) 641 | 642 | def undistort_points(self, src): 643 | """ 644 | :param src: N source pixel points (u,v) as an Nx2 matrix 645 | :type src: :class:`cvMat` 646 | 647 | Apply the post-calibration undistortion to the source points 648 | """ 649 | 650 | return cv2.undistortPoints(src, self.intrinsics, self.distortion, R = self.R, P = self.P) 651 | 652 | def as_message(self): 653 | """ Return the camera calibration as a CameraInfo message """ 654 | return self.lrmsg(self.distortion, self.intrinsics, self.R, self.P) 655 | 656 | def from_message(self, msg, alpha = 0.0): 657 | """ Initialize the camera calibration from a CameraInfo message """ 658 | 659 | self.size = (msg.width, msg.height) 660 | self.intrinsics = numpy.array(msg.K, dtype=numpy.float64, copy=True).reshape((3, 3)) 661 | self.distortion = numpy.array(msg.D, dtype=numpy.float64, copy=True).reshape((len(msg.D), 1)) 662 | self.R = numpy.array(msg.R, dtype=numpy.float64, copy=True).reshape((3, 3)) 663 | self.P = numpy.array(msg.P, dtype=numpy.float64, copy=True).reshape((3, 4)) 664 | 665 | self.set_alpha(0.0) 666 | 667 | def report(self): 668 | self.lrreport(self.distortion, self.intrinsics, self.R, self.P) 669 | 670 | def ost(self): 671 | return self.lrost(self.name, self.distortion, self.intrinsics, self.R, self.P) 672 | 673 | def yaml(self): 674 | return self.lryaml(self.name, self.distortion, self.intrinsics, self.R, self.P) 675 | 676 | def linear_error_from_image(self, image): 677 | """ 678 | Detect the checkerboard and compute the linear error. 679 | Mainly for use in tests. 680 | """ 681 | _, corners, _, board, _ = self.downsample_and_detect(image) 682 | if corners is None: 683 | return None 684 | 685 | undistorted = self.undistort_points(corners) 686 | return self.linear_error(undistorted, board) 687 | 688 | @staticmethod 689 | def linear_error(corners, b): 690 | 691 | """ 692 | Returns the linear error for a set of corners detected in the unrectified image. 693 | """ 694 | 695 | if corners is None: 696 | return None 697 | 698 | def pt2line(x0, y0, x1, y1, x2, y2): 699 | """ point is (x0, y0), line is (x1, y1, x2, y2) """ 700 | return abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2) 701 | 702 | cc = b.n_cols 703 | cr = b.n_rows 704 | errors = [] 705 | for r in range(cr): 706 | (x1, y1) = corners[(cc * r) + 0, 0] 707 | (x2, y2) = corners[(cc * r) + cc - 1, 0] 708 | for i in range(1, cc - 1): 709 | (x0, y0) = corners[(cc * r) + i, 0] 710 | errors.append(pt2line(x0, y0, x1, y1, x2, y2)) 711 | if errors: 712 | return math.sqrt(sum([e**2 for e in errors]) / len(errors)) 713 | else: 714 | return None 715 | 716 | 717 | def handle_msg(self, msg): 718 | """ 719 | Detects the calibration target and, if found and provides enough new information, 720 | adds it to the sample database. 
721 | 722 | Returns a MonoDrawable message with the display image and progress info. 723 | """ 724 | gray = self.mkgray(msg) 725 | linear_error = -1 726 | 727 | # Get display-image-to-be (scrib) and detection of the calibration target 728 | scrib_mono, corners, downsampled_corners, board, (x_scale, y_scale) = self.downsample_and_detect(gray) 729 | 730 | if self.calibrated: 731 | # Show rectified image 732 | # TODO Pull out downsampling code into function 733 | gray_remap = self.remap(gray) 734 | gray_rect = gray_remap 735 | if x_scale != 1.0 or y_scale != 1.0: 736 | gray_rect = cv2.resize(gray_remap, (scrib_mono.shape[1], scrib_mono.shape[0])) 737 | 738 | scrib = cv2.cvtColor(gray_rect, cv2.COLOR_GRAY2BGR) 739 | 740 | if corners is not None: 741 | # Report linear error 742 | undistorted = self.undistort_points(corners) 743 | linear_error = self.linear_error(undistorted, board) 744 | 745 | # Draw rectified corners 746 | scrib_src = undistorted.copy() 747 | scrib_src[:,:,0] /= x_scale 748 | scrib_src[:,:,1] /= y_scale 749 | cv2.drawChessboardCorners(scrib, (board.n_cols, board.n_rows), scrib_src, True) 750 | 751 | else: 752 | scrib = cv2.cvtColor(scrib_mono, cv2.COLOR_GRAY2BGR) 753 | if corners is not None: 754 | # Draw (potentially downsampled) corners onto display image 755 | cv2.drawChessboardCorners(scrib, (board.n_cols, board.n_rows), downsampled_corners, True) 756 | 757 | # Add sample to database only if it's sufficiently different from any previous sample. 758 | params = self.get_parameters(corners, board, (gray.shape[1], gray.shape[0])) 759 | if self.is_good_sample(params): 760 | self.db.append((params, gray)) 761 | self.good_corners.append((corners, board)) 762 | print(("*** Added sample %d, p_x = %.3f, p_y = %.3f, p_size = %.3f, skew = %.3f" % tuple([len(self.db)] + params))) 763 | 764 | if self.accu_image is None: 765 | self.accu_image = numpy.zeros(scrib.shape, numpy.uint8) 766 | else: 767 | pts = numpy.array([downsampled_corners[0][0], 768 | downsampled_corners[board.n_cols-1][0], 769 | downsampled_corners[-1][0], 770 | downsampled_corners[-(board.n_cols)][0]], numpy.int32) 771 | cv2.polylines(self.accu_image, [pts], isClosed=True, color=(0,0,255), thickness=1, lineType=8) 772 | 773 | rv = MonoDrawable() 774 | rv.scrib = scrib 775 | rv.params = self.compute_goodenough() 776 | rv.linear_error = linear_error 777 | 778 | if self.accu_image is not None: 779 | rv.extra_image = self.accu_image 780 | else: 781 | rv.extra_image = None 782 | 783 | return rv 784 | 785 | def do_calibration(self, dump = False): 786 | if not self.good_corners: 787 | print("**** Collecting corners for all images! 
****") #DEBUG 788 | images = [i for (p, i) in self.db] 789 | self.good_corners = self.collect_corners(images) 790 | # Dump should only occur if user wants it 791 | if dump: 792 | pickle.dump((self.is_mono, self.size, self.good_corners), 793 | open("/tmp/camera_calibration_%08x.pickle" % random.getrandbits(32), "w")) 794 | self.size = (self.db[0][1].shape[1], self.db[0][1].shape[0]) # TODO Needs to be set externally 795 | self.cal_fromcorners(self.good_corners) 796 | self.calibrated = True 797 | # DEBUG 798 | print((self.report())) 799 | print((self.ost())) 800 | 801 | def do_tarfile_save(self, tf): 802 | """ Write images and calibration solution to a tarfile object """ 803 | 804 | def taradd(name, buf): 805 | s = StringIO(buf) 806 | ti = tarfile.TarInfo(name) 807 | ti.size = len(s.getvalue()) 808 | ti.uname = 'calibrator' 809 | ti.mtime = int(time.time()) 810 | tf.addfile(tarinfo=ti, fileobj=s) 811 | 812 | ims = [("left-%04d.png" % i, im) for i,(_, im) in enumerate(self.db)] 813 | for (name, im) in ims: 814 | taradd(name, cv2.imencode(".png", im)[1].tostring()) 815 | 816 | if self.calibrated: 817 | taradd('ost.yaml', self.yaml()) 818 | taradd('ost.txt', self.ost()) 819 | else: 820 | print("Doing none-calibration tarfile save!") 821 | 822 | def do_tarfile_calibration(self, filename): 823 | archive = tarfile.open(filename, 'r') 824 | 825 | limages = [ image_from_archive(archive, f) for f in archive.getnames() if (f.startswith('left') and (f.endswith('.pgm') or f.endswith('png'))) ] 826 | 827 | self.cal(limages) 828 | 829 | # TODO Replicate MonoCalibrator improvements in stereo 830 | class StereoCalibrator(Calibrator): 831 | """ 832 | Calibration class for stereo cameras:: 833 | 834 | limages = [cv2.imread("left%d.png") for i in range(8)] 835 | rimages = [cv2.imread("right%d.png") for i in range(8)] 836 | sc = StereoCalibrator() 837 | sc.cal(limages, rimages) 838 | print sc.as_message() 839 | """ 840 | 841 | is_mono = False 842 | 843 | def __init__(self, *args, **kwargs): 844 | if 'name' not in kwargs: 845 | kwargs['name'] = 'narrow_stereo' 846 | super(StereoCalibrator, self).__init__(*args, **kwargs) 847 | self.l = MonoCalibrator(*args, **kwargs) 848 | self.r = MonoCalibrator(*args, **kwargs) 849 | # Collecting from two cameras in a horizontal stereo rig, can't get 850 | # full X range in the left camera. 851 | self.param_ranges[0] = 0.4 852 | 853 | def cal(self, limages, rimages): 854 | """ 855 | :param limages: source left images containing chessboards 856 | :type limages: list of :class:`cvMat` 857 | :param rimages: source right images containing chessboards 858 | :type rimages: list of :class:`cvMat` 859 | 860 | Find chessboards in images, and runs the OpenCV calibration solver. 861 | """ 862 | goodcorners = self.collect_corners(limages, rimages) 863 | self.size = (limages[0].shape[1], limages[0].shape[0]) 864 | self.l.size = self.size 865 | self.r.size = self.size 866 | self.cal_fromcorners(goodcorners) 867 | self.calibrated = True 868 | 869 | def collect_corners(self, limages, rimages): 870 | """ 871 | For a sequence of left and right images, find pairs of images where both 872 | left and right have a chessboard, and return their corners as a list of pairs. 
873 | """ 874 | # Pick out (corners, board) tuples 875 | lcorners = [ self.downsample_and_detect(i)[1:4:2] for i in limages] 876 | rcorners = [ self.downsample_and_detect(i)[1:4:2] for i in rimages] 877 | good = [(lco, rco, b) for ((lco, b), (rco, br)) in zip( lcorners, rcorners) 878 | if (lco is not None and rco is not None)] 879 | 880 | if len(good) == 0: 881 | raise CalibrationException("No corners found in images!") 882 | return good 883 | 884 | def cal_fromcorners(self, good): 885 | # Perform monocular calibrations 886 | lcorners = [(l, b) for (l, r, b) in good] 887 | rcorners = [(r, b) for (l, r, b) in good] 888 | self.l.cal_fromcorners(lcorners) 889 | self.r.cal_fromcorners(rcorners) 890 | 891 | lipts = [ l for (l, _, _) in good ] 892 | ripts = [ r for (_, r, _) in good ] 893 | boards = [ b for (_, _, b) in good ] 894 | 895 | opts = self.mk_object_points(boards, True) 896 | 897 | flags = cv2.CALIB_FIX_INTRINSIC 898 | 899 | self.T = numpy.zeros((3, 1), dtype=numpy.float64) 900 | self.R = numpy.eye(3, dtype=numpy.float64) 901 | if LooseVersion(cv2.__version__).version[0] == 2: 902 | cv2.stereoCalibrate(opts, lipts, ripts, self.size, 903 | self.l.intrinsics, self.l.distortion, 904 | self.r.intrinsics, self.r.distortion, 905 | self.R, # R 906 | self.T, # T 907 | criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 1, 1e-5), 908 | flags = flags) 909 | else: 910 | cv2.stereoCalibrate(opts, lipts, ripts, 911 | self.l.intrinsics, self.l.distortion, 912 | self.r.intrinsics, self.r.distortion, 913 | self.size, 914 | self.R, # R 915 | self.T, # T 916 | criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 1, 1e-5), 917 | flags = flags) 918 | 919 | self.set_alpha(0.0) 920 | 921 | def set_alpha(self, a): 922 | """ 923 | Set the alpha value for the calibrated camera solution. The 924 | alpha value is a zoom, and ranges from 0 (zoomed in, all pixels 925 | in calibrated image are valid) to 1 (zoomed out, all pixels in 926 | original image are in calibrated image). 927 | """ 928 | 929 | cv2.stereoRectify(self.l.intrinsics, 930 | self.l.distortion, 931 | self.r.intrinsics, 932 | self.r.distortion, 933 | self.size, 934 | self.R, 935 | self.T, 936 | self.l.R, self.r.R, self.l.P, self.r.P, 937 | alpha = a) 938 | 939 | cv2.initUndistortRectifyMap(self.l.intrinsics, self.l.distortion, self.l.R, self.l.P, self.size, cv2.CV_32FC1, 940 | self.l.mapx, self.l.mapy) 941 | cv2.initUndistortRectifyMap(self.r.intrinsics, self.r.distortion, self.r.R, self.r.P, self.size, cv2.CV_32FC1, 942 | self.r.mapx, self.r.mapy) 943 | 944 | def as_message(self): 945 | """ 946 | Return the camera calibration as a pair of CameraInfo messages, for left 947 | and right cameras respectively. 948 | """ 949 | 950 | return (self.lrmsg(self.l.distortion, self.l.intrinsics, self.l.R, self.l.P), 951 | self.lrmsg(self.r.distortion, self.r.intrinsics, self.r.R, self.r.P)) 952 | 953 | def from_message(self, msgs, alpha = 0.0): 954 | """ Initialize the camera calibration from a pair of CameraInfo messages. 
""" 955 | self.size = (msgs[0].width, msgs[0].height) 956 | 957 | self.T = numpy.zeros((3, 1), dtype=numpy.float64) 958 | self.R = numpy.eye(3, dtype=numpy.float64) 959 | 960 | self.l.from_message(msgs[0]) 961 | self.r.from_message(msgs[1]) 962 | # Need to compute self.T and self.R here, using the monocular parameters above 963 | if False: 964 | self.set_alpha(0.0) 965 | 966 | def report(self): 967 | print("\nLeft:") 968 | self.lrreport(self.l.distortion, self.l.intrinsics, self.l.R, self.l.P) 969 | print("\nRight:") 970 | self.lrreport(self.r.distortion, self.r.intrinsics, self.r.R, self.r.P) 971 | print("self.T ", numpy.ravel(self.T).tolist()) 972 | print("self.R ", numpy.ravel(self.R).tolist()) 973 | 974 | def ost(self): 975 | return (self.lrost(self.name + "/left", self.l.distortion, self.l.intrinsics, self.l.R, self.l.P) + 976 | self.lrost(self.name + "/right", self.r.distortion, self.r.intrinsics, self.r.R, self.r.P)) 977 | 978 | def yaml(self, suffix, info): 979 | return self.lryaml(self.name + suffix, info.distortion, info.intrinsics, info.R, info.P) 980 | 981 | # TODO Get rid of "from_images" versions of these, instead have function to get undistorted corners 982 | def epipolar_error_from_images(self, limage, rimage): 983 | """ 984 | Detect the checkerboard in both images and compute the epipolar error. 985 | Mainly for use in tests. 986 | """ 987 | lcorners = self.downsample_and_detect(limage)[1] 988 | rcorners = self.downsample_and_detect(rimage)[1] 989 | if lcorners is None or rcorners is None: 990 | return None 991 | 992 | lundistorted = self.l.undistort_points(lcorners) 993 | rundistorted = self.r.undistort_points(rcorners) 994 | 995 | return self.epipolar_error(lundistorted, rundistorted) 996 | 997 | def epipolar_error(self, lcorners, rcorners): 998 | """ 999 | Compute the epipolar error from two sets of matching undistorted points 1000 | """ 1001 | d = lcorners[:,:,1] - rcorners[:,:,1] 1002 | return numpy.sqrt(numpy.square(d).sum() / d.size) 1003 | 1004 | def chessboard_size_from_images(self, limage, rimage): 1005 | _, lcorners, _, board, _ = self.downsample_and_detect(limage) 1006 | _, rcorners, _, board, _ = self.downsample_and_detect(rimage) 1007 | if lcorners is None or rcorners is None: 1008 | return None 1009 | 1010 | lundistorted = self.l.undistort_points(lcorners) 1011 | rundistorted = self.r.undistort_points(rcorners) 1012 | 1013 | return self.chessboard_size(lundistorted, rundistorted, board) 1014 | 1015 | def chessboard_size(self, lcorners, rcorners, board, msg = None): 1016 | """ 1017 | Compute the square edge length from two sets of matching undistorted points 1018 | given the current calibration. 
1019 | :param msg: a tuple of (left_msg, right_msg) 1020 | """ 1021 | # Project the points to 3d 1022 | cam = image_geometry.StereoCameraModel() 1023 | if msg == None: 1024 | msg = self.as_message() 1025 | cam.fromCameraInfo(*msg) 1026 | disparities = lcorners[:,:,0] - rcorners[:,:,0] 1027 | pt3d = [cam.projectPixelTo3d((lcorners[i,0,0], lcorners[i,0,1]), disparities[i,0]) for i in range(lcorners.shape[0]) ] 1028 | def l2(p0, p1): 1029 | return math.sqrt(sum([(c0 - c1) ** 2 for (c0, c1) in zip(p0, p1)])) 1030 | 1031 | # Compute the length from each horizontal and vertical line, and return the mean 1032 | cc = board.n_cols 1033 | cr = board.n_rows 1034 | lengths = ( 1035 | [l2(pt3d[cc * r + 0], pt3d[cc * r + (cc - 1)]) / (cc - 1) for r in range(cr)] + 1036 | [l2(pt3d[c + 0], pt3d[c + (cc * (cr - 1))]) / (cr - 1) for c in range(cc)]) 1037 | return sum(lengths) / len(lengths) 1038 | 1039 | def handle_msg(self, msg): 1040 | # TODO Various asserts that images have same dimension, same board detected... 1041 | (lmsg, rmsg) = msg 1042 | lgray = self.mkgray(lmsg) 1043 | rgray = self.mkgray(rmsg) 1044 | epierror = -1 1045 | 1046 | # Get display-images-to-be and detections of the calibration target 1047 | lscrib_mono, lcorners, ldownsampled_corners, lboard, (x_scale, y_scale) = self.downsample_and_detect(lgray) 1048 | rscrib_mono, rcorners, rdownsampled_corners, rboard, _ = self.downsample_and_detect(rgray) 1049 | 1050 | if self.calibrated: 1051 | # Show rectified images 1052 | lremap = self.l.remap(lgray) 1053 | rremap = self.r.remap(rgray) 1054 | lrect = lremap 1055 | rrect = rremap 1056 | if x_scale != 1.0 or y_scale != 1.0: 1057 | lrect = cv2.resize(lremap, (lscrib_mono.shape[1], lscrib_mono.shape[0])) 1058 | rrect = cv2.resize(rremap, (rscrib_mono.shape[1], rscrib_mono.shape[0])) 1059 | 1060 | lscrib = cv2.cvtColor(lrect, cv2.COLOR_GRAY2BGR) 1061 | rscrib = cv2.cvtColor(rrect, cv2.COLOR_GRAY2BGR) 1062 | 1063 | # Draw rectified corners 1064 | if lcorners is not None: 1065 | lundistorted = self.l.undistort_points(lcorners) 1066 | scrib_src = lundistorted.copy() 1067 | scrib_src[:,:,0] /= x_scale 1068 | scrib_src[:,:,1] /= y_scale 1069 | cv2.drawChessboardCorners(lscrib, (lboard.n_cols, lboard.n_rows), scrib_src, True) 1070 | 1071 | if rcorners is not None: 1072 | rundistorted = self.r.undistort_points(rcorners) 1073 | scrib_src = rundistorted.copy() 1074 | scrib_src[:,:,0] /= x_scale 1075 | scrib_src[:,:,1] /= y_scale 1076 | cv2.drawChessboardCorners(rscrib, (rboard.n_cols, rboard.n_rows), scrib_src, True) 1077 | 1078 | # Report epipolar error 1079 | if lcorners is not None and rcorners is not None: 1080 | epierror = self.epipolar_error(lundistorted, rundistorted) 1081 | 1082 | else: 1083 | lscrib = cv2.cvtColor(lscrib_mono, cv2.COLOR_GRAY2BGR) 1084 | rscrib = cv2.cvtColor(rscrib_mono, cv2.COLOR_GRAY2BGR) 1085 | # Draw any detected chessboards onto display (downsampled) images 1086 | if lcorners is not None: 1087 | cv2.drawChessboardCorners(lscrib, (lboard.n_cols, lboard.n_rows), 1088 | ldownsampled_corners, True) 1089 | if rcorners is not None: 1090 | cv2.drawChessboardCorners(rscrib, (rboard.n_cols, rboard.n_rows), 1091 | rdownsampled_corners, True) 1092 | 1093 | # Add sample to database only if it's sufficiently different from any previous sample 1094 | if lcorners is not None and rcorners is not None: 1095 | params = self.get_parameters(lcorners, lboard, (lgray.shape[1], lgray.shape[0])) 1096 | if self.is_good_sample(params): 1097 | self.db.append( (params, lgray, rgray) ) 1098 | 
self.good_corners.append( (lcorners, rcorners, lboard) ) 1099 | print(("*** Added sample %d, p_x = %.3f, p_y = %.3f, p_size = %.3f, skew = %.3f" % tuple([len(self.db)] + params))) 1100 | 1101 | if self.accu_image is None: 1102 | self.accu_image = numpy.zeros(lgray.shape, numpy.uint8) 1103 | else: 1104 | pts = numpy.array([ ldownsampled_corners[0][0], 1105 | ldownsampled_corners[lboard.n_cols-1][0], 1106 | ldownsampled_corners[-1][0], 1107 | ldownsampled_corners[-(lboard.n_cols)][0]], numpy.int32) 1108 | cv2.polylines(self.accu_image, [pts], isClosed=True, color=(0,0,255), thickness=1, lineType=8) 1109 | 1110 | rv = StereoDrawable() 1111 | rv.lscrib = lscrib 1112 | rv.rscrib = rscrib 1113 | rv.params = self.compute_goodenough() 1114 | rv.epierror = epierror 1115 | 1116 | if self.accu_image is not None: 1117 | rv.extra_image = self.accu_image 1118 | else: 1119 | rv.extra_image = None 1120 | 1121 | return rv 1122 | 1123 | def do_calibration(self, dump = False): 1124 | # TODO MonoCalibrator collects corners if needed here 1125 | # Dump should only occur if user wants it 1126 | if dump: 1127 | pickle.dump((self.is_mono, self.size, self.good_corners), 1128 | open("/tmp/camera_calibration_%08x.pickle" % random.getrandbits(32), "w")) 1129 | self.size = (self.db[0][1].shape[1], self.db[0][1].shape[0]) # TODO Needs to be set externally 1130 | self.l.size = self.size 1131 | self.r.size = self.size 1132 | self.cal_fromcorners(self.good_corners) 1133 | self.calibrated = True 1134 | # DEBUG 1135 | print((self.report())) 1136 | print((self.ost())) 1137 | 1138 | def do_tarfile_save(self, tf): 1139 | """ Write images and calibration solution to a tarfile object """ 1140 | ims = ([("left-%04d.png" % i, im) for i,(_, im, _) in enumerate(self.db)] + 1141 | [("right-%04d.png" % i, im) for i,(_, _, im) in enumerate(self.db)]) 1142 | 1143 | def taradd(name, buf): 1144 | s = StringIO(buf) 1145 | ti = tarfile.TarInfo(name) 1146 | ti.size = len(s.getvalue()) 1147 | ti.uname = 'calibrator' 1148 | ti.mtime = int(time.time()) 1149 | tf.addfile(tarinfo=ti, fileobj=s) 1150 | 1151 | for (name, im) in ims: 1152 | taradd(name, cv2.imencode(".png", im)[1].tostring()) 1153 | taradd('left.yaml', self.yaml("/left", self.l)) 1154 | taradd('right.yaml', self.yaml("/right", self.r)) 1155 | taradd('ost.txt', self.ost()) 1156 | 1157 | def do_tarfile_calibration(self, filename): 1158 | archive = tarfile.open(filename, 'r') 1159 | limages = [ image_from_archive(archive, f) for f in archive.getnames() if (f.startswith('left') and (f.endswith('pgm') or f.endswith('png'))) ] 1160 | rimages = [ image_from_archive(archive, f) for f in archive.getnames() if (f.startswith('right') and (f.endswith('pgm') or f.endswith('png'))) ] 1161 | 1162 | if not len(limages) == len(rimages): 1163 | raise CalibrationException("Left, right images don't match. %d left images, %d right" % (len(limages), len(rimages))) 1164 | 1165 | ##\todo Check that the filenames match and stuff 1166 | 1167 | self.cal(limages, rimages) 1168 | -------------------------------------------------------------------------------- /scripts/cameracalibrator.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Software License Agreement (BSD License) 4 | # 5 | # Copyright (c) 2009, Willow Garage, Inc. 6 | # All rights reserved. 
7 | # 8 | # Redistribution and use in source and binary forms, with or without 9 | # modification, are permitted provided that the following conditions 10 | # are met: 11 | # 12 | # * Redistributions of source code must retain the above copyright 13 | # notice, this list of conditions and the following disclaimer. 14 | # * Redistributions in binary form must reproduce the above 15 | # copyright notice, this list of conditions and the following 16 | # disclaimer in the documentation and/or other materials provided 17 | # with the distribution. 18 | # * Neither the name of the Willow Garage nor the names of its 19 | # contributors may be used to endorse or promote products derived 20 | # from this software without specific prior written permission. 21 | # 22 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 23 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 24 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS 25 | # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE 26 | # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, 27 | # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, 28 | # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 | # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 30 | # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 31 | # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN 32 | # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 33 | # POSSIBILITY OF SUCH DAMAGE. 34 | 35 | import rospy 36 | import sensor_msgs.msg 37 | import sensor_msgs.srv 38 | import message_filters 39 | from message_filters import ApproximateTimeSynchronizer 40 | 41 | import os 42 | from collections import deque 43 | import threading 44 | import functools 45 | import time 46 | 47 | import cv2 48 | import numpy 49 | 50 | from camera_calibration_frontend.calibrator import MonoCalibrator, StereoCalibrator, ChessboardInfo, Patterns 51 | from std_msgs.msg import String 52 | from std_srvs.srv import Empty 53 | 54 | class DisplayThread(threading.Thread): 55 | """ 56 | Thread that displays the current images 57 | It is its own thread so that all display can be done 58 | in one thread to overcome imshow limitations and 59 | https://github.com/ros-perception/image_pipeline/issues/85 60 | """ 61 | def __init__(self, queue, opencv_calibration_node, extra_queue=None): 62 | threading.Thread.__init__(self) 63 | self.queue = queue 64 | self.opencv_calibration_node = opencv_calibration_node 65 | self.extra_queue = extra_queue 66 | 67 | def run(self): 68 | cv2.namedWindow("display", cv2.WINDOW_NORMAL) 69 | cv2.setMouseCallback("display", self.opencv_calibration_node.on_mouse) 70 | cv2.createTrackbar("scale", "display", 0, 100, self.opencv_calibration_node.on_scale) 71 | 72 | if self.extra_queue: 73 | cv2.namedWindow("extra", cv2.WINDOW_NORMAL) 74 | 75 | while True: 76 | # wait for an image (could happen at the very beginning when the queue is still empty) 77 | while len(self.queue) == 0: 78 | time.sleep(0.1) 79 | im = self.queue[0] 80 | cv2.imshow("display", im) 81 | k = cv2.waitKey(6) & 0xFF 82 | if k in [27, ord('q')]: 83 | rospy.signal_shutdown('Quit') 84 | elif k == ord('s'): 85 | self.opencv_calibration_node.screendump(im) 86 | if self.extra_queue: 87 | if len(self.extra_queue): 88 | extra_img = self.extra_queue[0] 89 | cv2.imshow("extra", extra_img) 90 | 91 | class ConsumerThread(threading.Thread): 92 | def 
__init__(self, queue, function):
93 |         threading.Thread.__init__(self)
94 |         self.queue = queue
95 |         self.function = function
96 |
97 |     def run(self):
98 |         while True:
99 |             # wait for an image (could happen at the very beginning when the queue is still empty)
100 |             while len(self.queue) == 0:
101 |                 time.sleep(0.1)
102 |             self.function(self.queue[0])
103 |
104 |
105 | class CalibrationNode:
106 |     def __init__(self, boards, service_check = True, synchronizer = message_filters.TimeSynchronizer, flags = 0,
107 |                  pattern=Patterns.Chessboard, camera_name='', checkerboard_flags = 0):
108 |         if service_check:
109 |             # assume any non-default service names have been set. Wait for the service to become ready
110 |             for svcname in ["camera", "left_camera", "right_camera"]:
111 |                 remapped = rospy.remap_name(svcname)
112 |                 if remapped != svcname:
113 |                     fullservicename = "%s/set_camera_info" % remapped
114 |                     print("Waiting for service", fullservicename, "...")
115 |                     try:
116 |                         rospy.wait_for_service(fullservicename, 5)
117 |                         print("OK")
118 |                     except rospy.ROSException:
119 |                         print("Service not found")
120 |                         rospy.signal_shutdown('Quit')
121 |
122 |         self._boards = boards
123 |         self._calib_flags = flags
124 |         self._checkerboard_flags = checkerboard_flags
125 |         self._pattern = pattern
126 |         self._camera_name = camera_name
127 |         lsub = message_filters.Subscriber('left', sensor_msgs.msg.Image)
128 |         rsub = message_filters.Subscriber('right', sensor_msgs.msg.Image)
129 |         ts = synchronizer([lsub, rsub], 4)
130 |         ts.registerCallback(self.queue_stereo)
131 |
132 |         msub = message_filters.Subscriber('image', sensor_msgs.msg.Image)
133 |         msub.registerCallback(self.queue_monocular)
134 |
135 |         self.set_camera_info_service = rospy.ServiceProxy("%s/set_camera_info" % rospy.remap_name("camera"),
136 |                                                           sensor_msgs.srv.SetCameraInfo)
137 |         self.set_left_camera_info_service = rospy.ServiceProxy("%s/set_camera_info" % rospy.remap_name("left_camera"),
138 |                                                                sensor_msgs.srv.SetCameraInfo)
139 |         self.set_right_camera_info_service = rospy.ServiceProxy("%s/set_camera_info" % rospy.remap_name("right_camera"),
140 |                                                                 sensor_msgs.srv.SetCameraInfo)
141 |
142 |         self.q_mono = deque([], 1)
143 |         self.q_stereo = deque([], 1)
144 |
145 |         self.c = None
146 |
147 |         mth = ConsumerThread(self.q_mono, self.handle_monocular)
148 |         mth.setDaemon(True)
149 |         mth.start()
150 |
151 |         sth = ConsumerThread(self.q_stereo, self.handle_stereo)
152 |         sth.setDaemon(True)
153 |         sth.start()
154 |
155 |     def redraw_stereo(self, *args):
156 |         pass
157 |     def redraw_monocular(self, *args):
158 |         pass
159 |
160 |     def queue_monocular(self, msg):
161 |         self.q_mono.append(msg)
162 |
163 |     def queue_stereo(self, lmsg, rmsg):
164 |         self.q_stereo.append((lmsg, rmsg))
165 |
166 |     def handle_monocular(self, msg):
167 |         if self.c == None:
168 |             if self._camera_name:
169 |                 self.c = MonoCalibrator(self._boards, self._calib_flags, self._pattern, name=self._camera_name,
170 |                                         checkerboard_flags=self._checkerboard_flags)
171 |             else:
172 |                 self.c = MonoCalibrator(self._boards, self._calib_flags, self._pattern,
173 |                                         checkerboard_flags=self._checkerboard_flags)
174 |
175 |         # This should just call the MonoCalibrator
176 |         drawable = self.c.handle_msg(msg)
177 |         self.displaywidth = drawable.scrib.shape[1]
178 |         self.redraw_monocular(drawable)
179 |
180 |     def handle_stereo(self, msg):
181 |         if self.c == None:
182 |             if self._camera_name:
183 |                 self.c = StereoCalibrator(self._boards, self._calib_flags, self._pattern, name=self._camera_name,
184 |
checkerboard_flags=self._checkerboard_flags) 185 | else: 186 | self.c = StereoCalibrator(self._boards, self._calib_flags, self._pattern, 187 | checkerboard_flags=self._checkerboard_flags) 188 | 189 | drawable = self.c.handle_msg(msg) 190 | self.displaywidth = drawable.lscrib.shape[1] + drawable.rscrib.shape[1] 191 | self.redraw_stereo(drawable) 192 | 193 | 194 | def check_set_camera_info(self, response): 195 | if response.success: 196 | return True 197 | 198 | for i in range(10): 199 | print("!" * 80) 200 | print() 201 | print("Attempt to set camera info failed: " + response.status_message) 202 | print() 203 | for i in range(10): 204 | print("!" * 80) 205 | print() 206 | rospy.logerr('Unable to set camera info for calibration. Failure message: %s' % response.status_message) 207 | return False 208 | 209 | def do_upload(self): 210 | self.c.report() 211 | print(self.c.ost()) 212 | info = self.c.as_message() 213 | 214 | rv = True 215 | if self.c.is_mono: 216 | response = self.set_camera_info_service(info) 217 | rv = self.check_set_camera_info(response) 218 | else: 219 | response = self.set_left_camera_info_service(info[0]) 220 | rv = rv and self.check_set_camera_info(response) 221 | response = self.set_right_camera_info_service(info[1]) 222 | rv = rv and self.check_set_camera_info(response) 223 | return rv 224 | 225 | 226 | class OpenCVCalibrationNode(CalibrationNode): 227 | """ Calibration node with an OpenCV Gui """ 228 | FONT_FACE = cv2.FONT_HERSHEY_SIMPLEX 229 | FONT_SCALE = 0.6 230 | FONT_THICKNESS = 2 231 | 232 | def __init__(self, *args, **kwargs): 233 | 234 | CalibrationNode.__init__(self, *args, **kwargs) 235 | 236 | self.queue_display = deque([], 1) 237 | self.queue_extra_display = deque([], 1) 238 | self.display_thread = DisplayThread(self.queue_display, self, self.queue_extra_display) 239 | self.display_thread.setDaemon(True) 240 | self.display_thread.start() 241 | 242 | @classmethod 243 | def putText(cls, img, text, org, color = (0,0,0)): 244 | cv2.putText(img, text, org, cls.FONT_FACE, cls.FONT_SCALE, color, thickness = cls.FONT_THICKNESS) 245 | 246 | @classmethod 247 | def getTextSize(cls, text): 248 | return cv2.getTextSize(text, cls.FONT_FACE, cls.FONT_SCALE, cls.FONT_THICKNESS)[0] 249 | 250 | def on_mouse(self, event, x, y, flags, param): 251 | if event == cv2.EVENT_LBUTTONDOWN and self.displaywidth < x: 252 | if self.c.goodenough: 253 | if 180 <= y < 280: 254 | self.c.do_calibration() 255 | 256 | if 280 <= y < 380: 257 | self.c.do_save() 258 | 259 | if self.c.calibrated: 260 | if 380 <= y < 480: 261 | # Only shut down if we set camera info correctly, #3993 262 | if self.do_upload(): 263 | rospy.signal_shutdown('Quit') 264 | 265 | def on_scale(self, scalevalue): 266 | if self.c.calibrated: 267 | self.c.set_alpha(scalevalue / 100.0) 268 | 269 | def button(self, dst, label, enable): 270 | dst.fill(255) 271 | size = (dst.shape[1], dst.shape[0]) 272 | if enable: 273 | color = (155, 155, 80) 274 | else: 275 | color = (224, 224, 224) 276 | cv2.circle(dst, (size[0] / 2, size[1] / 2), min(size) / 2, color, -1) 277 | (w, h) = self.getTextSize(label) 278 | self.putText(dst, label, ((size[0] - w) / 2, (size[1] + h) / 2), (255,255,255)) 279 | 280 | def buttons(self, display): 281 | x = self.displaywidth 282 | self.button(display[180:280,x:x+100], "CALIBRATE", self.c.goodenough) 283 | # self.button(display[280:380,x:x+100], "SAVE", self.c.calibrated) 284 | self.button(display[280:380,x:x+100], "SAVE", True) 285 | self.button(display[380:480,x:x+100], "COMMIT", self.c.calibrated) 286 | 287 
| def y(self, i): 288 | """Set up right-size images""" 289 | return 30 + 40 * i 290 | 291 | def screendump(self, im): 292 | i = 0 293 | while os.access("/tmp/dump%d.png" % i, os.R_OK): 294 | i += 1 295 | cv2.imwrite("/tmp/dump%d.png" % i, im) 296 | 297 | def redraw_monocular(self, drawable): 298 | height = drawable.scrib.shape[0] 299 | width = drawable.scrib.shape[1] 300 | 301 | display = numpy.zeros((max(480, height), width + 100, 3), dtype=numpy.uint8) 302 | display[0:height, 0:width,:] = drawable.scrib 303 | display[0:height, width:width+100,:].fill(255) 304 | 305 | 306 | self.buttons(display) 307 | if not self.c.calibrated: 308 | if drawable.params: 309 | for i, (label, lo, hi, progress) in enumerate(drawable.params): 310 | (w,_) = self.getTextSize(label) 311 | self.putText(display, label, (width + (100 - w) / 2, self.y(i))) 312 | color = (0,255,0) 313 | if progress < 1.0: 314 | color = (0, int(progress*255.), 255) 315 | cv2.line(display, 316 | (int(width + lo * 100), self.y(i) + 20), 317 | (int(width + hi * 100), self.y(i) + 20), 318 | color, 4) 319 | 320 | else: 321 | self.putText(display, "lin.", (width, self.y(0))) 322 | linerror = drawable.linear_error 323 | if linerror < 0: 324 | msg = "?" 325 | else: 326 | msg = "%.2f" % linerror 327 | #print "linear", linerror 328 | self.putText(display, msg, (width, self.y(1))) 329 | 330 | self.queue_display.append(display) 331 | 332 | if drawable.extra_image is not None: 333 | self.queue_extra_display.append(drawable.extra_image) 334 | 335 | 336 | 337 | 338 | def redraw_stereo(self, drawable): 339 | height = drawable.lscrib.shape[0] 340 | width = drawable.lscrib.shape[1] 341 | 342 | display = numpy.zeros((max(480, height), 2 * width + 100, 3), dtype=numpy.uint8) 343 | display[0:height, 0:width,:] = drawable.lscrib 344 | display[0:height, width:2*width,:] = drawable.rscrib 345 | display[0:height, 2*width:2*width+100,:].fill(255) 346 | 347 | self.buttons(display) 348 | 349 | if not self.c.calibrated: 350 | if drawable.params: 351 | for i, (label, lo, hi, progress) in enumerate(drawable.params): 352 | (w,_) = self.getTextSize(label) 353 | self.putText(display, label, (2 * width + (100 - w) / 2, self.y(i))) 354 | color = (0,255,0) 355 | if progress < 1.0: 356 | color = (0, int(progress*255.), 255) 357 | cv2.line(display, 358 | (int(2 * width + lo * 100), self.y(i) + 20), 359 | (int(2 * width + hi * 100), self.y(i) + 20), 360 | color, 4) 361 | 362 | else: 363 | self.putText(display, "epi.", (2 * width, self.y(0))) 364 | if drawable.epierror == -1: 365 | msg = "?" 366 | else: 367 | msg = "%.2f" % drawable.epierror 368 | self.putText(display, msg, (2 * width, self.y(1))) 369 | # TODO dim is never set anywhere. Supposed to be observed chessboard size? 
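# (An assumption, not from the original code: cameracheck.py below derives a
#  metric board size via StereoCalibrator.chessboard_size() and prints it as
#  "dimension: %f m", so drawable.dim was presumably meant to carry that same
#  measured chessboard dimension, in meters, for display here.)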
370 | if drawable.dim != -1: 371 | self.putText(display, "dim", (2 * width, self.y(2))) 372 | self.putText(display, "%.3f" % drawable.dim, (2 * width, self.y(3))) 373 | 374 | self.queue_display.append(display) 375 | 376 | if drawable.extra_image is not None: 377 | self.queue_extra_display.append(drawable.extra_image) 378 | 379 | 380 | def main(): 381 | from optparse import OptionParser, OptionGroup 382 | parser = OptionParser("%prog --size SIZE1 --square SQUARE1 [ --size SIZE2 --square SQUARE2 ]", 383 | description=None) 384 | parser.add_option("-c", "--camera_name", 385 | type="string", default='narrow_stereo', 386 | help="name of the camera to appear in the calibration file") 387 | group = OptionGroup(parser, "Chessboard Options", 388 | "You must specify one or more chessboards as pairs of --size and --square options.") 389 | group.add_option("-p", "--pattern", 390 | type="string", default="chessboard", 391 | help="calibration pattern to detect - 'chessboard', 'circles', 'acircles'") 392 | group.add_option("-s", "--size", 393 | action="append", default=[], 394 | help="chessboard size as NxM, counting interior corners (e.g. a standard chessboard is 7x7)") 395 | group.add_option("-q", "--square", 396 | action="append", default=[], 397 | help="chessboard square size in meters") 398 | parser.add_option_group(group) 399 | group = OptionGroup(parser, "ROS Communication Options") 400 | group.add_option("--approximate", 401 | type="float", default=0.0, 402 | help="allow specified slop (in seconds) when pairing images from unsynchronized stereo cameras") 403 | group.add_option("--no-service-check", 404 | action="store_false", dest="service_check", default=True, 405 | help="disable check for set_camera_info services at startup") 406 | parser.add_option_group(group) 407 | group = OptionGroup(parser, "Calibration Optimizer Options") 408 | group.add_option("--fix-principal-point", 409 | action="store_true", default=False, 410 | help="fix the principal point at the image center") 411 | group.add_option("--fix-aspect-ratio", 412 | action="store_true", default=False, 413 | help="enforce focal lengths (fx, fy) are equal") 414 | group.add_option("--zero-tangent-dist", 415 | action="store_true", default=False, 416 | help="set tangential distortion coefficients (p1, p2) to zero") 417 | group.add_option("-k", "--k-coefficients", 418 | type="int", default=2, metavar="NUM_COEFFS", 419 | help="number of radial distortion coefficients to use (up to 6, default %default)") 420 | group.add_option("--disable_calib_cb_fast_check", action='store_true', default=False, 421 | help="uses the CALIB_CB_FAST_CHECK flag for findChessboardCorners") 422 | parser.add_option_group(group) 423 | options, args = parser.parse_args() 424 | 425 | if len(options.size) != len(options.square): 426 | parser.error("Number of size and square inputs must be the same!") 427 | 428 | if not options.square: 429 | options.square.append("0.108") 430 | options.size.append("8x6") 431 | 432 | boards = [] 433 | for (sz, sq) in zip(options.size, options.square): 434 | size = tuple([int(c) for c in sz.split('x')]) 435 | boards.append(ChessboardInfo(size[0], size[1], float(sq))) 436 | 437 | if options.approximate == 0.0: 438 | sync = message_filters.TimeSynchronizer 439 | else: 440 | sync = functools.partial(ApproximateTimeSynchronizer, slop=options.approximate) 441 | 442 | num_ks = options.k_coefficients 443 | 444 | calib_flags = 0 445 | if options.fix_principal_point: 446 | calib_flags |= cv2.CALIB_FIX_PRINCIPAL_POINT 447 | if options.fix_aspect_ratio: 
448 | calib_flags |= cv2.CALIB_FIX_ASPECT_RATIO 449 | if options.zero_tangent_dist: 450 | calib_flags |= cv2.CALIB_ZERO_TANGENT_DIST 451 | if (num_ks > 3): 452 | calib_flags |= cv2.CALIB_RATIONAL_MODEL 453 | if (num_ks < 6): 454 | calib_flags |= cv2.CALIB_FIX_K6 455 | if (num_ks < 5): 456 | calib_flags |= cv2.CALIB_FIX_K5 457 | if (num_ks < 4): 458 | calib_flags |= cv2.CALIB_FIX_K4 459 | if (num_ks < 3): 460 | calib_flags |= cv2.CALIB_FIX_K3 461 | if (num_ks < 2): 462 | calib_flags |= cv2.CALIB_FIX_K2 463 | if (num_ks < 1): 464 | calib_flags |= cv2.CALIB_FIX_K1 465 | 466 | pattern = Patterns.Chessboard 467 | if options.pattern == 'circles': 468 | pattern = Patterns.Circles 469 | elif options.pattern == 'acircles': 470 | pattern = Patterns.ACircles 471 | elif options.pattern != 'chessboard': 472 | print('Unrecognized pattern %s, defaulting to chessboard' % options.pattern) 473 | 474 | if options.disable_calib_cb_fast_check: 475 | checkerboard_flags = 0 476 | else: 477 | checkerboard_flags = cv2.CALIB_CB_FAST_CHECK 478 | 479 | rospy.init_node('cameracalibrator') 480 | node = OpenCVCalibrationNode(boards, options.service_check, sync, calib_flags, pattern, options.camera_name, 481 | checkerboard_flags=checkerboard_flags) 482 | rospy.spin() 483 | 484 | if __name__ == "__main__": 485 | try: 486 | main() 487 | except Exception as e: 488 | import traceback 489 | traceback.print_exc() 490 | -------------------------------------------------------------------------------- /scripts/cameracheck.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Software License Agreement (BSD License) 4 | # 5 | # Copyright (c) 2009, Willow Garage, Inc. 6 | # All rights reserved. 7 | # 8 | # Redistribution and use in source and binary forms, with or without 9 | # modification, are permitted provided that the following conditions 10 | # are met: 11 | # 12 | # * Redistributions of source code must retain the above copyright 13 | # notice, this list of conditions and the following disclaimer. 14 | # * Redistributions in binary form must reproduce the above 15 | # copyright notice, this list of conditions and the following 16 | # disclaimer in the documentation and/or other materials provided 17 | # with the distribution. 18 | # * Neither the name of the Willow Garage nor the names of its 19 | # contributors may be used to endorse or promote products derived 20 | # from this software without specific prior written permission. 21 | # 22 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 23 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 24 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS 25 | # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE 26 | # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, 27 | # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, 28 | # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 | # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 30 | # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 31 | # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN 32 | # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 33 | # POSSIBILITY OF SUCH DAMAGE. 
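The `--approximate` option in the calibrator's `main()` above decides between exact and approximate stereo pairing: a non-zero slop wraps `ApproximateTimeSynchronizer` with `functools.partial` so the result can be constructed exactly like `message_filters.TimeSynchronizer`. A minimal sketch of that selection, assuming an already-initialized ROS node; the `make_sync` helper and the topic names are illustrative, not part of this package:

```python
import functools

import message_filters
from message_filters import ApproximateTimeSynchronizer


def make_sync(slop):
    # Exact-timestamp pairing when no slop is given; otherwise pre-bind the
    # allowed slop so the factory takes the same (subscribers, queue_size)
    # arguments as TimeSynchronizer.
    if slop == 0.0:
        return message_filters.TimeSynchronizer
    return functools.partial(ApproximateTimeSynchronizer, slop=slop)


# Usage inside an initialized ROS node:
# lsub = message_filters.Subscriber('left', sensor_msgs.msg.Image)
# rsub = message_filters.Subscriber('right', sensor_msgs.msg.Image)
# ts = make_sync(0.05)([lsub, rsub], 4)
# ts.registerCallback(lambda lmsg, rmsg: None)
```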
34 | 35 | import rospy 36 | import sensor_msgs.msg 37 | import sensor_msgs.srv 38 | import cv_bridge 39 | 40 | import math 41 | import os 42 | import sys 43 | import operator 44 | import time 45 | try: 46 | from queue import Queue 47 | except ImportError: 48 | from Queue import Queue 49 | import threading 50 | import tarfile 51 | 52 | import cv2 53 | 54 | import message_filters 55 | import image_geometry 56 | import numpy 57 | 58 | from camera_calibration.calibrator import MonoCalibrator, StereoCalibrator, ChessboardInfo 59 | 60 | def mean(seq): 61 | return sum(seq) / len(seq) 62 | 63 | def lmin(seq1, seq2): 64 | """ Pairwise minimum of two sequences """ 65 | return [min(a, b) for (a, b) in zip(seq1, seq2)] 66 | 67 | def lmax(seq1, seq2): 68 | """ Pairwise maximum of two sequences """ 69 | return [max(a, b) for (a, b) in zip(seq1, seq2)] 70 | 71 | class ConsumerThread(threading.Thread): 72 | def __init__(self, queue, function): 73 | threading.Thread.__init__(self) 74 | self.queue = queue 75 | self.function = function 76 | 77 | def run(self): 78 | while not rospy.is_shutdown(): 79 | while not rospy.is_shutdown(): 80 | m = self.queue.get() 81 | if self.queue.empty(): 82 | break 83 | self.function(m) 84 | 85 | class CameraCheckerNode: 86 | 87 | def __init__(self, chess_size, dim): 88 | self.board = ChessboardInfo() 89 | self.board.n_cols = chess_size[0] 90 | self.board.n_rows = chess_size[1] 91 | self.board.dim = dim 92 | 93 | image_topic = rospy.resolve_name("monocular") + "/image_rect" 94 | camera_topic = rospy.resolve_name("monocular") + "/camera_info" 95 | 96 | tosync_mono = [ 97 | (image_topic, sensor_msgs.msg.Image), 98 | (camera_topic, sensor_msgs.msg.CameraInfo), 99 | ] 100 | 101 | tsm = message_filters.TimeSynchronizer([message_filters.Subscriber(topic, type) for (topic, type) in tosync_mono], 10) 102 | tsm.registerCallback(self.queue_monocular) 103 | 104 | left_topic = rospy.resolve_name("stereo") + "/left/image_rect" 105 | left_camera_topic = rospy.resolve_name("stereo") + "/left/camera_info" 106 | right_topic = rospy.resolve_name("stereo") + "/right/image_rect" 107 | right_camera_topic = rospy.resolve_name("stereo") + "/right/camera_info" 108 | 109 | tosync_stereo = [ 110 | (left_topic, sensor_msgs.msg.Image), 111 | (left_camera_topic, sensor_msgs.msg.CameraInfo), 112 | (right_topic, sensor_msgs.msg.Image), 113 | (right_camera_topic, sensor_msgs.msg.CameraInfo) 114 | ] 115 | 116 | tss = message_filters.TimeSynchronizer([message_filters.Subscriber(topic, type) for (topic, type) in tosync_stereo], 10) 117 | tss.registerCallback(self.queue_stereo) 118 | 119 | self.br = cv_bridge.CvBridge() 120 | 121 | self.q_mono = Queue() 122 | self.q_stereo = Queue() 123 | 124 | mth = ConsumerThread(self.q_mono, self.handle_monocular) 125 | mth.setDaemon(True) 126 | mth.start() 127 | 128 | sth = ConsumerThread(self.q_stereo, self.handle_stereo) 129 | sth.setDaemon(True) 130 | sth.start() 131 | 132 | self.mc = MonoCalibrator([self.board]) 133 | self.sc = StereoCalibrator([self.board]) 134 | 135 | def queue_monocular(self, msg, cmsg): 136 | self.q_mono.put((msg, cmsg)) 137 | 138 | def queue_stereo(self, lmsg, lcmsg, rmsg, rcmsg): 139 | self.q_stereo.put((lmsg, lcmsg, rmsg, rcmsg)) 140 | 141 | def mkgray(self, msg): 142 | return self.mc.mkgray(msg) 143 | 144 | def image_corners(self, im): 145 | (ok, corners, b) = self.mc.get_corners(im) 146 | if ok: 147 | return corners 148 | else: 149 | return None 150 | 151 | def handle_monocular(self, msg): 152 | 153 | (image, camera) = msg 154 | gray = 
self.mkgray(image) 155 | C = self.image_corners(gray) 156 | if C is not None: 157 | linearity_rms = self.mc.linear_error(C, self.board) 158 | 159 | # Add in reprojection check 160 | image_points = C 161 | object_points = self.mc.mk_object_points([self.board], use_board_size=True)[0] 162 | dist_coeffs = numpy.zeros((4, 1)) 163 | camera_matrix = numpy.array( [ [ camera.P[0], camera.P[1], camera.P[2] ], 164 | [ camera.P[4], camera.P[5], camera.P[6] ], 165 | [ camera.P[8], camera.P[9], camera.P[10] ] ] ) 166 | ok, rot, trans = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs) 167 | # Convert rotation into a 3x3 Rotation Matrix 168 | rot3x3, _ = cv2.Rodrigues(rot) 169 | # Reproject model points into image 170 | object_points_world = numpy.asmatrix(rot3x3) * numpy.asmatrix(object_points.squeeze().T) + numpy.asmatrix(trans) 171 | reprojected_h = camera_matrix * object_points_world 172 | reprojected = (reprojected_h[0:2, :] / reprojected_h[2, :]) 173 | reprojection_errors = image_points.squeeze().T - reprojected 174 | 175 | reprojection_rms = numpy.sqrt(numpy.sum(numpy.array(reprojection_errors) ** 2) / numpy.product(reprojection_errors.shape)) 176 | 177 | # Print the results 178 | print("Linearity RMS Error: %.3f Pixels Reprojection RMS Error: %.3f Pixels" % (linearity_rms, reprojection_rms)) 179 | else: 180 | print('no chessboard') 181 | 182 | def handle_stereo(self, msg): 183 | 184 | (lmsg, lcmsg, rmsg, rcmsg) = msg 185 | lgray = self.mkgray(lmsg) 186 | rgray = self.mkgray(rmsg) 187 | 188 | L = self.image_corners(lgray) 189 | R = self.image_corners(rgray) 190 | if L and R: 191 | epipolar = self.sc.epipolar_error(L, R) 192 | 193 | dimension = self.sc.chessboard_size(L, R, [self.board], msg=(lcmsg, rcmsg)) 194 | 195 | print("epipolar error: %f pixels dimension: %f m" % (epipolar, dimension)) 196 | else: 197 | print("no chessboard") 198 | 199 | def main(): 200 | from optparse import OptionParser 201 | rospy.init_node('cameracheck') 202 | parser = OptionParser() 203 | parser.add_option("-s", "--size", default="8x6", help="specify chessboard size as nxm [default: %default]") 204 | parser.add_option("-q", "--square", default=".108", help="specify chessboard square size in meters [default: %default]") 205 | options, args = parser.parse_args() 206 | size = tuple([int(c) for c in options.size.split('x')]) 207 | dim = float(options.square) 208 | CameraCheckerNode(size, dim) 209 | rospy.spin() 210 | 211 | if __name__ == "__main__": 212 | main() 213 | -------------------------------------------------------------------------------- /scripts/kalibr_camera_focus.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | print "importing libraries" 3 | import rospy 4 | import cv2 5 | import cv 6 | import numpy as np 7 | from sensor_msgs.msg import Image 8 | from cv_bridge import CvBridge, CvBridgeError 9 | import argparse 10 | 11 | class CameraFocus: 12 | def __init__(self, topic): 13 | self.topic = topic 14 | self.windowNameOrig = "Camera: {0}".format(self.topic) 15 | cv2.namedWindow(self.windowNameOrig, 2) 16 | self.bridge = CvBridge() 17 | self.image_sub = rospy.Subscriber(self.topic, Image, self.callback) 18 | 19 | def callback(self, msg): 20 | #convert image to opencv 21 | try: 22 | cv_image = self.bridge.imgmsg_to_cv2(msg) 23 | np_image= np.array(cv_image) 24 | except CvBridgeError, e: 25 | print "Could not convert ros message to opencv image: ", e 26 | return 27 | 28 | #calculate the fft magnitude 29 | img_float32 = 
np.float32(np_image) 30 | dft = cv2.dft(img_float32, flags = cv2.DFT_COMPLEX_OUTPUT) 31 | dft_shift = np.fft.fftshift(dft) 32 | magnitude_spectrum = cv2.magnitude(dft_shift[:,:,0],dft_shift[:,:,1]) 33 | 34 | #normalize 35 | magnitude_spectrum_normalized = magnitude_spectrum / np.sum(magnitude_spectrum) 36 | 37 | #frequency domain entropy (-> Entropy Based Measure of Camera Focus. Matej Kristan, Franjo Pernu. University of Ljubljana. Slovenia) 38 | fde = np.sum( magnitude_spectrum_normalized * np.log(magnitude_spectrum_normalized) ) 39 | 40 | y = 20; x = 20 41 | text = "fde: {0} (minimize this for focus)".format(np.sum(fde)) 42 | np_image = cv2.cvtColor(np_image, cv2.COLOR_GRAY2BGR) 43 | cv2.putText(np_image, text, (x,y), cv2.FONT_HERSHEY_SIMPLEX, fontScale=0.5, color=(0, 0, 255), thickness=2) 44 | cv2.imshow(self.windowNameOrig, np_image) 45 | cv2.waitKey(10) 46 | 47 | if __name__ == "__main__": 48 | parser = argparse.ArgumentParser(description='Validate the intrinsics of a camera.') 49 | parser.add_argument('--topic', nargs='+', dest='topics', help='camera topic', required=True) 50 | parsed = parser.parse_args() 51 | 52 | for topic in parsed.topics: 53 | camval = CameraFocus(topic) 54 | 55 | rospy.init_node('kalibr_validator', anonymous=True) 56 | try: 57 | rospy.spin() 58 | except KeyboardInterrupt: 59 | print "Shutting down" 60 | 61 | cv.DestroyAllWindows() 62 | -------------------------------------------------------------------------------- /scripts/publish_calib_file.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import rospy 3 | import glob 4 | import cv2 5 | import sys 6 | 7 | from sensor_msgs.msg import Image 8 | from cv_bridge import CvBridge, CvBridgeError 9 | 10 | def main(args): 11 | rospy.init_node("publish_calib_file", args) 12 | 13 | files = glob.glob("left-*.png"); 14 | files.sort() 15 | 16 | print("All {} files".format(len(files))) 17 | 18 | image_pub = rospy.Publisher("image",Image, queue_size=10) 19 | bridge = CvBridge(); 20 | 21 | 22 | for f in files: 23 | if rospy.is_shutdown(): 24 | break 25 | raw_input("Publish {}?".format(f)) 26 | img = cv2.imread(f, 0) 27 | image_pub.publish(bridge.cv2_to_imgmsg(img, "mono8")) 28 | 29 | if __name__ == "__main__": 30 | main(sys.argv) 31 | -------------------------------------------------------------------------------- /scripts/tarfile_calibration.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # Software License Agreement (BSD License) 4 | # 5 | # Copyright (c) 2009, Willow Garage, Inc. 6 | # All rights reserved. 7 | # 8 | # Redistribution and use in source and binary forms, with or without 9 | # modification, are permitted provided that the following conditions 10 | # are met: 11 | # 12 | # * Redistributions of source code must retain the above copyright 13 | # notice, this list of conditions and the following disclaimer. 14 | # * Redistributions in binary form must reproduce the above 15 | # copyright notice, this list of conditions and the following 16 | # disclaimer in the documentation and/or other materials provided 17 | # with the distribution. 18 | # * Neither the name of the Willow Garage nor the names of its 19 | # contributors may be used to endorse or promote products derived 20 | # from this software without specific prior written permission. 
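`kalibr_camera_focus.py` above scores focus with the frequency-domain entropy of the FFT magnitude spectrum (the sum of p·log p over the normalized spectrum), which the operator minimizes while turning the focus ring. A standalone restatement assuming a single-channel grayscale input; the helper name and the small epsilon guarding log(0) are additions, not part of the original script:

```python
import cv2
import numpy as np


def frequency_domain_entropy(gray):
    """Focus score used by the script above; it treats lower values as better focused."""
    img = np.float32(gray)
    dft = cv2.dft(img, flags=cv2.DFT_COMPLEX_OUTPUT)
    dft_shift = np.fft.fftshift(dft)
    magnitude = cv2.magnitude(dft_shift[:, :, 0], dft_shift[:, :, 1])
    p = magnitude / np.sum(magnitude)
    # The original computes sum(p * log(p)); the epsilon only avoids log(0).
    return float(np.sum(p * np.log(p + 1e-12)))
```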
21 | # 22 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 23 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 24 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS 25 | # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE 26 | # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, 27 | # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, 28 | # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 | # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 30 | # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 31 | # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN 32 | # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 33 | # POSSIBILITY OF SUCH DAMAGE. 34 | 35 | import os 36 | import sys 37 | import numpy 38 | 39 | import cv2 40 | import cv_bridge 41 | import tarfile 42 | 43 | from camera_calibration.calibrator import MonoCalibrator, StereoCalibrator, CalibrationException, ChessboardInfo 44 | 45 | import rospy 46 | import sensor_msgs.srv 47 | 48 | def waitkey(): 49 | k = cv2.waitKey(6) 50 | return k 51 | 52 | def display(win_name, img): 53 | cv2.namedWindow(win_name, cv2.WINDOW_NORMAL) 54 | cv2.imshow( win_name, numpy.asarray( img[:,:] )) 55 | k=-1 56 | while k ==-1: 57 | k=waitkey() 58 | cv2.destroyWindow(win_name) 59 | if k in [27, ord('q')]: 60 | rospy.signal_shutdown('Quit') 61 | 62 | 63 | def cal_from_tarfile(boards, tarname, mono = False, upload = False, calib_flags = 0, visualize = False, alpha=1.0): 64 | if mono: 65 | calibrator = MonoCalibrator(boards, calib_flags) 66 | else: 67 | calibrator = StereoCalibrator(boards, calib_flags) 68 | 69 | calibrator.do_tarfile_calibration(tarname) 70 | 71 | print(calibrator.ost()) 72 | 73 | if upload: 74 | info = calibrator.as_message() 75 | if mono: 76 | set_camera_info_service = rospy.ServiceProxy("%s/set_camera_info" % rospy.remap_name("camera"), sensor_msgs.srv.SetCameraInfo) 77 | 78 | response = set_camera_info_service(info) 79 | if not response.success: 80 | raise RuntimeError("connected to set_camera_info service, but failed setting camera_info") 81 | else: 82 | set_left_camera_info_service = rospy.ServiceProxy("%s/set_camera_info" % rospy.remap_name("left_camera"), sensor_msgs.srv.SetCameraInfo) 83 | set_right_camera_info_service = rospy.ServiceProxy("%s/set_camera_info" % rospy.remap_name("right_camera"), sensor_msgs.srv.SetCameraInfo) 84 | 85 | response1 = set_left_camera_info_service(info[0]) 86 | response2 = set_right_camera_info_service(info[1]) 87 | if not (response1.success and response2.success): 88 | raise RuntimeError("connected to set_camera_info service, but failed setting camera_info") 89 | 90 | if visualize: 91 | 92 | #Show rectified images 93 | calibrator.set_alpha(alpha) 94 | 95 | archive = tarfile.open(tarname, 'r') 96 | if mono: 97 | for f in archive.getnames(): 98 | if f.startswith('left') and (f.endswith('.pgm') or f.endswith('png')): 99 | filedata = archive.extractfile(f).read() 100 | file_bytes = numpy.asarray(bytearray(filedata), dtype=numpy.uint8) 101 | im=cv2.imdecode(file_bytes,cv2.CV_LOAD_IMAGE_COLOR) 102 | 103 | bridge = cv_bridge.CvBridge() 104 | try: 105 | msg=bridge.cv2_to_imgmsg(im, "bgr8") 106 | except cv_bridge.CvBridgeError as e: 107 | print(e) 108 | 109 | #handle msg returns the recitifed image with corner detection once camera is calibrated. 
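# (display() blocks until a key is pressed; ESC or 'q' additionally calls
#  rospy.signal_shutdown, so the visualization loop can be ended early.)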
110 | drawable=calibrator.handle_msg(msg) 111 | vis=numpy.asarray( drawable.scrib[:,:]) 112 | #Display. Name of window:f 113 | display(f, vis) 114 | else: 115 | limages = [ f for f in archive.getnames() if (f.startswith('left') and (f.endswith('pgm') or f.endswith('png'))) ] 116 | limages.sort() 117 | rimages = [ f for f in archive.getnames() if (f.startswith('right') and (f.endswith('pgm') or f.endswith('png'))) ] 118 | rimages.sort() 119 | 120 | if not len(limages) == len(rimages): 121 | raise RuntimeError("Left, right images don't match. %d left images, %d right" % (len(limages), len(rimages))) 122 | 123 | for i in range(len(limages)): 124 | l=limages[i] 125 | r=rimages[i] 126 | 127 | if l.startswith('left') and (l.endswith('.pgm') or l.endswith('png')) and r.startswith('right') and (r.endswith('.pgm') or r.endswith('png')): 128 | # LEFT IMAGE 129 | filedata = archive.extractfile(l).read() 130 | file_bytes = numpy.asarray(bytearray(filedata), dtype=numpy.uint8) 131 | im_left=cv2.imdecode(file_bytes,cv2.CV_LOAD_IMAGE_COLOR) 132 | 133 | bridge = cv_bridge.CvBridge() 134 | try: 135 | msg_left=bridge.cv2_to_imgmsg(im_left, "bgr8") 136 | except cv_bridge.CvBridgeError as e: 137 | print(e) 138 | 139 | #RIGHT IMAGE 140 | filedata = archive.extractfile(r).read() 141 | file_bytes = numpy.asarray(bytearray(filedata), dtype=numpy.uint8) 142 | im_right=cv2.imdecode(file_bytes,cv2.CV_LOAD_IMAGE_COLOR) 143 | try: 144 | msg_right=bridge.cv2_to_imgmsg(im_right, "bgr8") 145 | except cv_bridge.CvBridgeError as e: 146 | print(e) 147 | 148 | drawable=calibrator.handle_msg([ msg_left,msg_right] ) 149 | 150 | h, w = numpy.asarray(drawable.lscrib[:,:]).shape[:2] 151 | vis = numpy.zeros((h, w*2, 3), numpy.uint8) 152 | vis[:h, :w ,:] = numpy.asarray(drawable.lscrib[:,:]) 153 | vis[:h, w:w*2, :] = numpy.asarray(drawable.rscrib[:,:]) 154 | 155 | display(l+" "+r,vis) 156 | 157 | 158 | if __name__ == '__main__': 159 | from optparse import OptionParser 160 | parser = OptionParser("%prog TARFILE [ opts ]") 161 | parser.add_option("--mono", default=False, action="store_true", dest="mono", 162 | help="Monocular calibration only. Calibrates left images.") 163 | parser.add_option("-s", "--size", default=[], action="append", dest="size", 164 | help="specify chessboard size as NxM [default: 8x6]") 165 | parser.add_option("-q", "--square", default=[], action="append", dest="square", 166 | help="specify chessboard square size in meters [default: 0.108]") 167 | parser.add_option("--upload", default=False, action="store_true", dest="upload", 168 | help="Upload results to camera(s).") 169 | parser.add_option("--fix-principal-point", action="store_true", default=False, 170 | help="fix the principal point at the image center") 171 | parser.add_option("--fix-aspect-ratio", action="store_true", default=False, 172 | help="enforce focal lengths (fx, fy) are equal") 173 | parser.add_option("--zero-tangent-dist", action="store_true", default=False, 174 | help="set tangential distortion coefficients (p1, p2) to zero") 175 | parser.add_option("-k", "--k-coefficients", type="int", default=2, metavar="NUM_COEFFS", 176 | help="number of radial distortion coefficients to use (up to 6, default %default)") 177 | parser.add_option("--visualize", action="store_true", default=False, 178 | help="visualize rectified images after calibration") 179 | parser.add_option("-a", "--alpha", type="float", default=1.0, metavar="ALPHA", 180 | help="zoom for visualization of rectifies images. 
Ranges from 0 (zoomed in, all pixels in calibrated image are valid) to 1 (zoomed out, all pixels in original image are in calibrated image). default %default)") 181 | 182 | options, args = parser.parse_args() 183 | 184 | if len(options.size) != len(options.square): 185 | parser.error("Number of size and square inputs must be the same!") 186 | 187 | if not options.square: 188 | options.square.append("0.108") 189 | options.size.append("8x6") 190 | 191 | boards = [] 192 | for (sz, sq) in zip(options.size, options.square): 193 | info = ChessboardInfo() 194 | info.dim = float(sq) 195 | size = tuple([int(c) for c in sz.split('x')]) 196 | info.n_cols = size[0] 197 | info.n_rows = size[1] 198 | 199 | boards.append(info) 200 | 201 | if not boards: 202 | parser.error("Must supply at least one chessboard") 203 | 204 | if not args: 205 | parser.error("Must give tarfile name") 206 | if not os.path.exists(args[0]): 207 | parser.error("Tarfile %s does not exist. Please select valid tarfile" % args[0]) 208 | 209 | tarname = args[0] 210 | 211 | num_ks = options.k_coefficients 212 | 213 | calib_flags = 0 214 | if options.fix_principal_point: 215 | calib_flags |= cv2.CALIB_FIX_PRINCIPAL_POINT 216 | if options.fix_aspect_ratio: 217 | calib_flags |= cv2.CALIB_FIX_ASPECT_RATIO 218 | if options.zero_tangent_dist: 219 | calib_flags |= cv2.CALIB_ZERO_TANGENT_DIST 220 | if (num_ks > 3): 221 | calib_flags |= cv2.CALIB_RATIONAL_MODEL 222 | if (num_ks < 6): 223 | calib_flags |= cv2.CALIB_FIX_K6 224 | if (num_ks < 5): 225 | calib_flags |= cv2.CALIB_FIX_K5 226 | if (num_ks < 4): 227 | calib_flags |= cv2.CALIB_FIX_K4 228 | if (num_ks < 3): 229 | calib_flags |= cv2.CALIB_FIX_K3 230 | if (num_ks < 2): 231 | calib_flags |= cv2.CALIB_FIX_K2 232 | if (num_ks < 1): 233 | calib_flags |= cv2.CALIB_FIX_K1 234 | 235 | cal_from_tarfile(boards, tarname, options.mono, options.upload, calib_flags, options.visualize, options.alpha) 236 | --------------------------------------------------------------------------------
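The `-k/--k-coefficients` handling appears twice above (in the calibrator's `main()` and again in `tarfile_calibration.py`) as the same chain of `CALIB_FIX_K*` assignments. A compact restatement as a reusable helper, offered only as a sketch; the `radial_flags` name is ours, not part of either script:

```python
import cv2


def radial_flags(num_ks):
    """Fix every radial distortion coefficient beyond the first num_ks."""
    flags = 0
    if num_ks > 3:
        flags |= cv2.CALIB_RATIONAL_MODEL  # k4..k6 are only estimated in the rational model
    fix = [cv2.CALIB_FIX_K1, cv2.CALIB_FIX_K2, cv2.CALIB_FIX_K3,
           cv2.CALIB_FIX_K4, cv2.CALIB_FIX_K5, cv2.CALIB_FIX_K6]
    for i, flag in enumerate(fix, start=1):
        if num_ks < i:
            flags |= flag
    return flags
```

`radial_flags(2)` reproduces the default used above: k3 through k6 fixed and the rational model left off.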