├── LICENSE-2.0.txt ├── README.md ├── asymmetric-pattern.png ├── blender-import1.jpg ├── blender-import2.jpg ├── c920-asymmetric ├── my_photo-10.jpg ├── my_photo-11.jpg ├── my_photo-3.jpg ├── my_photo-4.jpg ├── my_photo-5.jpg ├── my_photo-6.jpg ├── my_photo-7.jpg └── my_photo-9.jpg ├── c920-symmetric ├── my_photo-3.jpg ├── my_photo-4.jpg ├── my_photo-5.jpg ├── my_photo-6.jpg ├── my_photo-7.jpg ├── my_photo-8.jpg └── my_photo-9.jpg ├── calibrate.py ├── camera-calibration-checker-board_9x7.pdf ├── canon-efs-24mm-crop1.6 ├── IMG_0038.JPG ├── IMG_0039.JPG ├── IMG_0040.JPG ├── IMG_0041.JPG ├── IMG_0042.JPG ├── IMG_0043.JPG ├── IMG_0044.JPG ├── calib.json └── go.sh ├── import_calibration_data.py ├── pattern.png ├── ref.py └── sample ├── frame-0.png ├── frame-1.png ├── frame-10.png ├── frame-11.png ├── frame-12.png ├── frame-2.png ├── frame-3.png ├── frame-4.png ├── frame-5.png ├── frame-6.png ├── frame-7.png ├── frame-8.png ├── frame-9.png └── readme.txt /LICENSE-2.0.txt: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. 
For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 
48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. 
Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 
123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. 
In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | APPENDIX: How to apply the Apache License to your work. 180 | 181 | To apply the Apache License to your work, attach the following 182 | boilerplate notice, with the fields enclosed by brackets "[]" 183 | replaced with your own identifying information. (Don't include 184 | the brackets!) The text should be enclosed in the appropriate 185 | comment syntax for the file format. 
We also recommend that a 186 | file or class name and description of purpose be included on the 187 | same "printed page" as the copyright notice for easier 188 | identification within third-party archives. 189 | 190 | Copyright 2021 Paul Melis 191 | 192 | Licensed under the Apache License, Version 2.0 (the "License"); 193 | you may not use this file except in compliance with the License. 194 | You may obtain a copy of the License at 195 | 196 | http://www.apache.org/licenses/LICENSE-2.0 197 | 198 | Unless required by applicable law or agreed to in writing, software 199 | distributed under the License is distributed on an "AS IS" BASIS, 200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 201 | See the License for the specific language governing permissions and 202 | limitations under the License. 203 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # OpenCV camera calibration scripts 2 | 3 | Whenever I end up using OpenCV for some task that involves computer vision, 4 | the necessary camera calibration step is always a bit messy: I have to 5 | gather the necessary Python code to run the calibration, and I am never really 6 | sure the resulting values are correct. Here, I have gathered some useful scripts for 7 | that task, both for myself and for others to use. 8 | 9 | Two scripts are included: 10 | 11 | - `calibrate.py`: takes a set of input images and parameters and uses OpenCV to 12 | compute the camera calibration. The computed parameters can be saved to JSON, 13 | and it can write debug images to inspect the chessboard detection. 14 | - `import_calibration_data.py`: reads a JSON calibration file and the set of images, 15 | and constructs a Blender scene that creates a camera per image with the correct 16 | transform and background image.
17 | 18 | Limitations: 19 | 20 | - Only the checkerboard calibration pattern is supported; two versions are included, a 21 | symmetric and an asymmetric one. The asymmetric version has the advantage 22 | that the chessboard orientation is detected consistently (all computed 23 | camera positions end up on the same side of the chessboard). 24 | - Tested with OpenCV 4.5.3 under Python 3.9.6 and OpenCV 4.8.0 under Python 3.11.5. 25 | It will probably work with other versions, but it might not. 26 | 27 | Dependencies: 28 | 29 | - Python 3.x 30 | - OpenCV with the Python module installed 31 | - NumPy 32 | - Blender 2.9x (for `import_calibration_data.py` only) 33 | 34 | ## Example usage (calibration) 35 | 36 | Using the included test set in `canon-efs-24mm-crop1.6`: 37 | 38 | ``` 39 | $ cd canon-efs-24mm-crop1.6 40 | # 9x6 inner corners in the chessboard, 2.44 cm between corners, 22.3x14.9 mm sensor size on the camera 41 | $ ../calibrate.py -c 9x6 -s 0.0244 -j calib.json -S 22.3x14.9 -d debug *.JPG 42 | Image resolution 5184x3456 43 | Processing 7 images using 4 threads .......
done 44 | 45 | Found chessboards in 7 out of 7 images 46 | 47 | Calibrating camera using 7 images 48 | RMS: 1.18165472725028 49 | 50 | Camera matrix: 51 | [[5.86740909e+03 0.00000000e+00 2.59146647e+03] 52 | [0.00000000e+00 5.87687664e+03 1.72775387e+03] 53 | [0.00000000e+00 0.00000000e+00 1.00000000e+00]] 54 | 55 | Distortion coefficients: 56 | [-0.12291868 0.17975773 -0.00165213 -0.00162771 -0.94774414] 57 | 58 | Computing reprojection error: 59 | [IMG_0038.JPG] 0.179007 60 | [IMG_0039.JPG] 0.173196 61 | [IMG_0040.JPG] 0.173777 62 | [IMG_0041.JPG] 0.159977 63 | [IMG_0042.JPG] 0.152262 64 | [IMG_0043.JPG] 0.128836 65 | [IMG_0044.JPG] 0.152934 66 | 67 | Average reprojection error: 0.159998 +/- 0.016064 68 | 69 | FOV: 47.668066 32.770231 degrees 70 | Focal length: 25.239819 mm 71 | Principal point: 11.147705 7.448939 mm 72 | Aspect ratio: 1.001614 73 | 74 | [IMG_0038.JPG] rotation (-0.630170, -0.100325, -0.255343), translation (-0.114111, -0.036871, 0.408347) 75 | [IMG_0039.JPG] rotation (-0.658240, -0.084628, -0.160000), translation (-0.109105, -0.040644, 0.426521) 76 | [IMG_0040.JPG] rotation (-0.677469, 0.150810, 0.345571), translation (-0.082676, -0.082905, 0.500609) 77 | [IMG_0041.JPG] rotation (-0.731692, 0.299701, 0.657561), translation (-0.047673, -0.093889, 0.576458) 78 | [IMG_0042.JPG] rotation (-0.602255, -0.013499, 0.000046), translation (-0.098538, -0.058739, 0.463666) 79 | [IMG_0043.JPG] rotation (-0.039782, -0.004438, 0.005338), translation (-0.098927, -0.057666, 0.430095) 80 | [IMG_0044.JPG] rotation (-0.818372, -0.007559, 0.001030), translation (-0.094439, -0.047858, 0.528516) 81 | 82 | Writing undistorted images to debug directory: 83 | IMG_0038.JPG 84 | IMG_0039.JPG 85 | IMG_0040.JPG 86 | IMG_0041.JPG 87 | IMG_0042.JPG 88 | IMG_0043.JPG 89 | IMG_0044.JPG 90 | ``` 91 | 92 | As can be seen the chessboard pattern is detected in all 7 input images. The 93 | reconstructed calibration is quite good, at an average reprojection error of 0.16 pixels. 
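The FOV and focal length reported above follow directly from the camera matrix and the sensor size. A minimal sketch of the arithmetic, using only the standard library, with the numbers copied from the run above (OpenCV's `calibrationMatrixValues` also accounts for the principal point, which is nearly centered here, so the simple pinhole formulas reproduce the output to the shown precision):

```python
import math

# Values taken from the calibration run above
fx = 5867.40909         # camera_matrix[0][0], in pixels
image_width = 5184      # pixels
sensor_width_mm = 22.3  # horizontal sensor size, as passed via -S

# Pinhole model: horizontal FOV from the focal length in pixels
# (ignoring the principal point, which is nearly centered here)
fov_x = math.degrees(2.0 * math.atan(image_width / (2.0 * fx)))

# The pixels-to-millimeters scale is sensor_width / image_width
focal_length_mm = fx * sensor_width_mm / image_width

print('FOV x: %.2f degrees' % fov_x)              # ~47.67
print('Focal length: %.2f mm' % focal_length_mm)  # ~25.24
```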
94 | 95 | By setting the physical sensor size of the 60D camera used (the `-S` option), the 96 | FOV and focal length can be computed in physical units (instead of only in pixels, 97 | as given by the camera matrix). The derived focal length is 25.2 mm, which is pretty close 98 | to the actual 24 mm of the lens. 99 | 100 | The `calib.json` file will contain all the computed calibration parameters, plus 101 | the transformation computed for each image (`chessboard_orientations`). 102 | Most of the input values are also stored in the JSON file, for completeness. 103 | 104 | Due to the `-d debug` option, the `debug` directory will contain three images for 105 | each input image in which the chessboard was detected: a colored line overlay showing 106 | the detected chessboard, plus the undistorted image based on the computed camera parameters 107 | (in both uncropped and cropped versions). 108 | 109 | ## Example usage (Blender scene) 110 | 111 | ``` 112 | $ cd canon-efs-24mm-crop1.6 113 | $ blender -P ../import_calibration_data.py -- calib.json 114 | ``` 115 | 116 | This will read the calibration JSON file and construct a scene with: 117 | 118 | - The chessboard corners as a set of vertices (the `chessboard corners` mesh) 119 | - A quad textured as a chessboard (the `chessboard` mesh), which is hidden by default 120 | - A camera for each image in which the chessboard was detected, transformed 121 | to match the detected orientation and position. The camera parameters will 122 | have been set to match the detected values from the calibration JSON file. 123 | 124 | Here you can see the scene for the included test set: 125 | 126 | ![](blender-import1.jpg) 127 | 128 | The chessboard corner vertices are shown in orange and one corner is located 129 | at the origin: this is the way the corner coordinates are specified in the 130 | `calibrate.py` script.
Note the axes orientation of the view: the chessboard corners 131 | lie in the XY plane, whereas the cameras are all located on the -Z side. This is 132 | because the raw OpenCV values are used; they are not transformed in any way. 133 | 134 | ![](blender-import2.jpg) 135 | 136 | Here the view of a single camera/image is shown. Note again the chessboard corner 137 | vertices in dark orange, which match very well with the background image of the 138 | selected camera. The camera parameters from the calibration file have been applied 139 | to the Blender camera (horizontal sensor size 22.3 mm, horizontal FOV 47.7 degrees). 140 | 141 | Again, note the axes orientation. By default, an untransformed OpenCV camera 142 | has +X pointing right, +Y pointing *down* and looks down the +Z axis. 143 | As the top-left corner of the chessboard is in view and is located at the origin, 144 | with the other corners along the +X and +Y directions, the camera calibration 145 | solution puts the camera on the -Z side.
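The stored orientations map chessboard space into camera space, so placing a camera in the scene (as `import_calibration_data.py` does) means inverting that rigid transform. A minimal sketch with NumPy — the `camera_pose` helper is illustrative, not part of the repository's scripts; the key names `rotation_matrix` and `translation` match the `calib.json` layout:

```python
import numpy as np

def camera_pose(rotation_matrix, translation):
    """Invert the chessboard-to-camera rigid transform stored in
    calib.json under 'chessboard_orientations': the camera orientation
    in chessboard space is R^T, and the camera position is -R^T t
    (the point that R, t map to the camera-space origin)."""
    R = np.asarray(rotation_matrix, dtype=float)
    t = np.asarray(translation, dtype=float)
    return R.T, -R.T @ t

# An untransformed camera looking at a chessboard 0.5 m in front of it
# (identity rotation, translation (0, 0, 0.5)) sits at z = -0.5 in
# chessboard space, i.e. on the -Z side of the board, as described above.
orientation, position = camera_pose(np.eye(3), [0.0, 0.0, 0.5])
```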
146 | 147 | -------------------------------------------------------------------------------- /asymmetric-pattern.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/asymmetric-pattern.png -------------------------------------------------------------------------------- /blender-import1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/blender-import1.jpg -------------------------------------------------------------------------------- /blender-import2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/blender-import2.jpg -------------------------------------------------------------------------------- /c920-asymmetric/my_photo-10.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-asymmetric/my_photo-10.jpg -------------------------------------------------------------------------------- /c920-asymmetric/my_photo-11.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-asymmetric/my_photo-11.jpg -------------------------------------------------------------------------------- /c920-asymmetric/my_photo-3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-asymmetric/my_photo-3.jpg 
-------------------------------------------------------------------------------- /c920-asymmetric/my_photo-4.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-asymmetric/my_photo-4.jpg -------------------------------------------------------------------------------- /c920-asymmetric/my_photo-5.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-asymmetric/my_photo-5.jpg -------------------------------------------------------------------------------- /c920-asymmetric/my_photo-6.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-asymmetric/my_photo-6.jpg -------------------------------------------------------------------------------- /c920-asymmetric/my_photo-7.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-asymmetric/my_photo-7.jpg -------------------------------------------------------------------------------- /c920-asymmetric/my_photo-9.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-asymmetric/my_photo-9.jpg -------------------------------------------------------------------------------- /c920-symmetric/my_photo-3.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-symmetric/my_photo-3.jpg -------------------------------------------------------------------------------- /c920-symmetric/my_photo-4.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-symmetric/my_photo-4.jpg -------------------------------------------------------------------------------- /c920-symmetric/my_photo-5.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-symmetric/my_photo-5.jpg -------------------------------------------------------------------------------- /c920-symmetric/my_photo-6.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-symmetric/my_photo-6.jpg -------------------------------------------------------------------------------- /c920-symmetric/my_photo-7.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-symmetric/my_photo-7.jpg -------------------------------------------------------------------------------- /c920-symmetric/my_photo-8.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-symmetric/my_photo-8.jpg -------------------------------------------------------------------------------- /c920-symmetric/my_photo-9.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/c920-symmetric/my_photo-9.jpg -------------------------------------------------------------------------------- /calibrate.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import json, os, sys 3 | import numpy as np 4 | import cv2 5 | 6 | def splitfn(fname): 7 | path, fname = os.path.split(fname) 8 | name, ext = os.path.splitext(fname) 9 | return path, name, ext 10 | 11 | def main(image_files, fisheye, pattern_size, square_size, threads, json_file=None, debug_dir=None): 12 | """ 13 | image_files: list of image file names 14 | fisheye: set to True to use fisheye camera model 15 | pattern_size: the number of *inner* points! So for a grid of 10x7 *squares* there's 9x6 inner points 16 | square_size: the real-world dimension of a chessboard square, in meters 17 | threads: number of threads to use 18 | json_file: JSON file to write calibration data to 19 | debug_dir: if set, the path to which debug images with the detected chessboards are written 20 | """ 21 | 22 | # JSON data 23 | j = {} 24 | 25 | # Real-world 3D corner "positions" 26 | pattern_points = np.zeros((np.prod(pattern_size), 3), np.float32) 27 | pattern_points[:, :2] = np.indices(pattern_size).T.reshape(-1, 2) 28 | # https://github.com/opencv/opencv/issues/9150#issuecomment-674664643 29 | pattern_points = np.expand_dims(np.asarray(pattern_points), -2) 30 | pattern_points *= square_size 31 | 32 | j['chessboard_points'] = pattern_points.tolist() 33 | j['chessboard_inner_corners'] = pattern_size 34 | j['chessboard_spacing_m'] = square_size 35 | 36 | # Read first image to get resolution 37 | # TODO: use imquery call to retrieve results 38 | img = cv2.imread(image_files[0], cv2.IMREAD_GRAYSCALE) 39 | if img is None: 40 | print('Failed to read %s to get resolution!'
% image_files[0]) 41 | return 42 | 43 | h, w = img.shape[:2] 44 | print('Image resolution %dx%d' % (w, h)) 45 | 46 | j['image_resolution'] = (w, h) 47 | 48 | # Process all images to find chessboards 49 | 50 | def process_image(fname): 51 | sys.stdout.write('.') 52 | sys.stdout.flush() 53 | 54 | img = cv2.imread(fname, 0) 55 | if img is None: 56 | return (fname, 'Failed to load') 57 | 58 | if w != img.shape[1] or h != img.shape[0]: 59 | return (fname, "Size %dx%d doesn't match" % (img.shape[1], img.shape[0])) 60 | 61 | found, corners = cv2.findChessboardCorners(img, pattern_size) 62 | if found: 63 | # Refine corner positions 64 | term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.1) 65 | cv2.cornerSubPix(img, corners, (5, 5), (-1, -1), term) 66 | 67 | if debug_dir: 68 | # Write image with detected chessboard overlay 69 | vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) 70 | cv2.drawChessboardCorners(vis, pattern_size, corners, found) 71 | _, name, _ = splitfn(fname) 72 | outfile = os.path.join(debug_dir, name + '_chessboard.png') 73 | cv2.imwrite(outfile, vis) 74 | 75 | if not found: 76 | return (fname, 'Chessboard not found') 77 | 78 | return (fname, corners) 79 | 80 | if threads <= 1: 81 | sys.stdout.write('Processing %d images' % len(image_files)) 82 | results = [process_image(fname) for fname in image_files] 83 | else: 84 | sys.stdout.write('Processing %d images using %d threads ' % (len(image_files), threads)) 85 | from multiprocessing.dummy import Pool as ThreadPool 86 | pool = ThreadPool(threads) 87 | results = pool.map(process_image, image_files) 88 | 89 | sys.stdout.write(' done\n') 90 | sys.stdout.flush() 91 | print() 92 | 93 | # Prepare calibration input 94 | 95 | obj_points = [] 96 | img_points = [] 97 | cb_index = 0 98 | cb_to_image_index = {} 99 | 100 | # Sort by file name 101 | results.sort(key = lambda e: e[0]) 102 | for img_index, (fname, corners) in enumerate(results): 103 | if isinstance(corners, str): 104 | print('[%s] Ignoring image: %s' 
% (fname, corners)) 105 | continue 106 | img_points.append(corners) 107 | obj_points.append(pattern_points) 108 | cb_to_image_index[cb_index] = img_index 109 | cb_index += 1 110 | 111 | num_chessboards = cb_index 112 | 113 | print('Found chessboards in %d out of %d images' % (num_chessboards, len(image_files))) 114 | print() 115 | 116 | if num_chessboards == 0: 117 | print('No chessboards to use! Was the correct chessboard size set using the -c option?') 118 | sys.exit(-1) 119 | 120 | # Calculate camera matrix, distortion, etc 121 | 122 | calibrate_func = cv2.fisheye.calibrate if fisheye else cv2.calibrateCamera 123 | 124 | print('Calibrating camera using %d images' % len(img_points)) 125 | rms, camera_matrix, dist_coefs, rvecs, tvecs = \ 126 | calibrate_func(obj_points, img_points, (w, h), None, None) #, None, None, None) 127 | 128 | print("RMS:", rms) 129 | print() 130 | print("Camera matrix:\n", camera_matrix) 131 | print() 132 | print("Distortion coefficients:\n", dist_coefs.ravel()) 133 | print() 134 | 135 | # Compute reprojection error 136 | # After https://docs.opencv.org/4.5.2/dc/dbb/tutorial_py_calibration.html 137 | print('Computing reprojection error:') 138 | 139 | project_func = cv2.fisheye.projectPoints if fisheye else cv2.projectPoints 140 | 141 | reprod_error = {} 142 | errors = [] 143 | for cb_index in range(num_chessboards): 144 | img_points2, _ = project_func(obj_points[cb_index], rvecs[cb_index], tvecs[cb_index], camera_matrix, dist_coefs) 145 | error = cv2.norm(img_points[cb_index], img_points2, cv2.NORM_L2) / len(img_points2) 146 | img_index = cb_to_image_index[cb_index] 147 | img_file = image_files[img_index] 148 | print('[%s] %.6f' % (img_file, error)) 149 | reprod_error[img_file] = error 150 | errors.append(error) 151 | reprojection_error_avg = np.average(errors) 152 | reprojection_error_stddev = np.std(errors) 153 | print() 154 | print("Average reprojection error: %.6f +/- %.6f" % (reprojection_error_avg, reprojection_error_stddev)) 155 |
156 | j['camera_matrix'] = camera_matrix.tolist() 157 | j['distortion_coefficients'] = dist_coefs.ravel().tolist() 158 | j['rms'] = rms 159 | j['reprojection_error'] = {'average': reprojection_error_avg, 'stddev': reprojection_error_stddev, 'image': reprod_error } 160 | 161 | if sensor_size is not None: 162 | 163 | fovx, fovy, focal_length, principal_point, aspect_ratio = \ 164 | cv2.calibrationMatrixValues(camera_matrix, (w,h), sensor_size[0], sensor_size[1]) 165 | 166 | print() 167 | print('FOV: %.6f %.6f degrees' % (fovx, fovy)) 168 | print('Focal length: %.6f mm' % focal_length) 169 | print('Principal point: %.6f %.6f mm' % principal_point) 170 | print('Aspect ratio: %.6f' % aspect_ratio) 171 | 172 | j['sensor_size_mm'] = sensor_size 173 | j['fov_degrees'] = (fovx, fovy) 174 | j['focal_length_mm'] = focal_length 175 | j['principal_point_mm'] = principal_point 176 | j['aspect_ratio'] = aspect_ratio 177 | 178 | print() 179 | chessboard_orientations = {} 180 | for cb_index in range(num_chessboards): 181 | img_index = cb_to_image_index[cb_index] 182 | r = rvecs[cb_index] 183 | t = tvecs[cb_index] 184 | print('[%s] rotation (%.6f, %.6f, %.6f), translation (%.6f, %.6f, %.6f)' % \ 185 | (image_files[img_index], r[0][0], r[1][0], r[2][0], t[0][0], t[1][0], t[2][0])) 186 | 187 | rotation_matrix, _ = cv2.Rodrigues(r) 188 | chessboard_orientations[image_files[img_index]] = { 189 | #'rotation_vector': (r[0][0], r[1][0], r[2][0]), 190 | 'rotation_matrix': rotation_matrix.tolist(), 191 | 'translation': (t[0][0], t[1][0], t[2][0]) 192 | } 193 | 194 | # OpenCV untransformed camera orientation is X to the right, Y down, 195 | # Z along the view direction (i.e. right-handed). This aligns X,Y axes 196 | # of pixels in the image plane with the X,Y axes in camera space. 197 | # The orientations describe the transform needed to bring a detected 198 | # chessboard from its object space into camera space. 
199 | j['chessboard_orientations'] = chessboard_orientations 200 | 201 | # Write to JSON 202 | 203 | if json_file is not None: 204 | json.dump(j, open(json_file, 'wt')) 205 | 206 | # Undistort the image with the calibration 207 | if debug_dir is not None: 208 | print('') 209 | print('Writing undistorted images to %s directory:' % debug_dir) 210 | 211 | for fname in image_files: 212 | _, name, _ = splitfn(fname) 213 | img_found = os.path.join(debug_dir, name + '_chessboard.png') 214 | outfile1 = os.path.join(debug_dir, name + '_undistorted.png') 215 | outfile2 = os.path.join(debug_dir, name + '_undistorted_cropped.png') 216 | 217 | img = cv2.imread(img_found) 218 | if img is None: 219 | print("Can't find chessboard image!") 220 | continue 221 | 222 | h, w = img.shape[:2] 223 | newcameramtx, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coefs, (w, h), 1, (w, h)) 224 | 225 | dst = cv2.undistort(img, camera_matrix, dist_coefs, None, newcameramtx) 226 | 227 | # save uncropped 228 | cv2.imwrite(outfile1, dst) 229 | 230 | # crop and save the image 231 | x, y, w, h = roi 232 | dst = dst[y:y+h, x:x+w] 233 | cv2.imwrite(outfile2, dst) 234 | 235 | print(fname) 236 | 237 | cv2.destroyAllWindows() 238 | 239 | if __name__ == '__main__': 240 | 241 | import sys, getopt 242 | from glob import glob 243 | 244 | corners = (9, 6) 245 | debug_dir = None 246 | json_file = None 247 | sensor_size = None 248 | square_size = 0.034 249 | threads = 4 250 | fisheye = False 251 | 252 | # XXX use defaults 253 | def usage(): 254 | print(''' 255 | OpenCV Camera calibration for distorted images with chessboard samples. 256 | Reads distorted images, calculates the calibration and writes undistorted images. 257 | 258 | usage: 259 | calibrate.py [options] ... 
260 | 261 | default values: 262 | -c x Number of *inner* corners of the chessboard pattern (default: 9x6) 263 | -f Fit fisheye camera model (default: regular perspective model) 264 | -s Square size in m (default: 0.034) 265 | -t Number of threads to use (default: 4) 266 | -j Write calibration data to JSON file 267 | -S x Physical sensor size in mm (optional) 268 | -d Write debug images to dir 269 | ''') 270 | 271 | try: 272 | options, args = getopt.getopt(sys.argv[1:], 'c:d:fj:S:s:t:') 273 | except getopt.GetoptError as err: 274 | # print help information and exit: 275 | print(err) # will print something like "option -a not recognized" 276 | usage() 277 | sys.exit(2) 278 | 279 | for o, v in options: 280 | if o == '-c': 281 | corners = tuple(map(int, v.split('x'))) 282 | elif o == '-d': 283 | debug_dir = v 284 | # Guard against option being interpreted as directory name 285 | assert debug_dir[0] != '-' 286 | elif o == '-f': 287 | fisheye = True 288 | elif o == '-j': 289 | json_file = v 290 | elif o == '-S': 291 | sensor_size = tuple(map(float, v.split('x'))) 292 | elif o == '-s': 293 | square_size = float(v) 294 | elif o == '-t': 295 | threads = int(v) 296 | 297 | #print(options) 298 | #print(args) 299 | 300 | if debug_dir and not os.path.isdir(debug_dir): 301 | os.mkdir(debug_dir) 302 | 303 | if len(args) == 0: 304 | print('No images provided!') 305 | usage() 306 | sys.exit(-1) 307 | 308 | image_files = args 309 | 310 | main(image_files, fisheye, corners, square_size, threads, json_file, debug_dir) 311 | -------------------------------------------------------------------------------- /camera-calibration-checker-board_9x7.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/camera-calibration-checker-board_9x7.pdf -------------------------------------------------------------------------------- 
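As an aside on the -S option above: when a physical sensor size is given, calibrate.py derives the FOV and focal length through cv2.calibrationMatrixValues. The underlying relations are simple enough to sketch directly; the resolution, sensor width and fx below are synthetic example values, not the output of a real calibration run:

```python
from math import atan, degrees

# Synthetic example values (not taken from an actual calibration run)
w = 5184                 # image width in pixels
sensor_width_mm = 22.3   # physical sensor width, as passed via -S
fx = 5867.4              # focal length in pixels, camera_matrix[0][0]

# Horizontal FOV follows from the pinhole model: half the image width
# subtends an angle of atan(w / (2 * fx)) at the optical centre.
fov_x = degrees(2 * atan(w / (2 * fx)))

# Converting pixels to millimetres via the sensor width gives the
# physical focal length.
focal_length_mm = fx * sensor_width_mm / w
```

For a 22.3 mm wide APS-C sensor this yields a horizontal FOV of roughly 47.7 degrees and a focal length of about 25.2 mm, consistent with the values stored in canon-efs-24mm-crop1.6/calib.json.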
/canon-efs-24mm-crop1.6/IMG_0038.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/canon-efs-24mm-crop1.6/IMG_0038.JPG -------------------------------------------------------------------------------- /canon-efs-24mm-crop1.6/IMG_0039.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/canon-efs-24mm-crop1.6/IMG_0039.JPG -------------------------------------------------------------------------------- /canon-efs-24mm-crop1.6/IMG_0040.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/canon-efs-24mm-crop1.6/IMG_0040.JPG -------------------------------------------------------------------------------- /canon-efs-24mm-crop1.6/IMG_0041.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/canon-efs-24mm-crop1.6/IMG_0041.JPG -------------------------------------------------------------------------------- /canon-efs-24mm-crop1.6/IMG_0042.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/canon-efs-24mm-crop1.6/IMG_0042.JPG -------------------------------------------------------------------------------- /canon-efs-24mm-crop1.6/IMG_0043.JPG: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/canon-efs-24mm-crop1.6/IMG_0043.JPG -------------------------------------------------------------------------------- /canon-efs-24mm-crop1.6/IMG_0044.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/canon-efs-24mm-crop1.6/IMG_0044.JPG -------------------------------------------------------------------------------- /canon-efs-24mm-crop1.6/calib.json: -------------------------------------------------------------------------------- 1 | {"chessboard_points": [[[0.0, 0.0, 0.0]], [[0.024399999529123306, 0.0, 0.0]], [[0.04879999905824661, 0.0, 0.0]], [[0.07320000231266022, 0.0, 0.0]], [[0.09759999811649323, 0.0, 0.0]], [[0.12199999392032623, 0.0, 0.0]], [[0.14640000462532043, 0.0, 0.0]], [[0.17080000042915344, 0.0, 0.0]], [[0.19519999623298645, 0.0, 0.0]], [[0.0, 0.024399999529123306, 0.0]], [[0.024399999529123306, 0.024399999529123306, 0.0]], [[0.04879999905824661, 0.024399999529123306, 0.0]], [[0.07320000231266022, 0.024399999529123306, 0.0]], [[0.09759999811649323, 0.024399999529123306, 0.0]], [[0.12199999392032623, 0.024399999529123306, 0.0]], [[0.14640000462532043, 0.024399999529123306, 0.0]], [[0.17080000042915344, 0.024399999529123306, 0.0]], [[0.19519999623298645, 0.024399999529123306, 0.0]], [[0.0, 0.04879999905824661, 0.0]], [[0.024399999529123306, 0.04879999905824661, 0.0]], [[0.04879999905824661, 0.04879999905824661, 0.0]], [[0.07320000231266022, 0.04879999905824661, 0.0]], [[0.09759999811649323, 0.04879999905824661, 0.0]], [[0.12199999392032623, 0.04879999905824661, 0.0]], [[0.14640000462532043, 0.04879999905824661, 0.0]], [[0.17080000042915344, 0.04879999905824661, 0.0]], [[0.19519999623298645, 0.04879999905824661, 0.0]], [[0.0, 0.07320000231266022, 0.0]], [[0.024399999529123306, 0.07320000231266022, 
0.0]], [[0.04879999905824661, 0.07320000231266022, 0.0]], [[0.07320000231266022, 0.07320000231266022, 0.0]], [[0.09759999811649323, 0.07320000231266022, 0.0]], [[0.12199999392032623, 0.07320000231266022, 0.0]], [[0.14640000462532043, 0.07320000231266022, 0.0]], [[0.17080000042915344, 0.07320000231266022, 0.0]], [[0.19519999623298645, 0.07320000231266022, 0.0]], [[0.0, 0.09759999811649323, 0.0]], [[0.024399999529123306, 0.09759999811649323, 0.0]], [[0.04879999905824661, 0.09759999811649323, 0.0]], [[0.07320000231266022, 0.09759999811649323, 0.0]], [[0.09759999811649323, 0.09759999811649323, 0.0]], [[0.12199999392032623, 0.09759999811649323, 0.0]], [[0.14640000462532043, 0.09759999811649323, 0.0]], [[0.17080000042915344, 0.09759999811649323, 0.0]], [[0.19519999623298645, 0.09759999811649323, 0.0]], [[0.0, 0.12199999392032623, 0.0]], [[0.024399999529123306, 0.12199999392032623, 0.0]], [[0.04879999905824661, 0.12199999392032623, 0.0]], [[0.07320000231266022, 0.12199999392032623, 0.0]], [[0.09759999811649323, 0.12199999392032623, 0.0]], [[0.12199999392032623, 0.12199999392032623, 0.0]], [[0.14640000462532043, 0.12199999392032623, 0.0]], [[0.17080000042915344, 0.12199999392032623, 0.0]], [[0.19519999623298645, 0.12199999392032623, 0.0]]], "chessboard_inner_corners": [9, 6], "chessboard_spacing_m": 0.0244, "image_resolution": [5184, 3456], "camera_matrix": [[5867.4090921039215, 0.0, 2591.4664707471998], [0.0, 5876.8766363948125, 1727.7538744517396], [0.0, 0.0, 1.0]], "distortion_coefficients": [-0.12291868136395147, 0.17975773439133208, -0.0016521312895706893, -0.0016277147608778576, -0.9477441448214109], "rms": 1.18165472725028, "reprojection_error": {"average": 0.1599984889699704, "stddev": 0.01606351148691206, "image": {"IMG_0038.JPG": 0.17900748896889884, "IMG_0039.JPG": 0.17319618177430446, "IMG_0040.JPG": 0.17377706174186094, "IMG_0041.JPG": 0.15997711514246335, "IMG_0042.JPG": 0.1522620873722632, "IMG_0043.JPG": 0.12883564233138575, "IMG_0044.JPG": 
0.1529338454586162}}, "sensor_size_mm": [22.3, 14.9], "fov_degrees": [47.668066111687885, 32.77023060141913], "focal_length_mm": 25.239819204073584, "principal_point_mm": [11.147704918530586, 7.44893886844066], "aspect_ratio": 1.001613581760241, "chessboard_orientations": {"IMG_0038.JPG": {"rotation_matrix": [[0.9638256127801834, 0.2660956484063527, -0.015273966351494084], [-0.20532357930189296, 0.7778003305671171, 0.5940276706958547], [0.16994827428239187, -0.5694029782753918, 0.8042995911969313]], "translation": [-0.11411107344446052, -0.036870542059140046, 0.4083466639013188]}, "IMG_0039.JPG": {"rotation_matrix": [[0.9842454578693873, 0.17464622346781816, -0.027560393536901956], [-0.12107089517814124, 0.7793335140591141, 0.6148016852652863], [0.1288515308571983, -0.6017790046958281, 0.7882000459928019]], "translation": [-0.10910525794452375, -0.040644370590836115, 0.42652093532269764]}, "IMG_0040.JPG": {"rotation_matrix": [[0.9324086570152443, -0.3605510978740046, 0.02483952787148909], [0.26339828272948146, 0.7250077163634803, 0.636384440309921], [-0.2474579580013313, -0.58682767234961, 0.7709720111564038]], "translation": [-0.0826758542246631, -0.08290522518207151, 0.5006094975762476]}, "IMG_0041.JPG": {"rotation_matrix": [[0.761111611502524, -0.6479497114557273, 0.029500614577848056], [0.44731868248208456, 0.5572899062245408, 0.6995240930252041], [-0.4696968289636656, -0.5192197336818799, 0.7139998298436736]], "translation": [-0.047672624532371595, -0.093888538450347, 0.5764577134643116]}, "IMG_0042.JPG": {"rotation_matrix": [[0.9999116075849298, 0.003900077070625872, -0.012710877851846701], [0.00398695129629011, 0.8240631145846876, 0.5664839692350924], [0.012683896730989233, -0.5664845739998815, 0.8239747242384868]], "translation": [-0.0985383418889183, -0.05873942375168862, 0.4636656674393311]}, "IMG_0043.JPG": {"rotation_matrix": [[0.9999759083576717, -0.0052486095520540895, -0.004542554569775528], [0.00542512272314208, 0.9991945483280927, 
0.03975956029507437], [0.004330213353652383, -0.03978324633798622, 0.9991989504413639]], "translation": [-0.09892673347885642, -0.0576657122735982, 0.4300952295753565]}, "IMG_0044.JPG": {"rotation_matrix": [[0.9999724909176403, 0.002005292152549492, -0.007141163165256628], [0.003842881325721777, 0.6834114832257382, 0.7300232714498306], [0.006344262848141891, -0.7300306318221034, 0.6833849771031949]], "translation": [-0.09443949512662748, -0.047857833271829485, 0.5285156222586874]}}} -------------------------------------------------------------------------------- /canon-efs-24mm-crop1.6/go.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | ../calibrate.py -c9x6 -s 0.0244 -d debug -j calib.json -S 22.3x14.9 *.JPG 3 | -------------------------------------------------------------------------------- /import_calibration_data.py: -------------------------------------------------------------------------------- 1 | import sys, json, os 2 | from math import radians 3 | import numpy 4 | import bpy 5 | from mathutils import Matrix 6 | 7 | scene = bpy.context.scene 8 | 9 | # Clear scene! 
10 | bpy.ops.object.select_all(action='SELECT') 11 | bpy.ops.object.delete() 12 | 13 | try: 14 | idx = sys.argv.index('--') 15 | except ValueError: 16 | print('usage: blender -P %s -- calib.json' % sys.argv[0]) 17 | sys.exit(-1) 18 | 19 | if idx == len(sys.argv) - 1: 20 | print('usage: blender -P %s -- calib.json' % sys.argv[0]) 21 | sys.exit(-1) 22 | 23 | # Load calibration file 24 | 25 | calib_file = sys.argv[idx+1] 26 | j = json.load(open(calib_file, 'rt')) 27 | 28 | calib_dir = os.path.split(calib_file)[0] 29 | 30 | # Scene resolution 31 | 32 | W, H = j['image_resolution'] 33 | 34 | scene.render.resolution_x = W 35 | scene.render.resolution_y = H 36 | 37 | # Chessboard 38 | 39 | # Corner vertices 40 | chessboard_points = numpy.array(j['chessboard_points'], 'float32') 41 | 42 | chessboard_mesh = bpy.data.meshes.new(name='chessboard corners') 43 | chessboard_mesh.vertices.add(chessboard_points.shape[0]) 44 | chessboard_mesh.vertices.foreach_set('co', chessboard_points.flatten()) 45 | chessboard_mesh.update() 46 | #if chessboard_mesh.validate(verbose=True): 47 | # print('Mesh data did not validate!') 48 | 49 | chessboard_object = bpy.data.objects.new(name='chessboard corners', object_data=chessboard_mesh) 50 | bpy.context.scene.collection.objects.link(chessboard_object) 51 | 52 | # Textured quad 53 | 54 | spacing = j['chessboard_spacing_m'] 55 | corners = j['chessboard_inner_corners'] 56 | 57 | vertices = numpy.array([ 58 | -spacing, -spacing, 0, 59 | spacing*corners[0], -spacing, 0, 60 | spacing*corners[0], spacing*corners[1], 0, 61 | -spacing, spacing*corners[1], 0 62 | ], 'float32') 63 | indices = numpy.array([0, 1, 2, 3], 'uint32') 64 | loop_start = numpy.array([0], 'uint32') 65 | loop_total = numpy.array([4], 'uint32') 66 | uvs = numpy.array([ 67 | 0, 0, 68 | 1, 0, 69 | 1, 1, 70 | 0, 1 71 | ], 'float32') 72 | 73 | m = bpy.data.meshes.new(name='chessboard') 74 | m.vertices.add(4) 75 | m.vertices.foreach_set('co', vertices) 76 | m.loops.add(4) 77 | 
m.loops.foreach_set('vertex_index', indices) 78 | m.polygons.add(1) 79 | m.polygons.foreach_set('loop_start', loop_start) 80 | m.polygons.foreach_set('loop_total', loop_total) 81 | uv_layer = m.uv_layers.new(name='uvs') 82 | uv_layer.data.foreach_set('uv', uvs) 83 | m.update() 84 | if m.validate(verbose=True): 85 | print('Mesh data did not validate!') 86 | 87 | mat = bpy.data.materials.new('chessboard') 88 | mat.use_nodes = True 89 | nodes = mat.node_tree.nodes 90 | nodes.clear() 91 | texcoord = nodes.new(type='ShaderNodeTexCoord') 92 | texcoord.location = 0, 300 93 | mapping = nodes.new(type='ShaderNodeMapping') 94 | mapping.location = 200, 300 95 | mapping.inputs['Scale'].default_value = (corners[0]+1, corners[1]+1, 1) 96 | checktex = nodes.new(type='ShaderNodeTexChecker') 97 | checktex.location = 400, 300 98 | checktex.inputs['Color2'].default_value = 0, 0, 0, 1 99 | checktex.inputs['Scale'].default_value = 1.0 100 | emission = nodes.new(type='ShaderNodeEmission') 101 | emission.location = 600, 300 102 | node_output = nodes.new(type='ShaderNodeOutputMaterial') 103 | node_output.location = 800, 300 104 | links = mat.node_tree.links 105 | links.new(texcoord.outputs['UV'], mapping.inputs['Vector']) 106 | links.new(mapping.outputs['Vector'], checktex.inputs['Vector']) 107 | links.new(checktex.outputs['Color'], emission.inputs['Color']) 108 | links.new(emission.outputs['Emission'], node_output.inputs['Surface']) 109 | 110 | m.materials.append(mat) 111 | 112 | o = bpy.data.objects.new(name='chessboard', object_data=m) 113 | bpy.context.scene.collection.objects.link(o) 114 | # Hide by default 115 | o.hide_set(True) 116 | 117 | # Cameras 118 | 119 | camera_collection = bpy.data.collections['Collection'] 120 | camera_collection.name = 'Cameras' 121 | 122 | if 'sensor_size_mm' not in j: 123 | print('Warning: camera sensor size value not available, you need to set it manually!') 124 | if 'fov_degrees' not in j: 125 | print('Warning: camera FOV not available, you need to 
set it manually!') 126 | 127 | for img_file, values in j['chessboard_orientations'].items(): 128 | 129 | camdata = bpy.data.cameras.new(name=img_file) 130 | 131 | if 'sensor_size_mm' in j: 132 | camdata.sensor_fit = 'HORIZONTAL' 133 | camdata.sensor_width = j['sensor_size_mm'][0] 134 | 135 | if 'fov_degrees' in j: 136 | camdata.lens_unit = 'FOV' 137 | camdata.angle = radians(j['fov_degrees'][0]) 138 | 139 | M = j['camera_matrix'] 140 | fx = M[0][0] 141 | fy = M[1][1] 142 | cx = M[0][2] 143 | cy = M[1][2] 144 | 145 | pixel_aspect = fy / fx 146 | if pixel_aspect > 1: 147 | scene.render.pixel_aspect_x = 1.0 148 | scene.render.pixel_aspect_y = pixel_aspect 149 | else: 150 | scene.render.pixel_aspect_x = 1.0 / pixel_aspect 151 | scene.render.pixel_aspect_y = 1.0 152 | 153 | # Thanks to https://www.rojtberg.net/1601/from-blender-to-opencv-camera-and-back/ 154 | camdata.shift_x = -(cx / W - 0.5) 155 | camdata.shift_y = (cy - 0.5 * H) / W 156 | 157 | camobj = bpy.data.objects.new(img_file, camdata) 158 | 159 | t = values['translation'] 160 | R = Matrix(values['rotation_matrix']) 161 | 162 | # Object xform (chessboard into camera space) 163 | #camobj.matrix_world = Matrix.Translation(t) @ R.to_4x4() 164 | 165 | # Camera xform (inverse, camera into chessboard space) 166 | camobj.matrix_world = (Matrix.Rotation(radians(180), 4, 'X') @ Matrix.Translation(t) @ R.to_4x4()).inverted() 167 | 168 | # Background image 169 | basename, ext = os.path.splitext(img_file) 170 | img_file_original = os.path.join(calib_dir, img_file) 171 | if os.path.isfile(img_file_original): 172 | img = bpy.data.images.load(img_file_original) 173 | assert img is not None 174 | camdata.show_background_images = True 175 | bg = camdata.background_images.new() 176 | bg.image = img 177 | else: 178 | print("Image %s not available, can't set as background on camera" % img_file) 179 | 180 | camera_collection.objects.link(camobj) 181 | 182 | 
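The 180-degree rotation about X applied above when building camobj.matrix_world bridges the two camera conventions: OpenCV cameras look down +Z with Y pointing down, while Blender cameras look down -Z with Y pointing up. A small numpy sketch, using a synthetic pose rather than values from a real calibration, of how a chessboard point moves between the two spaces:

```python
import numpy as np

# Synthetic pose: chessboard parallel to the image plane, 0.5 m ahead
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])

# OpenCV convention: R @ p + t maps chessboard object space into camera
# space, with +Z pointing along the view direction
p_board = np.array([0.0, 0.0, 0.0])   # chessboard origin
p_cam_cv = R @ p_board + t            # in front of the camera: positive Z

# Blender cameras look down -Z with Y up; negating Y and Z (a 180-degree
# rotation about X) converts between the two conventions
flip_x = np.diag([1.0, -1.0, -1.0])
p_cam_blender = flip_x @ p_cam_cv     # in front of the camera: negative Z
```

Inverting the combined transform, as the script does, then yields the camera's placement in chessboard space.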
-------------------------------------------------------------------------------- /pattern.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/pattern.png -------------------------------------------------------------------------------- /ref.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # Reference code after https://docs.opencv.org/4.5.2/dc/dbb/tutorial_py_calibration.html 3 | import numpy as np 4 | import cv2 as cv 5 | import sys, glob 6 | 7 | # termination criteria 8 | criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001) 9 | 10 | CORNERS_W = 9 11 | CORNERS_H = 6 12 | 13 | # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0) 14 | objp = np.zeros((CORNERS_W*CORNERS_H,3), np.float32) 15 | objp[:,:2] = np.mgrid[0:CORNERS_W,0:CORNERS_H].T.reshape(-1,2) 16 | 17 | # Arrays to store object points and image points from all the images. 18 | objpoints = [] # 3d point in real world space 19 | imgpoints = [] # 2d points in image plane. 
20 | 21 | images = sys.argv[1:] 22 | 23 | for fname in images: 24 | img = cv.imread(fname) 25 | gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY) 26 | # Find the chess board corners 27 | ret, corners = cv.findChessboardCorners(gray, (CORNERS_W,CORNERS_H), None) 28 | # If found, add object points, image points (after refining them) 29 | if ret: 30 | objpoints.append(objp) 31 | corners2 = cv.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria) 32 | imgpoints.append(corners2) 33 | # Draw and display the corners 34 | cv.drawChessboardCorners(img, (CORNERS_W,CORNERS_H), corners2, ret) 35 | cv.imshow('img', img) 36 | cv.waitKey(100) 37 | else: 38 | print('No chessboard found in %s' % fname) 39 | 40 | ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None) 41 | 42 | print(ret, mtx, dist, rvecs, tvecs) 43 | 44 | # Undistort one of the images 45 | 46 | img = cv.imread(images[0]) 47 | h, w = img.shape[:2] 48 | newcameramtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h)) 49 | 50 | dst = cv.undistort(img, mtx, dist, None, newcameramtx) 51 | # crop the image 52 | x, y, w, h = roi 53 | dst = dst[y:y+h, x:x+w] 54 | cv.imwrite('calibresult.png', dst) 55 | 56 | # Compute reprojection error 57 | 58 | mean_error = 0 59 | for i in range(len(objpoints)): 60 | imgpoints2, _ = cv.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist) 61 | error = cv.norm(imgpoints[i], imgpoints2, cv.NORM_L2)/len(imgpoints2) 62 | mean_error += error 63 | print( "total error: {}".format(mean_error/len(objpoints)) ) 64 | 65 | cv.destroyAllWindows() -------------------------------------------------------------------------------- /sample/frame-0.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-0.png 
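The reprojection loop in ref.py treats cv.projectPoints as a black box. For the standard (non-fisheye) model its behaviour can be sketched in plain numpy; the project() helper below is an illustration only, including a minimal Rodrigues conversion in place of cv.Rodrigues, and the camera values are synthetic:

```python
import numpy as np

def project(obj_points, rvec, tvec, K, dist):
    """Project Nx3 object points with Rodrigues rotation vector rvec,
    translation tvec, camera matrix K and distortion (k1, k2, p1, p2, k3)."""
    # Rodrigues rotation vector -> rotation matrix
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        Kx = np.array([[0, -k[2], k[1]],
                       [k[2], 0, -k[0]],
                       [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)
    pc = obj_points @ R.T + tvec            # object space -> camera space
    x = pc[:, 0] / pc[:, 2]                 # perspective divide
    y = pc[:, 1] / pc[:, 2]
    r2 = x * x + y * y
    k1, k2, p1, p2, k3 = dist
    # Radial and tangential distortion, as in the standard OpenCV model
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Apply the camera matrix to get pixel coordinates
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return np.stack([u, v], axis=1)

# Synthetic camera: 1000 px focal length, principal point at (640, 360)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
pts = project(np.array([[0.0, 0.0, 0.0]]),
              np.zeros(3), np.array([0.0, 0.0, 1.0]),
              K, np.zeros(5))
```

With zero rotation, a unit translation along +Z and no distortion, a point at the chessboard origin lands exactly on the principal point.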
-------------------------------------------------------------------------------- /sample/frame-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-1.png -------------------------------------------------------------------------------- /sample/frame-10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-10.png -------------------------------------------------------------------------------- /sample/frame-11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-11.png -------------------------------------------------------------------------------- /sample/frame-12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-12.png -------------------------------------------------------------------------------- /sample/frame-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-2.png -------------------------------------------------------------------------------- /sample/frame-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-3.png -------------------------------------------------------------------------------- /sample/frame-4.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-4.png -------------------------------------------------------------------------------- /sample/frame-5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-5.png -------------------------------------------------------------------------------- /sample/frame-6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-6.png -------------------------------------------------------------------------------- /sample/frame-7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-7.png -------------------------------------------------------------------------------- /sample/frame-8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-8.png -------------------------------------------------------------------------------- /sample/frame-9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paulmelis/opencv-camera-calibration/a910fcf71c9627cae5c6ca9cc9867c7a48763aba/sample/frame-9.png -------------------------------------------------------------------------------- /sample/readme.txt: -------------------------------------------------------------------------------- 1 | From 
https://learnopencv.com/camera-calibration-using-opencv/, split gif into images 2 | --------------------------------------------------------------------------------