├── .pylintrc
├── LICENSE
├── README.md
├── calibrate_camera.py
├── car.mp4
├── get_projection_maps.py
├── paramsettings.png
├── paramsettings.py
├── readme.txt
├── show_undistorted_cameras.py
├── show_undistorted_image.py
├── stitch_images.py
├── stitch_images_using_weightFile.py
├── surroundview.py
├── testPythonCmd.py
└── yaml
├── back.png
├── back.yaml
├── bad.png
├── car.png
├── choose_back.png
├── choose_front.png
├── choose_left.png
├── choose_right.png
├── corners.png
├── detect_lines.png
├── front.png
├── front.yaml
├── front_proj.png
├── left.png
├── left.yaml
├── left_proj.png
├── mask.png
├── mask_after_dilate.png
├── mask_before_dilate.png
├── original.png
├── overlap.png
├── overlap_gray.png
├── result.png
├── right.png
├── right.yaml
├── weight_for_A.png
└── weights.yaml
/.pylintrc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/.pylintrc
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2020 Zhao Liang
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: "A Simple and Practical Implementation of Vehicle Surround-View Bird's-Eye Stitching"
3 | tags: [OpenCV, Python]
4 | categories: [Autonomous Driving]
5 | date: 2019-10-05
6 | url: "vehicle-surround-view-system"
7 | ---
8 |
9 | This is an introductory article: it explains the principles behind a vehicle surround-view bird's-eye system and gives a concrete Python implementation. The material is fairly elementary; the reader only needs to know camera calibration and perspective transformations, and how to use OpenCV.
10 |
11 | All code and materials used in this article are hosted on [github](https://github.com/neozhaoliang/surround-view-system-introduction).
12 |
13 |
14 |
15 |
16 | First, a video of the result:
17 |
18 |
19 |
20 |
21 | The demo uses a small driverless cart carrying four fisheye surround-view cameras. Each camera's frames are first undistorted, then mapped by a projective transformation to a bird's-eye view of the ground, and finally stitched together and smoothed to produce the result above. The code is pure Python and runs on an AGX Xavier. The camera streams are 640x480 and all processing happens on the CPU; the video is real-time and very smooth.
22 |
23 |
24 | Below I describe my implementation step by step. The implementation is rough and only meant as a demo of the pipeline for generating a panoramic bird's-eye view, so take the ideas rather than the details. Still, its efficiency and real-time behavior are fully comparable to the systems shipped in commercial vehicles (at least in this demo). I have also tested the program on a passenger car equipped with a surround-view system, with consistent results.
25 |
26 |
27 | # Hardware and software setup
28 |
29 | The hardware used in this project:
30 |
31 | 1. Four USB fisheye cameras, supporting three resolutions: 640x480, 800x600, and 1920x1080. Since this has to run in real time in Python, I chose 640x480 for efficiency.
32 |
33 | 2. A small cart; nothing special to say here.
34 |
35 | 3. An AGX Xavier. In practice the program runs just as smoothly on an ordinary laptop.
36 |
37 | The software setup:
38 |
39 | 1. OS: Ubuntu 16.04.
40 | 2. Python>=3.
41 | 3. OpenCV >= 3 (this project uses OpenCV's line segment detector, which is unavailable in some OpenCV >= 3 releases, so pick a release that still ships it).
42 |
43 | The four cameras are mounted around the vehicle and connected to the AGX over USB.
44 |
45 | > **Note**: for convenience, in this article and the demo code the four cameras are called `front`, `back`, `left`, `right`, and their USB device numbers are assumed to be 0, 1, 2, 3 respectively. In practice the device number assigned to each camera depends on the power-up order, so adapt this to your setup. With gmsl cameras this is not a concern.
46 |
47 |
48 | # Preparation: raw images and camera intrinsics
49 |
50 | First we need the intrinsic matrix and distortion coefficients of each camera. The project ships a script, [calibrate_camera.py](https://github.com/neozhaoliang/surround-view-system-introduction/blob/master/calibrate_camera.py): run it, pass the camera device number, whether it is a fisheye camera, and the grid size of the calibration pattern on the command line, then hold the calibration board in front of the camera in a few poses. The result is saved to a `yaml` file in the `yaml` subdirectory. Throughout, `K` denotes the camera's 3x3 intrinsic matrix and `D` its four distortion coefficients.
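
For example (a sketch; the device number, grid size, and file name are illustrative), you would calibrate camera 0 and read the saved intrinsics back like this:

```python
# first run the calibration script, e.g.:
#   python calibrate_camera.py -i 0 -grid 9x6 -resolution 640x480 -o ./yaml --fisheye
import numpy as np
import yaml

# calibrate_camera.py names the output file after the input device
with open("./yaml/camera0.yaml", "r") as f:
    data = yaml.load(f, Loader=yaml.SafeLoader)

K = np.array(data["K"])   # 3x3 intrinsic matrix
D = np.array(data["D"])   # four fisheye distortion coefficients
```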
51 |
52 | Below are the raw frames captured by the four cameras in the video, in the order front, back, left, right, named `front.png`, `back.png`, `left.png`, `right.png`.
53 |
54 |
55 |
56 |
57 |
58 |
59 | The intrinsic files of the four cameras are `front.yaml`, `back.yaml`, `left.yaml`, `right.yaml`. These images and intrinsic files all live in the project's [yaml](https://github.com/neozhaoliang/surround-view-system-introduction/tree/master/yaml) subdirectory.
60 |
61 | The project also includes a helper script, [show_undistorted_cameras.py](https://github.com/neozhaoliang/surround-view-system-introduction/blob/master/show_undistorted_cameras.py), which displays the undistorted views of all four cameras together with their device numbers, making it easy to check the undistortion quality and the camera-to-device mapping.
62 |
63 |
64 | # Setting the projection region and parameters
65 |
66 |
67 | Next we need the projection matrix from each camera to the ground; it maps the undistorted camera view to a bird's-eye view of a rectangular region of the ground. The four projection matrices are not independent: the projected regions must fit together exactly.
68 |
69 | This is done by joint calibration: lay calibration boards on the ground around the vehicle, capture images, manually select corresponding points, and compute the projection matrices.
70 |
71 | See the figure below:
72 |
73 |
74 |
75 | First place four calibration boards at the four corners of the vehicle. There is no special requirement on the board pattern or size, as long as all four are identical and clearly visible in the images. Each board should sit exactly inside the overlapping field of view of the two adjacent cameras.
76 |
77 | In the camera images above a full ring of boards was laid around the vehicle; at the time that was mainly to keep the placement tidy. Four boards are actually enough.
78 |
79 | Then we need to set a few parameters (all in centimeters):
80 |
81 | + `chessboardSize`: the side length of the calibration boards. We used 80x80 foam boards here.
82 | + `innerShiftWidth`, `innerShiftHeight`: the distance from the boards' inner edges to the left/right sides of the vehicle, and to its front/back, respectively.
83 | + `carWidth`, `carHeight`: the gap between the left and right boards, and between the front and back boards. These equal the vehicle's width/height plus twice `innerShiftWidth` / `innerShiftHeight`.
84 | + `shiftWidth`, `shiftHeight`: how far the bird's-eye view looks beyond the boards. The larger these values, the larger the visible range, but the more severely distant objects are distorted by the projection, so choose them judiciously.
85 |
86 | The total width of the bird's-eye view is then
87 |
88 | ```
89 | totalWidth = carWidth + 2 * chessboardSize + 2 * shiftWidth
90 | ```
91 | and the total height is
92 | ```
93 | totalHeight = carHeight + 2 * chessboardSize + 2 * shiftHeight
94 | ```
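
Plugging in the concrete values from `paramsettings.py` gives a 560x680 canvas:

```python
carWidth, carHeight = 160, 240    # values from paramsettings.py (cm)
chessboardSize = 80
shiftWidth = shiftHeight = 120

totalWidth = carWidth + 2 * chessboardSize + 2 * shiftWidth     # 560
totalHeight = carHeight + 2 * chessboardSize + 2 * shiftHeight  # 680
```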
95 |
96 | The figure also marks the four corners of the rectangle occupied by the vehicle (red dots); their coordinates are $(x_1,y_1), (x_2,y_1), (x_1,y_2), (x_2,y_2)$. No camera can see this rectangle, so we will cover it with a logo image of the vehicle.
97 |
98 | Note that extending the four sides of this vehicle rectangle divides the bird's-eye view into eight parts: front-left (FL), front-middle (F), front-right (FR), left (L), right (R), back-left (BL), back-middle (B), and back-right (BR). FL, FR, BL, BR are the overlap regions of adjacent cameras and are where the blending effort goes; F, B, L, R each belong to a single camera's view and need no blending.
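
In code these regions are just array slices of the final canvas; a sketch of the layout used in `stitch_images.py`, with `x1, x2, y1, y2` from `paramsettings.py`:

```python
# result is the (totalHeight, totalWidth, 3) bird's-eye canvas
FL = result[:y1, :x1]      # front-left overlap
F  = result[:y1, x1:x2]    # front camera only
FR = result[:y1, x2:]      # front-right overlap
L  = result[y1:y2, :x1]    # left camera only
R  = result[y1:y2, x2:]    # right camera only
BL = result[y2:, :x1]      # back-left overlap
B  = result[y2:, x1:x2]    # back camera only
BR = result[y2:, x2:]      # back-right overlap
```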
99 |
100 | These parameters live in [paramsettings.py](https://github.com/neozhaoliang/surround-view-system-introduction/blob/master/paramsettings.py).
101 |
102 | Once the parameters are set, the projection region of each camera is determined. For example, the front camera's region looks like this:
103 |
104 |
105 |
106 | Next we obtain the projection matrices to the ground by manually selecting marker points.
107 |
108 |
109 | # Manual calibration of the projection matrices
110 |
111 | First run the script [get_projection_maps.py](https://github.com/neozhaoliang/surround-view-system-introduction/blob/master/get_projection_maps.py) from the project. It starts by displaying the undistorted camera image:
112 |
113 |
114 |
115 | > **Note**: by default OpenCV's undistortion crops the fisheye image, returning a region that OpenCV "considers" appropriate, which may cut off parts of our calibration boards. Fortunately, [cv2.fisheye.initUndistortRectifyMap](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#ga0d37b45f780b32f63ed19c21aa9fd333) lets us pass a new intrinsic matrix that applies an extra scaling and translation to the undistorted image. You can choose different horizontal and vertical scaling factors per camera.
116 | >
117 | > In this project I did not use this feature, but the code exposes `scale_x`, `scale_y` as hooks; modify these two values to adjust the visible range of the undistorted image.
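
Concretely, the scaling works by rescaling the focal entries of `K` before building the remap tables, as done in `get_projection_maps.py`:

```python
new_K = K.copy()
new_K[0, 0] *= scale_x   # horizontal scaling
new_K[1, 1] *= scale_y   # vertical scaling
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (W, H), cv2.CV_16SC2)
image = cv2.remap(image, map1, map2,
                  interpolation=cv2.INTER_LINEAR,
                  borderMode=cv2.BORDER_CONSTANT)
```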
118 |
119 | Then click the four predetermined marker points in order (the order matters!), which looks like this:
120 |
121 |
122 |
123 | These four marker points can be chosen freely, but you must manually edit their pixel coordinates in the bird's-eye view in the program. When you click the four points in the undistorted image, OpenCV computes a projective matrix from the correspondence between their pixel coordinates in the undistorted image and in the bird's-eye view. The underlying principle is that four point correspondences determine a projective transformation.
124 |
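This is what `cv2.getPerspectiveTransform` does in `get_projection_maps.py`; a minimal sketch (the `dst` values shown are the front-camera targets implied by `paramsettings.py` with `calibDist = 50`):

```python
import cv2
import numpy as np

# the four clicked points in the undistorted image, in click order
src = np.float32(corners)
# their target pixel coordinates in the bird's-eye view
dst = np.float32([[200, 120], [360, 120], [200, 170], [360, 170]])

M = cv2.getPerspectiveTransform(src, dst)                 # 3x3 projective matrix
warped = cv2.warpPerspective(image, M, (totalWidth, y1))  # front bird's-eye view
```
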
125 | If you misclick, press `d` to delete the last point. Once all four points are selected, press Enter and the projected view is displayed:
126 |
127 |
128 |
129 | Then press `q` to quit, and the projection matrix is written into `front.yaml` under the name `M`.
130 |
131 | Worried about the black areas this camera cannot see? No problem: the bird's-eye views of the left and right cameras will fill them in.
132 |
133 | As another example, the calibration of the left camera looks like this:
134 |
135 |
136 |
137 | and the corresponding projected view is
138 |
139 |
140 |
141 | Repeating this for each of the four cameras gives the four bird's-eye views and the four corresponding projection matrices. The next task is to stitch these four views together.
142 |
143 | # Stitching and smoothing the bird's-eye views
144 |
145 | If everything so far worked, running [stitch_images.py](https://github.com/neozhaoliang/surround-view-system-introduction/blob/master/stitch_images.py) should display the following stitched image:
146 |
147 |
148 |
149 | It reads the four raw camera images and the four `yaml` files, undistorts and projects each view onto the ground, then stitches them together and smooths the result.
150 |
151 | Let me explain step by step how this is done:
152 |
153 | 1. Adjacent cameras overlap, so blending the overlap regions is the crux. If you fuse the two images with a plain fixed-weight average, you get something like this:
154 |
155 |
156 |
157 | The key point is that the blending weight must vary from pixel to pixel, and vary continuously.
158 |
159 | 2. Take the top-left region as an example; it is the overlap of the `front` and `left` views. First extract the overlapping part of the two projected images:
160 |
161 |
162 |
163 | Convert it to grayscale:
164 |
165 |
166 |
167 | Threshold it to a binary image:
168 |
169 |
170 |
171 | Note the noise, which can be removed with morphological operations:
172 |
173 |
174 |
175 | This gives a clean mask of the overlap region.
176 |
177 | 3. Detect the two line segments on the boundary of the overlap region (keep only the two longest; during projection calibration, `scale_x|scale_y` must be tuned so that the overlap boundary has the shape below, which is a limitation of this method):
178 |
179 |
180 |
181 | This uses OpenCV's `cv2.createLineSegmentDetector`, which is unavailable in some OpenCV >= 3 releases.
182 |
183 | 4. For each pixel in the overlap, compute its distances $d_1, d_2$ to these two segments; the blending weights of that pixel are $w=d_1^2/(d_1^2+d_2^2)$ and $1-w$ (a code sketch follows this list).
184 |
185 | 5. For pixels outside the overlap, the weight is 1 if the pixel belongs to the `front` camera's region and 0 otherwise. This yields a continuously varying matrix $G$ with values in $[0, 1]$, whose grayscale image looks like this:
186 |
187 |
188 |
189 | Blending with $G$ as the weight produces a smooth bird's-eye view.
190 |
191 | 6. One last important step: the cameras have different exposures, so different regions differ in brightness, which looks bad. We adjust the brightness of each region so that the whole stitched image is uniformly lit. This is done by multiplying each of the stitched image's RGB channels by a constant. The three constants are computed as follows: for each of the four overlap regions FL, FR, BL, BR, compute the per-channel RGB means $(A_r,A_g,A_b)$ and $(B_r, B_g,B_b)$ of the two adjacent cameras $A, B$ in that region, then take the ratios $(A_r/B_r, A_g/B_g, A_b/B_b)$. From the four overlap regions we get four triples of ratios:
192 |
193 | 1. right/front: $(a_1,a_2,a_3)$
194 | 2. back/right: $(b_1,b_2,b_3)$
195 | 3. left/back: $(c_1,c_2,c_3)$
196 | 4. front/left: $(d_1,d_2,d_3)$
197 |
198 | Define $$\begin{align*}e_1&=(a_1 + b_1 + c_1 + d_1) / (a_1^2 + b_1^2 + c_1^2 + d_1^2),\\
199 | e_2 &= (a_2 + b_2 + c_2 + d_2) / (a_2^2 + b_2^2 + c_2^2 + d_2^2),\\
200 | e_3 &= (a_3 + b_3 + c_3 + d_3) / (a_3^2 + b_3^2 + c_3^2 + d_3^2).\end{align*}$$
201 |
202 | and multiply $e_1, e_2, e_3$ onto the three RGB channels of the stitched image.
203 |
204 |
205 | I learned this brightness-balancing trick from an article, but I no longer remember the source.
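
The sketch below condenses steps 3 through 6 as they appear in `stitch_images.py` (`dline`, `linelen`, `rgb_ratio`, the binary `mask`, and the region slices are all defined there; `G` is the weight image, pre-filled with 1 on camera A's exclusive pixels):

```python
# weight matrix for one overlap region (condensed from get_weight_matrix)
lsd = cv2.createLineSegmentDetector(0)
lines = lsd.detect(mask)[0]                            # mask: binary overlap mask
lines = sorted(lines, key=linelen, reverse=True)[:2]   # keep the two longest
lx1, ly1, lx2, ly2 = lines[0][0]
mx1, my1, mx2, my2 = lines[1][0]
for y, x in zip(*np.where(mask == 255)):
    d1 = dline(x, y, lx1, ly1, lx2, ly2) ** 2
    d2 = dline(x, y, mx1, my1, mx2, my2) ** 2
    G[y, x] = d1 / (d1 + d2)                           # w for camera A, 1 - w for B

# brightness balance: per-channel ratios of adjacent cameras in each overlap
a1, a2, a3 = rgb_ratio(right[:y1], front[:, x2:])      # right/front
b1, b2, b3 = rgb_ratio(back[:, x2:], right[y2:])       # back/right
c1, c2, c3 = rgb_ratio(left[y2:], back[:, :x1])        # left/back
d1, d2, d3 = rgb_ratio(front[:, :x1], left[:y1])       # front/left
e1 = (a1 + b1 + c1 + d1) / (a1*a1 + b1*b1 + c1*c1 + d1*d1)
e2 = (a2 + b2 + c2 + d2) / (a2*a2 + b2*b2 + c2*c2 + d2*d2)
e3 = (a3 + b3 + c3 + d3) / (a3*a3 + b3*b3 + c3*c3 + d3*d3)
# multiply e1, e2, e3 onto the B, G, R channels of the stitched image
```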
206 |
207 |
208 | # Implementation notes
209 |
210 | What I described is only a "course project"-grade implementation; quite a bit of work remains before it could become a commercial product:
211 |
212 | 1. The calibration workflow should get a UI and be automated.
213 | 2. The blending algorithm for the overlap regions needs improvement; the one used here is rather crude.
214 | 3. The brightness adjustment can also be improved: when one side of the vehicle is in the dark and the other in the light, a brightness difference still shows.
215 | 4. Finally, we need a lookup table: the undistortion, projection, and stitching steps only have to run once at calibration time, and at runtime the bird's-eye view can be read directly from the lookup table and the weight matrices (see the sketch below).
216 |
217 | Other topics such as frame synchronization, buffer design, and multithreading are out of scope here.
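
A minimal sketch of one way to build such a lookup table, composing the fisheye undistortion and the ground homography `M` into a single remap (the function name and structure here are my own, not part of the repo):

```python
import cv2
import numpy as np

def build_lookup_table(K, D, new_K, M, cam_size, out_size):
    """Bake undistortion + ground projection into one remap table (done once)."""
    W, H = cam_size
    outW, outH = out_size
    # float32 maps: for each undistorted pixel, where to sample the raw image
    ux, uy = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (W, H), cv2.CV_32FC1)
    # for each bird's-eye pixel, its coordinates in the undistorted image
    Minv = np.linalg.inv(M)
    xs, ys = np.meshgrid(np.arange(outW), np.arange(outH))
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = Minv @ pts
    src /= src[2]
    sx = src[0].reshape(outH, outW).astype(np.float32)
    sy = src[1].reshape(outH, outW).astype(np.float32)
    # chain through the undistortion maps to land in the raw fisheye image
    lut_x = cv2.remap(ux, sx, sy, cv2.INTER_LINEAR)
    lut_y = cv2.remap(uy, sx, sy, cv2.INTER_LINEAR)
    return lut_x, lut_y

# at runtime a single remap per camera replaces undistort + warpPerspective:
# birdeye = cv2.remap(raw_frame, lut_x, lut_y, cv2.INTER_LINEAR)
```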
218 |
219 | # Running on the vehicle
220 |
221 | You can run [surroundview.py](https://github.com/neozhaoliang/surround-view-system-introduction/blob/master/surroundview.py) on the vehicle to check the final result.
222 |
223 |
224 | # Appendix: overview of the project scripts
225 |
226 | The scripts in the project, listed in order of execution:
227 |
228 | 1. `calibrate_camera.py`: camera intrinsic calibration.
229 | 2. `show_undistorted_cameras.py`: display the undistorted views and check the device-to-camera mapping.
230 | 3. `paramsettings.py`: set the parameters of the projection regions.
231 | 4. `get_projection_maps.py`: manual calibration to obtain the ground projection matrices.
232 | 5. `stitch_images.py`: display the static stitched bird's-eye view.
233 | 6. `surroundview.py`: the final version that runs on the vehicle.
234 |
--------------------------------------------------------------------------------
/calibrate_camera.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | """
3 | ~~~~~~~~~~~~~~~~~~
4 | Camera calibration
5 | ~~~~~~~~~~~~~~~~~~
6 |
7 | Usage:
8 |     python calibrate_camera.py \
9 |     -i /dev/video0 \
10 |     -grid 9x6 \
11 |     -o ./yaml \
12 |     -framestep 20 \
13 |     -resolution 640x480 \
14 |     --fisheye
15 | """
16 | import argparse
17 | import os
18 | import numpy as np
19 | import yaml
20 | import cv2
21 |
22 |
23 | def main():
24 | parser = argparse.ArgumentParser()
25 | parser.add_argument("-i", "--input", default=0, help="input video file or camera device")
26 | parser.add_argument("-grid", "--grid", default="20x20", help="size of the grid (rows x cols)")
27 | parser.add_argument("-framestep", type=int, default=20, help="use every nth frame in the video")
28 | parser.add_argument("-o", "--output", default="./yaml",
29 | help="path to output yaml file")
30 | parser.add_argument("-resolution", "--resolution", default="640x480",
31 | help="resolution of the camera")
32 | parser.add_argument("-fisheye", "--fisheye", action="store_true",
33 |                         help="set true if this is a fisheye camera")
34 |
35 | args = parser.parse_args()
36 |
37 | if not os.path.exists(args.output):
38 | os.mkdir(args.output)
39 |
40 | try:
41 | source = cv2.VideoCapture(int(args.input))
42 |     except ValueError:
43 | source = cv2.VideoCapture(args.input)
44 |
45 | W, H = [int(x) for x in args.resolution.split("x")]
46 | source.set(3, W)
47 | source.set(4, H)
48 |
49 | grid_size = tuple(int(x) for x in args.grid.split("x"))
50 | grid_points = np.zeros((np.prod(grid_size), 3), np.float32)
51 | grid_points[:, :2] = np.indices(grid_size).T.reshape(-1, 2)
52 |
53 | objpoints = [] # 3d point in real world space
54 | imgpoints = [] # 2d points in image plane
55 |
56 | quit = False
57 | do_calib = False
58 | i = -1
59 | while True:
60 | i += 1
61 | retcode, img = source.read()
62 | if not retcode:
63 | raise ValueError("cannot read frame from video")
64 | if i % args.framestep != 0:
65 | continue
66 |
67 | print("searching for chessboard corners in frame " + str(i) + "...")
68 | gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
69 | found, corners = cv2.findChessboardCorners(
70 | gray,
71 | grid_size,
72 | cv2.CALIB_CB_ADAPTIVE_THRESH +
73 | cv2.CALIB_CB_NORMALIZE_IMAGE +
74 | cv2.CALIB_CB_FILTER_QUADS
75 | )
76 | if found:
77 | term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.01)
78 | cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), term)
79 | print("OK")
80 | imgpoints.append(corners.reshape(1, -1, 2))
81 | objpoints.append(grid_points.reshape(1, -1, 3))
82 | cv2.drawChessboardCorners(img, grid_size, corners, found)
83 | text1 = "press c to calibrate"
84 | text2 = "press q to quit"
85 | text3 = "device: {}".format(args.input)
86 | fontscale = 0.6
87 | cv2.putText(img, text1, (20, 70), cv2.FONT_HERSHEY_SIMPLEX, fontscale, (255, 200, 0), 2)
88 | cv2.putText(img, text2, (20, 110), cv2.FONT_HERSHEY_SIMPLEX, fontscale, (255, 200, 0), 2)
89 | cv2.putText(img, text3, (20, 30), cv2.FONT_HERSHEY_SIMPLEX, fontscale, (255, 200, 0), 2)
90 | cv2.imshow("corners", img)
91 | key = cv2.waitKey(1) & 0xFF
92 | if key == ord("c"):
93 | do_calib = True
94 | break
95 |
96 | elif key == ord("q"):
97 | quit = True
98 | break
99 |
100 |     if quit:
101 |         source.release()
102 |         cv2.destroyAllWindows()
103 |         return
104 | if do_calib:
105 | print("\nPerforming calibration...\n")
106 | N_OK = len(objpoints)
107 | if N_OK < 12:
108 |             print("Fewer than 12 valid frames captured, calibration failed")
109 | return
110 |
111 | K = np.zeros((3, 3))
112 | D = np.zeros((4, 1))
113 | rvecs = [np.zeros((1, 1, 3), dtype=np.float64) for _ in range(N_OK)]
114 | tvecs = [np.zeros((1, 1, 3), dtype=np.float64) for _ in range(N_OK)]
115 | calibration_flags = (cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC +
116 | cv2.fisheye.CALIB_CHECK_COND +
117 | cv2.fisheye.CALIB_FIX_SKEW)
118 |
119 |     # compute the intrinsic matrix and distortion coefficients
120 | if args.fisheye:
121 | ret, mtx, dist, rvecs, tvecs = cv2.fisheye.calibrate(
122 | objpoints,
123 | imgpoints,
124 | (W, H),
125 | K,
126 | D,
127 | rvecs,
128 | tvecs,
129 | calibration_flags,
130 | (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6)
131 | )
132 | else:
133 | ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
134 | objpoints,
135 | imgpoints,
136 | (W, H),
137 | None,
138 | None)
139 | if ret:
140 | print(ret)
141 |             data = {"dim": [W, H], "K": mtx.tolist(), "D": dist.tolist()}
142 | fname = os.path.join(args.output, "camera" + str(args.input) + ".yaml")
143 | print(fname)
144 | with open(fname, "w") as f:
145 | yaml.safe_dump(data, f)
146 |             print("successfully saved camera data")
147 |
148 | cv2.putText(img, "Success!", (220 , 240), cv2.FONT_HERSHEY_COMPLEX, 2, (0, 0, 255), 2)
149 | else:
150 | cv2.putText(img, "Failed!", (220, 240), cv2.FONT_HERSHEY_COMPLEX, 2, (0, 0, 255), 2)
151 |
152 | cv2.imshow("corners", img)
153 | cv2.waitKey(0)
154 |
155 |
156 | if __name__ == "__main__":
157 | main()
158 |
--------------------------------------------------------------------------------
/car.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/car.mp4
--------------------------------------------------------------------------------
/get_projection_maps.py:
--------------------------------------------------------------------------------
1 | """
2 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 | Manually select points to get the projection map
4 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5 | """
6 | import cv2
7 | import numpy as np
8 | import yaml
9 | from paramsettings import *
10 |
11 |
12 | # ----------------------------------------
13 | # global parameters
14 | # name = "front"
15 | # name = "back" # added by Holy 2006061032
16 | # name = "left" # added by Holy 2006061032
17 | name = "right" # added by Holy 2006061032
18 | image_file = "./yaml/{}.png".format(name)
19 | camera_file = "./yaml/{}.yaml".format(name)
20 | output = "./yaml/{}_projMat.yaml".format(name)
21 | # horizontal and vertical scaling factors
22 | scale_x = 1.0
23 | scale_y = 1.0
24 | W, H = 640, 480
25 | colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0), (0, 255, 255)]
26 | corners = []
27 | calibDist = 50
28 | # -----------------------------------------
29 |
30 | shapes = {
31 | "front": frontShape[::-1],
32 | "back": frontShape[::-1],
33 | "left": leftShape[::-1],
34 | "right": leftShape[::-1]
35 | }
36 |
37 | dstF = np.float32([
38 | [shiftWidth + chessboardSize, shiftHeight],
39 | [shiftWidth + chessboardSize + carWidth, shiftHeight],
40 | [shiftWidth + chessboardSize, shiftHeight + calibDist],
41 | [shiftWidth + chessboardSize + carWidth, shiftHeight + calibDist]])
42 |
43 | dstL = np.float32([
44 | [shiftHeight + chessboardSize, shiftWidth],
45 | [shiftHeight + chessboardSize + carHeight, shiftWidth],
46 | [shiftHeight + chessboardSize, shiftWidth + calibDist],
47 | [shiftHeight + chessboardSize + carHeight, shiftWidth + calibDist]])
48 |
49 | dsts = {"front": dstF, "back": dstF, "left": dstL, "right": dstL}
50 |
51 |
52 | def click(event, x, y, flags, param):
53 | image, = param
54 | if event == cv2.EVENT_LBUTTONDOWN:
55 | print(x, y)
56 | corners.append((x, y))
57 | draw_image(image)
58 |
59 |
60 | def draw_image(image):
61 | new_image = image.copy()
62 | for i, point in enumerate(corners):
63 | cv2.circle(new_image, point, 6, colors[i % 4], -1)
64 |
65 | if len(corners) > 2:
66 | pts = np.int32(corners).reshape(-1, 2)
67 | hull = cv2.convexHull(pts)
68 | mask = np.zeros(image.shape[:2], np.uint8)
69 | cv2.fillConvexPoly(mask, hull, color=1, lineType=8, shift=0)
70 | temp = np.zeros_like(image, np.uint8)
71 | temp[:, :] = [0, 0, 255]
72 | imB = cv2.bitwise_and(temp, temp, mask=mask)
73 | cv2.addWeighted(new_image, 1.0, imB, 0.5, 0.0, new_image)
74 |
75 | cv2.imshow("original image", new_image)
76 |
77 |
78 | def main():
79 | image = cv2.imread(image_file)
80 | with open(camera_file, "r") as f:
81 | data = yaml.load(f, Loader=yaml.SafeLoader)
82 |
83 | K = np.array(data["K"])
84 | D = np.array(data["D"])
85 | new_K = K.copy()
86 | new_K[0, 0] *= scale_x
87 | new_K[1, 1] *= scale_y
88 | map1, map2 = cv2.fisheye.initUndistortRectifyMap(
89 | K,
90 | D,
91 | np.eye(3),
92 | new_K,
93 | (W, H),
94 | cv2.CV_16SC2
95 | )
96 | image = cv2.remap(
97 | image,
98 | map1,
99 | map2,
100 | interpolation=cv2.INTER_LINEAR,
101 | borderMode=cv2.BORDER_CONSTANT
102 | )
103 | cv2.namedWindow("original image")
104 | cv2.setMouseCallback("original image", click, param=[image])
105 | cv2.imshow("original image", image)
106 | while True:
107 | key = cv2.waitKey(1) & 0xFF
108 | if key == ord("q"):
109 | return
110 | elif key == ord("d"):
111 | if len(corners) > 0:
112 | corners.pop()
113 | draw_image(image)
114 | elif key == 13:
115 | break
116 |
117 | if len(corners) != 4:
118 | print("exactly 4 corners are required")
119 | return
120 |
121 | src = np.float32(corners)
122 | dst = dsts[name]
123 | M = cv2.getPerspectiveTransform(src, dst)
124 | warped = cv2.warpPerspective(image, M, shapes[name][1::-1])
125 | cv2.imshow("warped", warped)
126 | cv2.waitKey(0)
127 | cv2.imwrite(name + "_proj.png", warped)
128 | print("saving projection matrix to file ...")
129 | with open(camera_file, "a+") as f:
130 | yaml.safe_dump({"M": M.tolist(), "scale": [scale_x, scale_y]}, f)
131 |
132 |
133 | if __name__ == "__main__":
134 | main()
135 |
--------------------------------------------------------------------------------
/paramsettings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/paramsettings.png
--------------------------------------------------------------------------------
/paramsettings.py:
--------------------------------------------------------------------------------
1 | carWidth = 160          # gap between the left and right boards (cm)
2 | carHeight = 240         # gap between the front and back boards (cm)
3 | chessboardSize = 80     # side length of each calibration board (cm)
4 | shiftWidth = 120        # how far the view extends beyond the boards horizontally (cm)
5 | shiftHeight = 120       # how far the view extends beyond the boards vertically (cm)
6 | innerShiftWidth = 0     # boards' inner edges to the car's left/right sides (cm)
7 | innerShiftHeight = 0    # boards' inner edges to the car's front/back (cm)
8 | totalWidth = carWidth + 2 * chessboardSize + 2 * shiftWidth     # width of the bird's-eye view
9 | totalHeight = carHeight + 2 * chessboardSize + 2 * shiftHeight  # height of the bird's-eye view
10 |
11 | x1 = shiftWidth + chessboardSize + innerShiftWidth    # corners of the car rectangle:
12 | x2 = totalWidth - x1                                  # (x1, y1) top-left,
13 | y1 = shiftHeight + chessboardSize + innerShiftHeight  # (x2, y2) bottom-right
14 | y2 = totalHeight - y1
15 |
16 | frontShape = (totalWidth, y1)   # (width, height) of the front/back projections
17 | leftShape = (totalHeight, x1)   # (width, height) of the left/right projections
18 |
--------------------------------------------------------------------------------
/readme.txt:
--------------------------------------------------------------------------------
1 | Solve the problem of PyLint not recognizing cv2 members:
2 | 1. C:\Users\James\AppData\Roaming\Python\Python38\Scripts\pylint.exe --generate-rcfile > .pylintrc
3 | 2. At the beginning of the generated .pylintrc file you will see
4 |
5 | # A comma-separated list of package or module names from where C extensions may
6 | # be loaded. Extensions are loading into the active Python interpreter and may
7 | # run arbitrary code.
8 | extension-pkg-whitelist=
9 | Add cv2 so you end up with
10 |
11 | # A comma-separated list of package or module names from where C extensions may
12 | # be loaded. Extensions are loading into the active Python interpreter and may
13 | # run arbitrary code.
14 | extension-pkg-whitelist=cv2
15 | Save the file. The lint errors should disappear.
--------------------------------------------------------------------------------
/show_undistorted_cameras.py:
--------------------------------------------------------------------------------
1 | """
2 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 | Show all four cameras and their device numbers
4 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5 | """
6 | import cv2
7 | import numpy as np
8 | import yaml
9 | from imutils.video import VideoStream
10 |
11 |
12 | # usb devices of the four cameras
13 | camera_devices = [0, 1, 2, 3]
14 | # intrinsic parameter files of the cameras
15 | camera_files = ["./yaml/front.yaml",
16 | "./yaml/back.yaml",
17 | "./yaml/left.yaml",
18 | "./yaml/right.yaml"]
19 | # resolution of the video stream
20 | W, H = 640, 480
21 |
22 |
23 | def main():
24 | print("[INFO] Preparing camera devices ...")
25 | captures = [VideoStream(src=k, resolution=(W, H)).start() for k in camera_devices]
26 |
27 | print("[INFO] loading camera intrinsic files ...")
28 | undistort_maps = []
29 | for conf in camera_files:
30 | with open(conf, "r") as f:
31 |             data = yaml.load(f, Loader=yaml.SafeLoader)
32 |
33 | K = np.array(data["K"])
34 | D = np.array(data["D"])
35 | map1, map2 = cv2.fisheye.initUndistortRectifyMap(
36 | K,
37 | D,
38 | np.eye(3),
39 | K,
40 | (W, H),
41 | cv2.CV_16SC2
42 | )
43 | undistort_maps.append((map1, map2))
44 |
45 | # put all frames in one window
46 | corrected = np.zeros((2*H, 2*W, 3), dtype=np.uint8)
47 | region = {0: corrected[:H, :W],
48 | 1: corrected[:H, W:],
49 | 2: corrected[H:, :W],
50 | 3: corrected[H:, W:]}
51 |
52 | while True:
53 | print("[INFO] processing camera data ...")
54 | frames = []
55 | for device, cap, (map1, map2) in zip(camera_devices, captures, undistort_maps):
56 | frame = cap.read()
57 | frame = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
58 | cv2.putText(frame, "device: {}".format(device), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 2, (0, 0, 255), 2)
59 | frames.append(frame)
60 |
61 | for i, frame in enumerate(frames):
62 | region[i][...] = frame
63 |
64 | cv2.imshow("corrected", corrected)
65 |
66 | key = cv2.waitKey(1) & 0xFF
67 | if key == ord("q"):
68 | break
69 |
70 | for cap in captures:
71 | cap.stop()
72 |
73 |
74 | if __name__ == "__main__":
75 | main()
76 |
--------------------------------------------------------------------------------
/show_undistorted_image.py:
--------------------------------------------------------------------------------
1 | """
2 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 | Show the undistorted images of the four cameras
4 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5 | """
6 | import cv2
7 | import numpy as np
8 | import yaml
9 | from imutils.video import VideoStream
10 |
11 |
12 | # usb devices of the four cameras
13 | camera_devices = [0, 1, 2, 3]
14 | # intrinsic parameter files of the cameras
15 | camera_files = ["./yaml/front.yaml",
16 | "./yaml/back.yaml",
17 | "./yaml/left.yaml",
18 | "./yaml/right.yaml"]
19 |
20 | # added by Holy 2006050822
21 | image_files = ["./yaml/front.png",
22 | "./yaml/back.png",
23 | "./yaml/left.png",
24 | "./yaml/right.png"]
25 | # end of addition 2006050822
26 |
27 | # resolution of the video stream
28 | W, H = 640, 480
29 |
30 |
31 | def main():
32 | print("[INFO] Preparing camera devices ...")
33 | # captures = [VideoStream(src=k, resolution=(W, H)).start() for k in camera_devices]
34 |
35 | print("[INFO] loading camera intrinsic files ...")
36 | undistort_maps = []
37 | for conf in camera_files:
38 | with open(conf, "r") as f:
39 |             data = yaml.load(f, Loader=yaml.SafeLoader)
40 |
41 | K = np.array(data["K"])
42 | D = np.array(data["D"])
43 | map1, map2 = cv2.fisheye.initUndistortRectifyMap(
44 | K,
45 | D,
46 | np.eye(3),
47 | K,
48 | (W, H),
49 | cv2.CV_16SC2
50 | )
51 | undistort_maps.append((map1, map2))
52 |
53 | # put all frames in one window
54 | corrected = np.zeros((2*H, 2*W, 3), dtype=np.uint8)
55 | region = {0: corrected[:H, :W],
56 | 1: corrected[:H, W:],
57 | 2: corrected[H:, :W],
58 | 3: corrected[H:, W:]}
59 |
60 | while True:
61 | print("[INFO] processing camera data ...")
62 | frames = []
63 | # added by Holy 2006050822
64 | for device, imgPathName, (map1, map2) in zip(camera_devices, image_files, undistort_maps):
65 | frame = cv2.imread(imgPathName, cv2.IMREAD_COLOR)
66 | frame = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
67 | cv2.putText(frame, "device: {}".format(device), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 2, (0, 0, 255), 2)
68 | frames.append(frame)
69 | # end of addition 2006050822
70 | # for device, cap, (map1, map2) in zip(camera_devices, captures, undistort_maps):
71 | # frame = cap.read()
72 | # frame = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
73 | # cv2.putText(frame, "device: {}".format(device), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 2, (0, 0, 255), 2)
74 | # frames.append(frame)
75 |
76 | for i, frame in enumerate(frames):
77 | region[i][...] = frame
78 |
79 | cv2.imshow("corrected", corrected)
80 |
81 | key = cv2.waitKey(1) & 0xFF
82 | if key == ord("q"):
83 | break
84 |
85 | # for cap in captures:
86 | # cap.stop()
87 |
88 |
89 | if __name__ == "__main__":
90 | main()
91 |
--------------------------------------------------------------------------------
/stitch_images.py:
--------------------------------------------------------------------------------
1 | """
2 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 | Stitch four camera images to form the 360 surround view
4 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5 | """
6 | import numpy as np
7 | import cv2
8 | import yaml
9 | import os
10 | from paramsettings import *
11 |
12 |
13 | W, H = 640, 480
14 | work_dir = "./yaml"
15 | car_image = os.path.join(work_dir, "car.png")
16 | camera_params = [os.path.join(work_dir, f) for f in ("front.yaml", "back.yaml", "left.yaml", "right.yaml")]
17 | camera_images = [os.path.join(work_dir, f) for f in ("front.png", "back.png", "left.png", "right.png")]
18 |
19 |
20 | def dline(x, y, x1, y1, x2, y2):
21 |     """Compute the distance from pixel (x, y) to the line through (x1, y1) and (x2, y2).
22 | """
23 | vx, vy = x2 - x1, y2 - y1
24 | dx, dy = x - x1, y - y1
25 | a = np.sqrt(vx * vx + vy * vy)
26 | return abs(vx * dy - dx * vy) / a
27 |
28 |
29 | def linelen(line):
30 | """Compute the length of a line segment.
31 | """
32 | x1, y1, x2, y2 = line[0]
33 | dx, dy = x1 - x2, y1 - y2
34 | return np.sqrt(dx*dx + dy*dy)
35 |
36 |
37 | def get_weight_matrix(imA, imB):
38 | overlap = cv2.bitwise_and(imA, imB)
39 | gray = cv2.cvtColor(overlap, cv2.COLOR_BGR2GRAY)
40 | ret, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
41 | mask = cv2.dilate(mask, np.ones((2, 2), np.uint8), iterations=2)
42 | mask_inv = cv2.bitwise_not(mask)
43 | imA_remain = cv2.bitwise_and(imA, imA, mask=mask_inv)
44 | grayA = cv2.cvtColor(imA_remain, cv2.COLOR_BGR2GRAY)
45 | ret, G = cv2.threshold(grayA, 0, 255, cv2.THRESH_BINARY)
46 | G = G.astype(np.float32) / 255.0
47 | ind = np.where(mask == 255)
48 | lsd = cv2.createLineSegmentDetector(0)
49 | lines = lsd.detect(mask)[0]
50 | lines = sorted(lines, key=linelen, reverse=True)[1::-1]
51 | lx1, ly1, lx2, ly2 = lines[0][0]
52 | mx1, my1, mx2, my2 = lines[1][0]
53 | for y, x in zip(*ind):
54 | d1 = dline(x, y, lx1, ly1, lx2, ly2)
55 | d2 = dline(x, y, mx1, my1, mx2, my2)
56 | d1 *= d1
57 | d2 *= d2
58 | G[y, x] = d1 / (d1 + d2)
59 |
60 | return G
61 |
62 |
63 | def adjust_illuminance(im, a):
64 |     return np.minimum((im.astype(np.float64) * a), 255).astype(np.uint8)
65 |
66 |
67 | def rgb_ratio(imA, imB):
68 | overlap = cv2.bitwise_and(imA, imB)
69 | gray = cv2.cvtColor(overlap, cv2.COLOR_BGR2GRAY)
70 | ret, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
71 | mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=4)
72 | imA_here = cv2.bitwise_and(imA, imA, mask=mask)
73 | imB_here = cv2.bitwise_and(imB, imB, mask=mask)
74 | B1, G1, R1 = cv2.split(imA_here)
75 | B2, G2, R2 = cv2.split(imB_here)
76 | c1 = np.mean(B1) / np.mean(B2)
77 | c2 = np.mean(G1) / np.mean(G2)
78 | c3 = np.mean(R1) / np.mean(R2)
79 | return c1, c2, c3
80 |
81 |
82 | def main(save_weights=False):
83 | matrices = []
84 | undistort_maps = []
85 | for conf in camera_params:
86 | with open(conf, "r") as f:
87 | data = yaml.load(f, Loader=yaml.SafeLoader)
88 |
89 | proj_mat = np.array(data["M"])
90 | matrices.append(proj_mat)
91 |
92 | K = np.array(data["K"])
93 | D = np.array(data["D"])
94 | scale = np.array(data["scale"])
95 | new_K = K.copy()
96 | new_K[0, 0] *= scale[0]
97 | new_K[1, 1] *= scale[1]
98 | map1, map2 = cv2.fisheye.initUndistortRectifyMap(
99 | K,
100 | D,
101 | np.eye(3),
102 | new_K,
103 | (W, H),
104 | cv2.CV_16SC2
105 | )
106 | undistort_maps.append((map1, map2))
107 |
108 | images = [cv2.imread(im) for im in camera_images]
109 | for i, image in enumerate(images):
110 | map1, map2 = undistort_maps[i]
111 | images[i] = cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
112 |
113 | front = cv2.warpPerspective(
114 | images[0],
115 | matrices[0],
116 | frontShape
117 | )
118 | back = cv2.warpPerspective(
119 | images[1],
120 | matrices[1],
121 | frontShape
122 | )
123 | left = cv2.warpPerspective(
124 | images[2],
125 | matrices[2],
126 | leftShape
127 | )
128 | right = cv2.warpPerspective(
129 | images[3],
130 | matrices[3],
131 | leftShape
132 | )
133 |
134 | back = back[::-1, ::-1, :]
135 | left = cv2.transpose(left)[::-1]
136 | right = np.flip(cv2.transpose(right), 1)
137 |
138 | result = np.zeros((totalHeight, totalWidth, 3), dtype=np.uint8)
139 |
140 | car = cv2.imread(car_image)
141 | car = cv2.resize(car, (x2 - x1, y2 - y1))
142 |
143 | weights = [None] * 4
144 | weights[0] = get_weight_matrix(front[:, :x1], left[:y1])
145 | weights[1] = get_weight_matrix(back[:, :x1], left[y2:])
146 | weights[2] = get_weight_matrix(front[:, x2:], right[:y1])
147 | weights[3] = get_weight_matrix(back[:, x2:], right[y2:])
148 |
149 | stacked_weights = [np.stack((G, G, G), axis=2) for G in weights]
150 |
151 | def merge(imA, imB, k):
152 | G = stacked_weights[k]
153 | return G * imA + imB * (1 - G)
154 |
155 | # top-left
156 | FL = merge(front[:, :x1], left[:y1], 0)
157 | result[:y1, :x1] = FL
158 | # bottom-left
159 | BL = merge(back[:, :x1], left[y2:], 1)
160 | result[y2:, :x1] = BL
161 | # top-right
162 | FR = merge(front[:, x2:], right[:y1], 2)
163 | result[:y1, x2:] = FR
164 | # bottom-right
165 | BR = merge(back[:, x2:], right[y2:], 3)
166 | result[y2:, x2:] = BR
167 |
168 | # front
169 | F = front[:, x1:x2]
170 | result[:y1, x1:x2] = F
171 | # back
172 | B = back[:, x1:x2]
173 | result[y2:, x1:x2] = B
174 | # left
175 | L = left[y1:y2]
176 | result[y1:y2, :x1] = L
177 | # right
178 | R = right[y1:y2]
179 | result[y1:y2, x2:] = R
180 |
181 | a1, a2, a3 = rgb_ratio(right[:y1], front[:, x2:])
182 | b1, b2, b3 = rgb_ratio(back[:, x2:], right[y2:])
183 | c1, c2, c3 = rgb_ratio(left[y2:], back[:, :x1])
184 | d1, d2, d3 = rgb_ratio(front[:, :x1], left[:y1])
185 |
186 | e1 = (a1 + b1 + c1 + d1) / (a1*a1 + b1*b1 + c1*c1 + d1*d1)
187 | e2 = (a2 + b2 + c2 + d2) / (a2*a2 + b2*b2 + c2*c2 + d2*d2)
188 | e3 = (a3 + b3 + c3 + d3) / (a3*a3 + b3*b3 + c3*c3 + d3*d3)
189 |
190 | ch1, ch2, ch3 = cv2.split(result)
191 | ch1 = adjust_illuminance(ch1, e1)
192 | ch2 = adjust_illuminance(ch2, e2)
193 | ch3 = adjust_illuminance(ch3, e3)
194 |
195 | result = cv2.merge((ch1, ch2, ch3))
196 |
197 | result[y1:y2, x1:x2] = car
198 | cv2.imshow("result", result)
199 | cv2.waitKey(0)
200 | cv2.destroyAllWindows()
201 |
202 | if save_weights:
203 | mats = {
204 | "G0": weights[0].tolist(),
205 | "G1": weights[1].tolist(),
206 | "G2": weights[2].tolist(),
207 | "G3": weights[3].tolist(),
208 | }
209 |
210 | with open(os.path.join(work_dir, "weights.yaml"), "w") as f:
211 | yaml.safe_dump(mats, f)
212 |
213 |
214 | if __name__ == "__main__":
215 | main(save_weights=True)
216 |
--------------------------------------------------------------------------------
/stitch_images_using_weightFile.py:
--------------------------------------------------------------------------------
1 | """
2 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 | Stitch four camera images to form the 360 surround view, using precomputed weights from weights.yaml
4 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5 | """
6 | import numpy as np
7 | import cv2
8 | import yaml
9 | import os
10 | from paramsettings import *
11 |
12 |
13 | W, H = 640, 480
14 | work_dir = "./yaml"
15 | car_image = os.path.join(work_dir, "car.png")
16 | camera_params = [os.path.join(work_dir, f) for f in ("front.yaml", "back.yaml", "left.yaml", "right.yaml")]
17 | camera_images = [os.path.join(work_dir, f) for f in ("front.png", "back.png", "left.png", "right.png")]
18 |
19 |
20 | def dline(x, y, x1, y1, x2, y2):
21 |     """Compute the distance from pixel (x, y) to the line through (x1, y1) and (x2, y2).
22 | """
23 | vx, vy = x2 - x1, y2 - y1
24 | dx, dy = x - x1, y - y1
25 | a = np.sqrt(vx * vx + vy * vy)
26 | return abs(vx * dy - dx * vy) / a
27 |
28 |
29 | def linelen(line):
30 | """Compute the length of a line segment.
31 | """
32 | x1, y1, x2, y2 = line[0]
33 | dx, dy = x1 - x2, y1 - y2
34 | return np.sqrt(dx*dx + dy*dy)
35 |
36 |
37 | def get_weight_matrix(imA, imB):
38 | overlap = cv2.bitwise_and(imA, imB)
39 | gray = cv2.cvtColor(overlap, cv2.COLOR_BGR2GRAY)
40 | ret, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
41 | mask = cv2.dilate(mask, np.ones((2, 2), np.uint8), iterations=2)
42 | mask_inv = cv2.bitwise_not(mask)
43 | imA_remain = cv2.bitwise_and(imA, imA, mask=mask_inv)
44 | grayA = cv2.cvtColor(imA_remain, cv2.COLOR_BGR2GRAY)
45 | ret, G = cv2.threshold(grayA, 0, 255, cv2.THRESH_BINARY)
46 | G = G.astype(np.float32) / 255.0
47 | ind = np.where(mask == 255)
48 | lsd = cv2.createLineSegmentDetector(0)
49 | lines = lsd.detect(mask)[0]
50 | lines = sorted(lines, key=linelen, reverse=True)[1::-1]
51 | lx1, ly1, lx2, ly2 = lines[0][0]
52 | mx1, my1, mx2, my2 = lines[1][0]
53 | for y, x in zip(*ind):
54 | d1 = dline(x, y, lx1, ly1, lx2, ly2)
55 | d2 = dline(x, y, mx1, my1, mx2, my2)
56 | d1 *= d1
57 | d2 *= d2
58 | G[y, x] = d1 / (d1 + d2)
59 |
60 | return G
61 |
62 |
63 | def adjust_illuminance(im, a):
64 |     return np.minimum((im.astype(np.float64) * a), 255).astype(np.uint8)
65 |
66 |
67 | def rgb_ratio(imA, imB):
68 | overlap = cv2.bitwise_and(imA, imB)
69 | gray = cv2.cvtColor(overlap, cv2.COLOR_BGR2GRAY)
70 | ret, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
71 | mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=4)
72 | imA_here = cv2.bitwise_and(imA, imA, mask=mask)
73 | imB_here = cv2.bitwise_and(imB, imB, mask=mask)
74 | B1, G1, R1 = cv2.split(imA_here)
75 | B2, G2, R2 = cv2.split(imB_here)
76 | c1 = np.mean(B1) / np.mean(B2)
77 | c2 = np.mean(G1) / np.mean(G2)
78 | c3 = np.mean(R1) / np.mean(R2)
79 | return c1, c2, c3
80 |
81 |
82 | def main(save_weights=False):
83 | matrices = []
84 | undistort_maps = []
85 | for conf in camera_params:
86 | with open(conf, "r") as f:
87 | data = yaml.load(f, Loader=yaml.SafeLoader)
88 |
89 | proj_mat = np.array(data["M"])
90 | matrices.append(proj_mat)
91 |
92 | K = np.array(data["K"])
93 | D = np.array(data["D"])
94 | scale = np.array(data["scale"])
95 | new_K = K.copy()
96 | new_K[0, 0] *= scale[0]
97 | new_K[1, 1] *= scale[1]
98 | map1, map2 = cv2.fisheye.initUndistortRectifyMap(
99 | K,
100 | D,
101 | np.eye(3),
102 | new_K,
103 | (W, H),
104 | cv2.CV_16SC2
105 | )
106 | undistort_maps.append((map1, map2))
107 |
108 | images = [cv2.imread(im) for im in camera_images]
109 | for i, image in enumerate(images):
110 | map1, map2 = undistort_maps[i]
111 | images[i] = cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
112 |
113 | front = cv2.warpPerspective(
114 | images[0],
115 | matrices[0],
116 | frontShape
117 | )
118 | back = cv2.warpPerspective(
119 | images[1],
120 | matrices[1],
121 | frontShape
122 | )
123 | left = cv2.warpPerspective(
124 | images[2],
125 | matrices[2],
126 | leftShape
127 | )
128 | right = cv2.warpPerspective(
129 | images[3],
130 | matrices[3],
131 | leftShape
132 | )
133 |
134 | back = back[::-1, ::-1, :]
135 | left = cv2.transpose(left)[::-1]
136 | right = np.flip(cv2.transpose(right), 1)
137 |
138 | result = np.zeros((totalHeight, totalWidth, 3), dtype=np.uint8)
139 |
140 | car = cv2.imread(car_image)
141 | car = cv2.resize(car, (x2 - x1, y2 - y1))
142 |
143 | weights = [None] * 4
144 | """
145 | weights[0] = get_weight_matrix(front[:, :x1], left[:y1])
146 | weights[1] = get_weight_matrix(back[:, :x1], left[y2:])
147 | weights[2] = get_weight_matrix(front[:, x2:], right[:y1])
148 | weights[3] = get_weight_matrix(back[:, x2:], right[y2:])
149 | """
150 |
151 | # added by Holy 2006061032
152 |     strWeightPathName = os.path.join(work_dir, "weights.yaml")
153 | with open(strWeightPathName, "r") as f:
154 | dataWeights = yaml.load(f, Loader=yaml.SafeLoader)
155 |
156 | weights[0] = np.array(dataWeights["G0"])
157 | weights[1] = np.array(dataWeights["G1"])
158 | weights[2] = np.array(dataWeights["G2"])
159 | weights[3] = np.array(dataWeights["G3"])
160 | # end of addition 2006061032
161 |
162 | stacked_weights = [np.stack((G, G, G), axis=2) for G in weights]
163 |
164 | def merge(imA, imB, k):
165 | G = stacked_weights[k]
166 | return G * imA + imB * (1 - G)
167 |
168 | # top-left
169 | FL = merge(front[:, :x1], left[:y1], 0)
170 | result[:y1, :x1] = FL
171 | # bottom-left
172 | BL = merge(back[:, :x1], left[y2:], 1)
173 | result[y2:, :x1] = BL
174 | # top-right
175 | FR = merge(front[:, x2:], right[:y1], 2)
176 | result[:y1, x2:] = FR
177 | # bottom-right
178 | BR = merge(back[:, x2:], right[y2:], 3)
179 | result[y2:, x2:] = BR
180 |
181 | # front
182 | F = front[:, x1:x2]
183 | result[:y1, x1:x2] = F
184 | # back
185 | B = back[:, x1:x2]
186 | result[y2:, x1:x2] = B
187 | # left
188 | L = left[y1:y2]
189 | result[y1:y2, :x1] = L
190 | # right
191 | R = right[y1:y2]
192 | result[y1:y2, x2:] = R
193 |
194 | a1, a2, a3 = rgb_ratio(right[:y1], front[:, x2:])
195 | b1, b2, b3 = rgb_ratio(back[:, x2:], right[y2:])
196 | c1, c2, c3 = rgb_ratio(left[y2:], back[:, :x1])
197 | d1, d2, d3 = rgb_ratio(front[:, :x1], left[:y1])
198 |
199 | e1 = (a1 + b1 + c1 + d1) / (a1*a1 + b1*b1 + c1*c1 + d1*d1)
200 | e2 = (a2 + b2 + c2 + d2) / (a2*a2 + b2*b2 + c2*c2 + d2*d2)
201 | e3 = (a3 + b3 + c3 + d3) / (a3*a3 + b3*b3 + c3*c3 + d3*d3)
202 |
203 | ch1, ch2, ch3 = cv2.split(result)
204 | ch1 = adjust_illuminance(ch1, e1)
205 | ch2 = adjust_illuminance(ch2, e2)
206 | ch3 = adjust_illuminance(ch3, e3)
207 |
208 | result = cv2.merge((ch1, ch2, ch3))
209 |
210 | result[y1:y2, x1:x2] = car
211 | cv2.imshow("result", result)
212 | cv2.waitKey(0)
213 | cv2.destroyAllWindows()
214 |
215 | if save_weights:
216 | mats = {
217 | "G0": weights[0].tolist(),
218 | "G1": weights[1].tolist(),
219 | "G2": weights[2].tolist(),
220 | "G3": weights[3].tolist(),
221 | }
222 |
223 | with open(os.path.join(work_dir, "weights.yaml"), "w") as f:
224 | yaml.safe_dump(mats, f)
225 |
226 |
227 | if __name__ == "__main__":
228 | main(save_weights=False)
229 |
--------------------------------------------------------------------------------
/surroundview.py:
--------------------------------------------------------------------------------
1 | """
2 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 | Show the projected 360 surround view of four cameras
4 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5 |
6 | Press Esc to exit.
7 | """
8 | import numpy as np
9 | import cv2
10 | import yaml
11 | import os
12 | import time
13 | from imutils.video import VideoStream
14 | from paramsettings import *
15 |
16 |
17 | # -----------------
18 | # Global parameters
19 | # -----------------
20 | W, H = 640, 480 # camera resolution
21 | window_title = "Carview"
22 | camera_devices = [0, 1, 2, 3] # camera devices front, back, left, right
23 | yaml_dir = "./yaml"
24 | camera_params = [os.path.join(yaml_dir, f) for f in ("front.yaml", "back.yaml", "left.yaml", "right.yaml")]
25 | weights_file = os.path.join(yaml_dir, "weights.yaml")
26 | car_icon = os.path.join(yaml_dir, "car.png")
27 |
28 |
29 | def adjust_illuminance(im, a):
30 |     return np.minimum((im.astype(np.float64) * a), 255).astype(np.uint8)
31 |
32 |
33 | def rgb_ratio(imA, imB):
34 | overlap = cv2.bitwise_and(imA, imB)
35 | gray = cv2.cvtColor(overlap, cv2.COLOR_BGR2GRAY)
36 | ret, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
37 | mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=4)
38 | imA_here = cv2.bitwise_and(imA, imA, mask=mask)
39 | imB_here = cv2.bitwise_and(imB, imB, mask=mask)
40 | B1, G1, R1 = cv2.split(imA_here)
41 | B2, G2, R2 = cv2.split(imB_here)
42 | c1 = np.mean(B1) / np.mean(B2)
43 | c2 = np.mean(G1) / np.mean(G2)
44 | c3 = np.mean(R1) / np.mean(R2)
45 | return c1, c2, c3
46 |
47 |
48 | def main():
49 |     print("[INFO] opening camera devices...")
50 | captures = [VideoStream(src=k, resolution=(W, H)).start() for k in camera_devices]
51 | time.sleep(2)
52 | print("[INFO] loading camera intrinsic parameters...")
53 | matrices = []
54 | undistort_maps = []
55 | for conf in camera_params:
56 | with open(conf, "r") as f:
57 | data = yaml.load(f, Loader=yaml.SafeLoader)
58 |
59 | proj_mat = np.array(data["M"])
60 | matrices.append(proj_mat)
61 |
62 | K = np.array(data["K"])
63 | D = np.array(data["D"])
64 | scale = np.array(data["scale"])
65 | new_K = K.copy()
66 | new_K[0, 0] *= scale[0]
67 | new_K[1, 1] *= scale[1]
68 | map1, map2 = cv2.fisheye.initUndistortRectifyMap(
69 | K,
70 | D,
71 | np.eye(3),
72 | new_K,
73 | (W, H),
74 | cv2.CV_16SC2
75 | )
76 | undistort_maps.append((map1, map2))
77 |
78 | print("[INFO] loading weight matrices...")
79 | with open(weights_file, "r") as f:
80 |         data = yaml.load(f, Loader=yaml.SafeLoader)
81 | G0 = np.array(data["G0"])
82 | G1 = np.array(data["G1"])
83 | G2 = np.array(data["G2"])
84 | G3 = np.array(data["G3"])
85 | weights = [np.stack((G, G, G), axis=2) for G in (G0, G1, G2, G3)]
86 |
87 | def merge(imA, imB, k):
88 | G = weights[k]
89 | return G * imA + imB * (1 - G)
90 |
91 | car = cv2.imread(car_icon)
92 | car = cv2.resize(car, (x2 - x1, y2 - y1))
93 |
94 | cv2.namedWindow(window_title)
95 | cv2.moveWindow(window_title, 1500, 0)
96 | result = np.zeros((totalHeight, totalWidth, 3), np.uint8)
97 |
98 | while True:
99 | frames = []
100 | for i, cap in enumerate(captures):
101 | frame = cap.read()
102 | map1, map2 = undistort_maps[i]
103 | frame = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
104 | frames.append(frame)
105 |
106 | front = cv2.warpPerspective(
107 | frames[0],
108 | matrices[0],
109 | frontShape
110 | )
111 | back = cv2.warpPerspective(
112 | frames[1],
113 | matrices[1],
114 | frontShape
115 | )
116 | left = cv2.warpPerspective(
117 | frames[2],
118 | matrices[2],
119 | leftShape
120 | )
121 | right = cv2.warpPerspective(
122 | frames[3],
123 | matrices[3],
124 | leftShape
125 | )
126 |
127 | # flip the images
128 | back = back[::-1, ::-1, :]
129 | left = cv2.transpose(left)[::-1]
130 | right = np.flip(cv2.transpose(right), 1)
131 |
132 | # top-left
133 | FL = merge(front[:, :x1], left[:y1], 0)
134 | result[:y1, :x1] = FL
135 | # bottom-left
136 | BL = merge(back[:, :x1], left[y2:], 1)
137 | result[y2:, :x1] = BL
138 | # top-right
139 | FR = merge(front[:, x2:], right[:y1], 2)
140 | result[:y1, x2:] = FR
141 | # bottom-right
142 | BR = merge(back[:, x2:], right[y2:], 3)
143 | result[y2:, x2:] = BR
144 |
145 | # front
146 | F = front[:, x1:x2]
147 | result[:y1, x1:x2] = F
148 | # back
149 | B = back[:, x1:x2]
150 | result[y2:, x1:x2] = B
151 | # left
152 | L = left[y1:y2]
153 | result[y1:y2, :x1] = L
154 | # right
155 | R = right[y1:y2]
156 | result[y1:y2, x2:] = R
157 |
158 | a1, a2, a3 = rgb_ratio(right[:y1], front[:, x2:])
159 | b1, b2, b3 = rgb_ratio(back[:, x2:], right[y2:])
160 | c1, c2, c3 = rgb_ratio(left[y2:], back[:, :x1])
161 | d1, d2, d3 = rgb_ratio(front[:, :x1], left[:y1])
162 |
163 | e1 = (a1 + b1 + c1 + d1) / (a1*a1 + b1*b1 + c1*c1 + d1*d1)
164 | e2 = (a2 + b2 + c2 + d2) / (a2*a2 + b2*b2 + c2*c2 + d2*d2)
165 | e3 = (a3 + b3 + c3 + d3) / (a3*a3 + b3*b3 + c3*c3 + d3*d3)
166 |
167 | ch1, ch2, ch3 = cv2.split(result)
168 | ch1 = adjust_illuminance(ch1, e1)
169 | ch2 = adjust_illuminance(ch2, e2)
170 | ch3 = adjust_illuminance(ch3, e3)
171 |
172 | result = cv2.merge((ch1, ch2, ch3))
173 | result[y1:y2, x1:x2] = car # add car icon
174 | cv2.imshow(window_title, result)
175 |
176 | key = cv2.waitKey(1) & 0xFF
177 | if key == 27:
178 | break
179 |
181 |
182 | for cap in captures:
183 | cap.stop()
184 |
185 | cv2.destroyAllWindows()
186 |
187 |
188 | if __name__ == "__main__":
189 | main()
190 |
--------------------------------------------------------------------------------
/testPythonCmd.py:
--------------------------------------------------------------------------------
1 | """
2 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 | Quick test: load one camera image, display it, and inspect numpy region views
4 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5 | """
6 | import cv2
7 | import numpy as np
8 | import yaml
9 | from imutils.video import VideoStream
10 |
11 |
12 | # # usb devices of the four cameras
13 | # camera_devices = [0, 1, 2, 3]
14 | # # intrinsic parameter files of the cameras
15 | # camera_files = ["./yaml/front.yaml",
16 | # "./yaml/back.yaml",
17 | # "./yaml/left.yaml",
18 | # "./yaml/right.yaml"]
19 |
20 | # added by Holy 2006050822
21 | image_files = ["./yaml/front.png",
22 | "./yaml/back.png",
23 | "./yaml/left.png",
24 | "./yaml/right.png"]
25 |
26 | img = cv2.imread(image_files[0], cv2.IMREAD_COLOR)
27 | cv2.imshow('image',img)
28 | cv2.waitKey(0)
29 | cv2.destroyAllWindows()
30 | # end of addition 2006050822
31 |
32 | W, H = 9, 7
33 | corrected = np.zeros((2*H, 2*W, 3), dtype=np.uint8)
34 | region = {0: corrected[:H, :W],
35 | 1: corrected[:H, W:],
36 | 2: corrected[H:, :W],
37 | 3: corrected[H:, W:]}
38 |
39 | print(region[0][...])
40 |
--------------------------------------------------------------------------------
/yaml/back.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/back.png
--------------------------------------------------------------------------------
/yaml/back.yaml:
--------------------------------------------------------------------------------
1 | D:
2 | - [-0.043263574402770615]
3 | - [-0.001446091708668514]
4 | - [-0.0027155890235892968]
5 | - [0.0005723510480587911]
6 | K:
7 | - [249.81129262808807, 0.0, 337.92793937816214]
8 | - [0.0, 249.7339384472654, 214.64322996687085]
9 | - [0.0, 0.0, 1.0]
10 | dim: [640, 480]
11 | M:
12 | - - -0.7429225965326749
13 | - -1.7018137967046907
14 | - 519.7317086287088
15 | - - -0.02338970494695472
16 | - -1.7761278885102918
17 | - 477.8339524493056
18 | - - -8.581716022933666e-05
19 | - -0.006073301924629371
20 | - 1.0
21 | scale:
22 | - 1.0
23 | - 1.0
24 |
--------------------------------------------------------------------------------
/yaml/bad.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/bad.png
--------------------------------------------------------------------------------
/yaml/car.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/car.png
--------------------------------------------------------------------------------
/yaml/choose_back.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/choose_back.png
--------------------------------------------------------------------------------
/yaml/choose_front.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/choose_front.png
--------------------------------------------------------------------------------
/yaml/choose_left.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/choose_left.png
--------------------------------------------------------------------------------
/yaml/choose_right.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/choose_right.png
--------------------------------------------------------------------------------
/yaml/corners.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/corners.png
--------------------------------------------------------------------------------
/yaml/detect_lines.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/detect_lines.png
--------------------------------------------------------------------------------
/yaml/front.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/front.png
--------------------------------------------------------------------------------
/yaml/front.yaml:
--------------------------------------------------------------------------------
1 | D:
2 | - [-0.041974899215886055]
3 | - [-0.0007701700980729033]
4 | - [-0.0048108799460396855]
5 | - [0.0013087985857993222]
6 | K:
7 | - [248.9188754427965, 0.0, 330.4908866021852]
8 | - [0.0, 248.78073353329333, 198.3587052003313]
9 | - [0.0, 0.0, 1.0]
10 | dim: [640, 480]
11 | M:
12 | - - -0.812326614976658
13 | - -1.8687702895046934
14 | - 544.295134918297
15 | - - -0.011376685162088482
16 | - -2.030457851553983
17 | - 506.778576291568
18 | - - -5.059537016785419e-05
19 | - -0.006752104007553494
20 | - 1.0
21 | scale:
22 | - 1.0
23 | - 1.0
24 |
--------------------------------------------------------------------------------
/yaml/front_proj.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/front_proj.png
--------------------------------------------------------------------------------
/yaml/left.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/left.png
--------------------------------------------------------------------------------
/yaml/left.yaml:
--------------------------------------------------------------------------------
1 | D:
2 | - [-0.0448806707630175]
3 | - [-0.0016099766128655876]
4 | - [-0.0011954400503024292]
5 | - [-0.00014431528612720597]
6 | K:
7 | - [249.48596236629942, 0.0, 327.16000910284095]
8 | - [0.0, 249.52362160183515, 186.7003433621995]
9 | - [0.0, 0.0, 1.0]
10 | dim: [640, 480]
11 | M:
12 | - - -2.378448677085651
13 | - -5.960593125380759
14 | - 1124.4799203308971
15 | - - 0.07262035733367318
16 | - -5.786640073028893
17 | - 1039.3268044295887
18 | - - 0.00036702922407301855
19 | - -0.018851348762416
20 | - 1.0
21 | scale:
22 | - 1.0
23 | - 1.0
24 |
--------------------------------------------------------------------------------
/yaml/left_proj.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/left_proj.png
--------------------------------------------------------------------------------
/yaml/mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/mask.png
--------------------------------------------------------------------------------
/yaml/mask_after_dilate.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/mask_after_dilate.png
--------------------------------------------------------------------------------
/yaml/mask_before_dilate.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/mask_before_dilate.png
--------------------------------------------------------------------------------
/yaml/original.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/original.png
--------------------------------------------------------------------------------
/yaml/overlap.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/overlap.png
--------------------------------------------------------------------------------
/yaml/overlap_gray.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/overlap_gray.png
--------------------------------------------------------------------------------
/yaml/result.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/result.png
--------------------------------------------------------------------------------
/yaml/right.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/right.png
--------------------------------------------------------------------------------
/yaml/right.yaml:
--------------------------------------------------------------------------------
1 | D:
2 | - [-0.04159126343209537]
3 | - [-0.001916587917814143]
4 | - [-0.00447453939643276]
5 | - [0.0015660457173438222]
6 | K:
7 | - [249.210115988049, 0.0, 321.6690632336225]
8 | - [0.0, 249.13375696210358, 203.40753679545696]
9 | - [0.0, 0.0, 1.0]
10 | dim: [640, 480]
11 | M:
12 | - - -1.395381788692386
13 | - -3.4757640566778423
14 | - 779.704398389773
15 | - - -0.016166158133074305
16 | - -3.2410595309173695
17 | - 689.3057366747627
18 | - - -4.400902825002655e-05
19 | - -0.010635862831607777
20 | - 1.0
21 | scale:
22 | - 1.0
23 | - 1.0
24 |
--------------------------------------------------------------------------------
/yaml/weight_for_A.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/holy44162/surround-view-system-introduction-imported/dfd063a82a8290f5e87a18364bc08fad48cf7add/yaml/weight_for_A.png
--------------------------------------------------------------------------------