├── .gitignore
├── README.md
├── images
│   ├── test1_mask.jpg
│   ├── test1_src.jpg
│   ├── test1_target.jpg
│   ├── test2_mask.png
│   ├── test2_src.png
│   ├── test2_target.png
│   ├── test3_mask.jpg
│   ├── test3_mask2.jpg
│   ├── test3_src.jpg
│   ├── test3_target.jpg
│   ├── test4_mask.jpg
│   ├── test4_src.jpg
│   ├── test5_mask.jpg
│   └── test5_src.jpg
├── paper.pdf
├── res
│   ├── test1_naive_result.jpg
│   ├── test1_result.jpg
│   ├── test2_naive_result.png
│   ├── test2_result.png
│   ├── test3_mix_result.jpg
│   ├── test3_naive_result.jpg
│   ├── test3_result.jpg
│   ├── test4_flatten_result.jpg
│   └── test5_illu_result.jpg
└── src
    ├── __init__.py
    ├── main.py
    ├── myMathFunc.py
    ├── poisson.py
    ├── postprocessing.py
    ├── preprocessing.py
    └── test.py

/.gitignore:
--------------------------------------------------------------------------------
others/
*.pyc
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# Lab Report

## Background

Paper: *Poisson Image Editing* (Pérez et al., SIGGRAPH 2003)

My implementation: https://github.com/datawine/poisson_image_editing

Poisson Image Editing is an image blending algorithm. Building on the classical interpolation machinery of solving Poisson equations, it introduces two families of tools: one for seamlessly blending a region of one image into another, and one for editing a selected region within a single image.

## Algorithm

Let $\Omega$ be a region copied from image A that we want to paste onto some part of image B, let $\partial\Omega$ be its boundary, and let the destination domain $S$ belong to B. To make the seam visually disappear, the values on $\partial\Omega$ are fixed to the colours of B, while the gradient inside $\Omega$ is kept consistent with A.

Writing $g$ for the source patch (from A), $f^*$ for the known destination image (B), and $f$ for the unknown values over $\Omega$, this reads

$\min_f \iint_\Omega |\nabla f - \nabla g|^2, \quad f|_{\partial\Omega} = f^*|_{\partial\Omega}$

where $\nabla = \left[\frac{\partial\cdot}{\partial x}, \frac{\partial\cdot}{\partial y}\right]$ is the gradient operator.

This is a Poisson problem with Dirichlet boundary conditions: the minimizer satisfies $\Delta f = \Delta g$ over $\Omega$ with $f = f^*$ on $\partial\Omega$.
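To make the discretization concrete, here is a minimal sketch (my own illustration, not the repository's code, and it assumes for simplicity that $\Omega$ is the full interior of a rectangular patch): each interior pixel contributes one row of the 5-point equation $4f_p - \sum_{q \in N_p} f_q = 4g_p - \sum_{q \in N_p} g_q$, each border pixel is pinned to the destination value, and the sparse system is solved with SciPy.

```python
import numpy as np
from scipy.sparse import lil_matrix, csc_matrix
from scipy.sparse.linalg import spsolve

def blend_rect(src, dst):
    """Discrete Poisson blend of one colour channel over a full rectangle.

    src, dst: 2D float arrays of the same shape. Interior pixels follow the
    5-point Laplacian of src; border pixels are clamped to dst (Dirichlet).
    """
    rows, cols = src.shape
    n = rows * cols
    A = lil_matrix((n, n))
    b = np.zeros(n)

    def idx(i, j):
        return i * cols + j

    for i in range(rows):
        for j in range(cols):
            r = idx(i, j)
            if 0 < i < rows - 1 and 0 < j < cols - 1:
                A[r, r] = 4
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    A[r, idx(i + di, j + dj)] = -1
                b[r] = 4 * src[i, j] - (src[i - 1, j] + src[i + 1, j]
                                        + src[i, j - 1] + src[i, j + 1])
            else:
                A[r, r] = 1          # Dirichlet condition: f = f* on the border
                b[r] = dst[i, j]
    return spsolve(csc_matrix(A), b).reshape(rows, cols)
```

The repository solves systems of this kind in two ways: explicitly (`imageFusion2` in `poisson.py`) and with a fast sine-transform solver (`solvePoisson` in `myMathFunc.py`).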
### Applications

#### Standard Poisson blending

Plain Poisson blending simply solves the problem above: the pixels on the boundary of the copied patch are fixed to the target image's values, and the interior is filled in by solving the Poisson equation with the source gradients as guidance.

#### Poisson blending with mixed gradients

If the target image has noticeable texture of its own, the equation should take that into account. The problem becomes

$\min_f \iint_\Omega |\nabla f - \mathbf{v}|^2, \quad f|_{\partial\Omega} = f^*|_{\partial\Omega}$

where, at every point, $\mathbf{v}$ is whichever of $\nabla f^*$ and $\nabla g$ has the larger magnitude. Put simply, in the overlap region we keep the stronger of the two gradients.
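A vectorized sketch of that selection rule (for illustration only; `imageFusion3` in `poisson.py` makes the same comparison pixel by pixel, using $|d_x| + |d_y|$ as the magnitude, whereas this sketch uses the Euclidean norm):

```python
import numpy as np

def mixed_gradients(gx_src, gy_src, gx_dst, gy_dst):
    """At each pixel keep whichever gradient (source or destination)
    has the larger magnitude; all inputs are 2D arrays of equal shape."""
    use_src = np.hypot(gx_src, gy_src) > np.hypot(gx_dst, gy_dst)
    vx = np.where(use_src, gx_src, gx_dst)
    vy = np.where(use_src, gy_src, gy_dst)
    return vx, vy
```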
#### Texture flattening

Texture flattening operates on a single image. Again only the guidance field is modified: gradients near detected edges are kept, and all other gradients are set to zero. The problem becomes

$\min_f \iint_\Omega |\nabla f - \mathbf{v}|^2, \quad f|_{\partial\Omega} = f^*|_{\partial\Omega}$

where, for neighbouring pixels $p$ and $q$, $v_{pq} = f^*_p - f^*_q$ if a detected edge lies between $p$ and $q$, and $v_{pq} = 0$ otherwise.

#### Local illumination adjustment

Local illumination adjustment also works on a single image. The goal is to even out the brightness of the selected region, i.e. to amplify weak gradient variations and damp strong ones. Using the empirical formula from the paper, the problem becomes

$\min_f \iint_\Omega |\nabla f - \mathbf{v}|^2, \quad f|_{\partial\Omega} = f^*|_{\partial\Omega}$

with $\mathbf{v} = \alpha^\beta\, |\nabla f^*|^{-\beta}\, \nabla f^*$, where $\alpha$ is `0.2` times the average gradient magnitude of $f^*$ over $\Omega$ and $\beta = 0.2$.
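As an illustration of what that scaling does, here is a vectorized sketch (my own, not the repository's code; `imageFusion5` in `poisson.py` applies the same idea but scales the x and y components separately, each by its own magnitude):

```python
import numpy as np

def illumination_guidance(gx, gy, beta=0.2, alpha_scale=0.2):
    """Rescale a gradient field by (alpha / |grad|)^beta: strong gradients
    are damped, weak ones are amplified; zero gradients stay zero."""
    mag = np.hypot(gx, gy)
    alpha = alpha_scale * mag.mean()
    scale = np.where(mag > 0, (alpha / np.maximum(mag, 1e-8)) ** beta, 0.0)
    return gx * scale, gy * scale
```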
## Implementation Details and Results

### Code overview

- main.py
  - program entry point
- poisson.py
  - contains the Poisson class, which performs the Poisson blending operations
- postprocessing.py
  - post-processing: produces the final output image from the solved patch
- preprocessing.py
  - pre-processing: produces the data needed for blending (mask bounding box, edge map)
- myMathFunc.py
  - math helper functions used to solve the Poisson equation

A typical invocation of these modules is sketched below.
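The following sketch is adapted from `main.py`. The offset is only illustrative (it is the [row, column] shift of the patch inside the target and must keep the patch within the target's bounds), and `test1_flatten_result.jpg` is a made-up output name:

```python
import cv2
from poisson import Poisson

offset = [150, 150]                 # illustrative [row, col] offset of the patch
src = cv2.imread("../images/test1_src.jpg")
mask = cv2.imread("../images/test1_mask.jpg")
dst = cv2.imread("../images/test1_target.jpg")

# two-image tools: "merge1" = basic blend (DST solver),
# "merge2" = explicit sparse system, "merge3" = mixed gradients
op = Poisson(offset, src, mask, dst)
result, naive = op.process("merge1")
cv2.imwrite("../res/test1_result.jpg", result)
cv2.imwrite("../res/test1_naive_result.jpg", naive)

# single-image tools: "flatten" = texture flattening, "illu" = illumination
op_single = Poisson(offset, src, mask, None)
cv2.imwrite("../res/test1_flatten_result.jpg", op_single.single_pic_process("flatten"))
```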
### Results

#### Standard Poisson blending

Source image and mask:

![](./images/test1_src.jpg)![](./images/test1_mask.jpg)

Target image:

![](./images/test1_target.jpg)

Naive (direct) composite:

![](./res/test1_naive_result.jpg)

Poisson blending result:

![](./res/test1_result.jpg)

#### Poisson blending with mixed gradients

Source image:

![](./images/test3_src.jpg)

Mask:

![](./images/test3_mask.jpg)

Target image:

![](./images/test3_target.jpg)

Naive (direct) composite:

![](./res/test3_naive_result.jpg)

Poisson blending result:

![](./res/test3_result.jpg)

Mixed-gradient blending result:

![](./res/test3_mix_result.jpg)

#### Texture flattening

Source image:

![](./images/test4_src.jpg)

Mask:

![](./images/test4_mask.jpg)

Result:

![](./res/test4_flatten_result.jpg)

#### Local illumination adjustment

Source image:

![](./images/test5_src.jpg)

Mask:

![](./images/test5_mask.jpg)

Result:

![](./res/test5_illu_result.jpg)

## Strengths, Weaknesses and Possible Improvements

### Strengths

- Poisson blending produces good results for most images.
- It is fast: only a single sparse linear system has to be solved.
- It is flexible: the guidance field can be modified in many ways to serve different needs.

### Weaknesses

- The method relies on gradients alone, so high-gradient regions that should be discarded (grass, for example) cannot be told apart from the object of interest.
- If the system is not solved accurately enough, triangular artifacts tend to appear.

### Possible improvements

- Introduce saliency detection or similar techniques to separate the object from the background.

## Reflections

Poisson image editing is a very successful blending algorithm with excellent practicality and generality. While implementing it I found that its applications reach well beyond "blending": single-image editing alone leaves plenty of room to explore.
--------------------------------------------------------------------------------

/images/*, /paper.pdf, /res/*, /src/__init__.py:
--------------------------------------------------------------------------------
These files are not inlined in this dump. Each one is available at
https://raw.githubusercontent.com/datawine/poisson_image_editing/1a698236675dee87bcf115c848f7f5a491f414cb/<path>
where <path> is the file's path from the directory tree above.
--------------------------------------------------------------------------------

/src/main.py:
--------------------------------------------------------------------------------
import cv2
import numpy as np

from poisson import *

if __name__ == '__main__':
    # offset = [-120, 0]
    # offset = [140, 150]
    offset = [150, 150]
    srcimg = cv2.imread("../images/test5_src.jpg")
    srcmask = cv2.imread("../images/test5_mask.jpg")
    # dstimg = cv2.imread("../images/test3_target.jpg")
    dstimg = None

    poissonOperator = Poisson(offset, srcimg, srcmask, dstimg)
    # two-image blending:
    # output, naive_output = poissonOperator.process("merge3")
    # cv2.imwrite("../res/test3_result.jpg", output)
    # cv2.imwrite("../res/test3_naive_result.jpg", naive_output)

    # single-image illumination adjustment:
    output = poissonOperator.single_pic_process("illu")
    cv2.imwrite("../res/test5_illu_result.jpg", output)
--------------------------------------------------------------------------------

/src/myMathFunc.py:
--------------------------------------------------------------------------------
import numpy as np
import scipy.fftpack

def DST(x):
    """Forward discrete sine transform along axis 0."""
    return scipy.fftpack.dst(x, type=1, axis=0) / 2.0

def IDST(X):
    """Inverse of DST (with the normalisation used above)."""
    x = np.real(scipy.fftpack.idst(X, type=1, axis=0))
    return x / (X.shape[0] + 1.0)

def genGradient(im):
    """Forward-difference gradients of a single-channel image."""
    rows, cols = im.shape
    dx = np.zeros((rows, cols), dtype=np.float32)
    dy = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows - 1):
        for j in range(cols - 1):
            dx[i, j] = im[i, j + 1] - im[i, j]
            dy[i, j] = im[i + 1, j] - im[i, j]
    return dx, dy

def genLaplacian(dx, dy):
    """Divergence of the guidance field (dx, dy); this equals the Laplacian
    when (dx, dy) is the gradient of an image."""
    rows, cols = dx.shape
    lapx = np.zeros((rows, cols), dtype=np.float32)
    lapy = np.zeros((rows, cols), dtype=np.float32)
    for i in range(1, rows):
        for j in range(1, cols):
            lapx[i, j] = dx[i, j] - dx[i, j - 1]
            lapy[i, j] = dy[i, j] - dy[i - 1, j]
    return lapx + lapy

def solvePoisson(lap, img):
    """Solve the discrete Poisson equation with right-hand side lap, taking
    Dirichlet boundary values from the border of img, via the sine transform."""
    img = img.astype('float32')
    rows, cols = img.shape
    img[1:-1, 1:-1] = 0

    # move the contribution of the known border values to the right-hand side
    L_bp = np.zeros_like(lap)
    L_bp[1:-1, 1:-1] = -4 * img[1:-1, 1:-1] \
        + img[1:-1, 2:] + img[1:-1, 0:-2] \
        + img[2:, 1:-1] + img[0:-2, 1:-1]
    L = (lap - L_bp)[1:-1, 1:-1]
    L_dst = DST(DST(L).T).T

    # divide by the eigenvalues of the discrete Laplacian
    [xx, yy] = np.meshgrid(np.arange(1, cols - 1), np.arange(1, rows - 1))
    D = (2 * np.cos(np.pi * xx / (cols - 1)) - 2) + (2 * np.cos(np.pi * yy / (rows - 1)) - 2)
    L_dst = L_dst / D

    img_interior = IDST(IDST(L_dst).T).T
    img[1:-1, 1:-1] = img_interior

    return img
--------------------------------------------------------------------------------
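A quick sanity check for the solver above (a sketch, not part of the repository): rebuilding an image's interior from its own gradients and border should reproduce the original values up to floating-point error.

import numpy as np
from myMathFunc import genGradient, genLaplacian, solvePoisson

rng = np.random.default_rng(0)
u = rng.uniform(0, 255, size=(64, 80)).astype(np.float32)
dx, dy = genGradient(u)
rec = solvePoisson(genLaplacian(dx, dy), u)   # border kept, interior re-solved
print(np.abs(rec - u).max())                  # expected to be close to zero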
/src/poisson.py:
--------------------------------------------------------------------------------
import numpy as np
from scipy.sparse.linalg import spsolve
from scipy.sparse import csc_matrix

from myMathFunc import *
from preprocessing import *
from postprocessing import *

class Poisson():
    def __init__(self, offset, srcimg, srcmask, dstimg):
        self.offset = offset
        self.srcimg = srcimg
        self.srcmask = srcmask
        self.dstimg = dstimg
        # bounding rectangle of the mask in the source image
        self.tl, self.br = getRect(self.srcmask)
        self.topimg = self.srcimg[self.tl[0]:self.br[0], self.tl[1]:self.br[1], :]
        self.msk = self.srcmask[self.tl[0]:self.br[0], self.tl[1]:self.br[1], :]

        # "if self.dstimg:" is ambiguous for numpy arrays, so compare with None
        if self.dstimg is not None:
            self._tl, self._br = genDstRect(self.tl, self.br, self.offset)
            self.groundimg = self.dstimg[self._tl[0]:self._br[0], self._tl[1]:self._br[1], :]

    def process(self, proc_type):
        if proc_type == "merge1":
            self.patch = self.imageFusion()
        elif proc_type == "merge2":
            self.patch = self.imageFusion2()
        elif proc_type == "merge3":
            self.patch = self.imageFusion3()

        res = genRes(self.dstimg, self.patch, self._tl)
        naiveres = genNaiveRes(self.dstimg, self.topimg, self.msk, self._tl)
        return res, naiveres

    def single_pic_process(self, proc_type):
        if proc_type == "flatten":
            self.patch = self.imageFusion4()
        elif proc_type == "illu":
            self.patch = self.imageFusion5()
        res = genRes(self.srcimg, self.patch, self.tl)
        return res

    # basic Poisson blending: source gradients inside the mask, target
    # gradients elsewhere; solved with the DST-based solver
    def imageFusion(self):
        rows, cols, dims = self.topimg.shape

        im1 = self.topimg.copy().astype('float32')
        im2 = self.groundimg.copy().astype('float32')
        im_res = np.zeros((rows, cols, dims), dtype=np.float32)

        for dim in range(dims):
            [dx1, dy1] = genGradient(im1[:, :, dim])
            [dx2, dy2] = genGradient(im2[:, :, dim])

            dx, dy = dx2.copy(), dy2.copy()
            for i in range(rows):
                for j in range(cols):
                    if self.msk[i, j, dim] != 0:
                        dx[i, j], dy[i, j] = dx1[i, j], dy1[i, j]

            lap = genLaplacian(dx, dy)
            im_res[:, :, dim] = np.clip(solvePoisson(lap, im2[:, :, dim]), 0, 255)
        return im_res.astype('uint8')

    # same blend, but assembling the linear system explicitly and solving it
    # as a sparse system (memory-hungry for large patches)
    def imageFusion2(self):
        rows, cols, dims = self.topimg.shape

        im1 = self.topimg.copy().astype('float32')
        im2 = self.groundimg.copy().astype('float32')
        im_res = np.zeros((rows, cols, dims), dtype=np.float32)

        for dim in range(dims):
            [dx1, dy1] = genGradient(im1[:, :, dim])
            [dx2, dy2] = genGradient(im2[:, :, dim])
            dx, dy = dx2.copy(), dy2.copy()

            A = np.zeros((rows * cols, rows * cols), dtype=np.float32)
            b = np.zeros((rows * cols, ), dtype=np.float32)
            r = 0
            for i in range(rows):
                for j in range(cols):
                    if self.msk[i, j, dim] != 0:
                        # masked pixel: discrete Poisson equation
                        dx[i, j], dy[i, j] = dx1[i, j], dy1[i, j]
                        A[r, i * cols + j] = -4
                        if i > 0:
                            A[r, (i - 1) * cols + j] = 1
                        if i < rows - 1:
                            A[r, (i + 1) * cols + j] = 1
                        if j > 0:
                            A[r, i * cols + j - 1] = 1
                        if j < cols - 1:
                            A[r, i * cols + j + 1] = 1
                        b[r] = dx[i, j] - dx[i, j - 1] + dy[i, j] - dy[i - 1, j]
                    else:
                        # unmasked pixel: pinned to the target value
                        A[r, i * cols + j] = 1
                        b[r] = self.groundimg[i, j, dim]
                    r = r + 1
            # convert the dense system to CSC before handing it to spsolve
            x = spsolve(csc_matrix(A), b)
            im_res[:, :, dim] = np.clip(x.reshape(rows, cols), 0, 255)

        return im_res.astype('uint8')

    # mixed-gradient blending: inside the mask, keep the source gradient
    # only where it is stronger than the target gradient
    def imageFusion3(self):
        rows, cols, dims = self.topimg.shape

        im1 = self.topimg.copy().astype('float32')
        im2 = self.groundimg.copy().astype('float32')
        im_res = np.zeros((rows, cols, dims), dtype=np.float32)

        for dim in range(dims):
            [dx1, dy1] = genGradient(im1[:, :, dim])
            [dx2, dy2] = genGradient(im2[:, :, dim])

            dx, dy = dx2.copy(), dy2.copy()
            for i in range(rows):
                for j in range(cols):
                    if self.msk[i, j, dim] != 0:
                        if abs(dx[i, j]) + abs(dy[i, j]) < abs(dx1[i, j]) + abs(dy1[i, j]):
                            dx[i, j], dy[i, j] = dx1[i, j], dy1[i, j]

            lap = genLaplacian(dx, dy)
            im_res[:, :, dim] = np.clip(solvePoisson(lap, im2[:, :, dim]), 0, 255)
        return im_res.astype('uint8')

    # texture flattening: keep gradients only near detected edges
    def imageFusion4(self):
        rows, cols, dims = self.topimg.shape

        im1 = self.topimg.copy().astype('float32')
        im_res = np.zeros((rows, cols, dims), dtype=np.float32)
        edge_map = genEdgeMap(self.topimg)

        for dim in range(dims):
            [dx, dy] = genGradient(im1[:, :, dim])

            for i in range(rows):
                for j in range(cols):
                    if self.msk[i, j, dim] != 0:
                        dx[i, j] = dx[i, j] * edge_map[i, j]
                        dy[i, j] = dy[i, j] * edge_map[i, j]

            lap = genLaplacian(dx, dy)
            im_res[:, :, dim] = np.clip(solvePoisson(lap, im1[:, :, dim]), 0, 255)
        return im_res.astype('uint8')

    # illumination adjustment: rescale each gradient by (alpha / |grad|) ** beta
    def imageFusion5(self):
        rows, cols, dims = self.topimg.shape

        im1 = self.topimg.copy().astype('float32')
        im_res = np.zeros((rows, cols, dims), dtype=np.float32)

        for dim in range(dims):
            [dx, dy] = genGradient(im1[:, :, dim])

            d_sum = 0
            for i in range(rows):
                for j in range(cols):
                    d_sum = d_sum + dx[i, j] + dy[i, j]
            d_avg = abs(d_sum * 0.2 / rows / cols)   # alpha = 0.2 * average gradient

            for i in range(rows):
                for j in range(cols):
                    if self.msk[i, j, dim] != 0:
                        if dx[i, j] != 0:
                            dx[i, j] = dx[i, j] * (abs(dx[i, j]) ** -0.2) * (d_avg ** 0.2)
                        if dy[i, j] != 0:
                            dy[i, j] = dy[i, j] * (abs(dy[i, j]) ** -0.2) * (d_avg ** 0.2)

            lap = genLaplacian(dx, dy)
            im_res[:, :, dim] = np.clip(solvePoisson(lap, im1[:, :, dim]), 0, 255)
        return im_res.astype('uint8')
--------------------------------------------------------------------------------

/src/postprocessing.py:
--------------------------------------------------------------------------------
def genRes(img, patch, tl):
    """Copy the solved patch into img with its top-left corner at tl."""
    _img = img.copy()
    for i in range(patch.shape[0]):
        for j in range(patch.shape[1]):
            for k in range(patch.shape[2]):
                _img[i + tl[0], j + tl[1], k] = patch[i, j, k]
    return _img

def genNaiveRes(img, patch, mask, tl):
    """Naive composite: copy only the masked pixels of the patch into img."""
    _img = img.copy()
    for i in range(patch.shape[0]):
        for j in range(patch.shape[1]):
            for k in range(patch.shape[2]):
                if mask[i, j, k] != 0:
                    _img[i + tl[0], j + tl[1], k] = patch[i, j, k]
    return _img
--------------------------------------------------------------------------------

/src/preprocessing.py:
--------------------------------------------------------------------------------
import cv2
import numpy as np

# get the rectangle that contains the masked area of the top image,
# padded by two pixels; tl means top left, br means bottom right
def getRect(img):
    tl = [1000000, 1000000]
    br = [-1, -1]
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j, 0] != 0:
                if tl[0] > i - 2:
                    tl[0] = max(0, i - 2)
                if tl[1] > j - 2:
                    tl[1] = max(0, j - 2)
                if br[0] < i + 2:
                    br[0] = min(img.shape[0], i + 2)
                if br[1] < j + 2:
                    br[1] = min(img.shape[1], j + 2)
    return tl, br

# add the offset to the top left and bottom right
# to get the tl and br of the ground image
def genDstRect(tl, br, offset):
    _tl = [tl[0] + offset[0], tl[1] + offset[1]]
    _br = [br[0] + offset[0], br[1] + offset[1]]
    return _tl, _br

# binary map that is 1 within two pixels of a Canny edge and 0 elsewhere
def genEdgeMap(img):
    # Canny expects a single-channel 8-bit image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    raw_edge = cv2.Canny(gray, 100, 200)
    edge_map = np.zeros(raw_edge.shape)
    for i in range(edge_map.shape[0]):
        for j in range(edge_map.shape[1]):
            if raw_edge[i][j] != 0:
                # mark a symmetric 5x5 neighbourhood around each edge pixel
                for delta_i in range(-2, 3):
                    for delta_j in range(-2, 3):
                        new_i = min(max(0, i + delta_i), edge_map.shape[0] - 1)
                        new_j = min(max(0, j + delta_j), edge_map.shape[1] - 1)
                        edge_map[new_i][new_j] = 1
    return edge_map
--------------------------------------------------------------------------------
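To see which pixels keep their gradients during texture flattening, the edge map from genEdgeMap can be written out as an image (a sketch; the output filename is made up for this illustration):

import cv2
from preprocessing import genEdgeMap

img = cv2.imread("../images/test4_src.jpg")
edge_map = genEdgeMap(img)                       # values are 0.0 or 1.0
cv2.imwrite("../res/test4_edge_map.jpg", (edge_map * 255).astype("uint8"))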
/src/test.py:
--------------------------------------------------------------------------------
import cv2
from preprocessing import *

img = cv2.imread("../images/test5_src.jpg")

'''
# scratch code: paint a white rectangle into the image and save it as a mask
print(img.shape)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        if 20 < i and i < 160 and 20 < j and j < 180:
            img[i, j] = [255, 255, 255]
cv2.imwrite("../images/test3_mask2.jpg", img)
'''
'''
# scratch code: build the binary rectangle mask used for test5
print(img.shape)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        if 58 < j and j < 111 and 81 < i and i < 136:
            img[i, j] = [255, 255, 255]
        else:
            img[i, j] = [0, 0, 0]
cv2.imwrite("../images/test5_mask.jpg", img)
'''

print(4 ** 0.5)
# cv2.imshow("image", img)
# cv2.waitKey(0)
--------------------------------------------------------------------------------