├── .idea
│   ├── Image-Stitching.iml
│   ├── inspectionProfiles
│   │   └── profiles_settings.xml
│   ├── misc.xml
│   ├── modules.xml
│   ├── vcs.xml
│   └── workspace.xml
├── .ipynb_checkpoints
│   └── Untitled-checkpoint.ipynb
├── BrightnessNormalization_flann_4.JPG
├── HistogramEqualization_flann_4.JPG
├── README.md
├── __pycache__
│   └── utils.cpython-37.pyc
├── brute_1.JPG
├── brute_2.JPG
├── brute_3.JPG
├── btute_4.JPG
├── flann_1.JPG
├── flann_2.JPG
├── flann_3.JPG
├── flann_4.JPG
├── images
│   ├── 1-1.jpg
│   ├── 1-2.jpg
│   ├── 2-1.jpg
│   ├── 2-2.jpg
│   ├── 3-1.JPG
│   ├── 3-2.JPG
│   ├── 4-1.jpg
│   └── 4-2.jpg
├── main.py
└── utils.py

--------------------------------------------------------------------------------
/.ipynb_checkpoints/Untitled-checkpoint.ipynb:
--------------------------------------------------------------------------------
{
 "cells": [],
 "metadata": {},
 "nbformat": 4,
 "nbformat_minor": 2
}

--------------------------------------------------------------------------------
/BrightnessNormalization_flann_4.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/BrightnessNormalization_flann_4.JPG

--------------------------------------------------------------------------------
/HistogramEqualization_flann_4.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/HistogramEqualization_flann_4.JPG

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Image-Stitching

## Project Overview
Using basic OpenCV tools, with the SIFT algorithm at its core, this project performs feature-based stitching of two images that share partially overlapping content.

## Pipeline
Step 1: Enhance the input images (histogram equalization and brightness matching have been tested so far).
Step 2: Extract feature points from both images with the SIFT algorithm.
Step 3: Match the feature points of the two images (brute-force matching and FLANN k-nearest-neighbor matching have been tested so far).
Step 4: Filter the matched point pairs to remove as many mismatches as possible, then compute the homography matrix used for the projective transform from the four best-matching points (RANSAC has been tested for rejecting mismatches).
Step 5: Stitch the two images together using the homography matrix returned by Step 4. A condensed sketch of the whole pipeline is shown below.
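
Steps 2-5 map almost one-to-one onto standard OpenCV calls (Step 1 is the optional enhancement pass). The snippet below is only an illustrative condensation of that flow, not the project's code; the full implementation is in `utils.py` and `main.py` further down. The 0.7 ratio-test threshold and the RANSAC reprojection threshold of 4 match the values used in `utils.py`, while the canvas size here is deliberately crude (the real canvas computation is in `PasteTwoImages`). On OpenCV 4.4+ the detector is exposed as `cv2.SIFT_create()` instead of `cv2.xfeatures2d.SIFT_create()`.

```python
import cv2
import numpy as np

# Step 2: detect SIFT keypoints and descriptors on grayscale versions of both images.
img1 = cv2.imread('images/1-1.jpg')
img2 = cv2.imread('images/1-2.jpg')
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

# Step 3: match descriptors (brute-force shown here; FLANN is the other tested option).
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)

# Step 4: Lowe's ratio test to drop mismatches, then a RANSAC homography estimate.
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 4)

# Step 5: warp img1 into img2's frame and paste img2 on top (direct overlay).
h2, w2 = img2.shape[:2]
canvas = cv2.warpPerspective(img1, H, (w2 * 2, h2))  # crude canvas size, for illustration only
canvas[0:h2, 0:w2] = img2
cv2.imwrite('pipeline_sketch.JPG', canvas)
```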
## Results
The images in the repository root are the outputs of the different algorithm combinations, named as: image-enhancement method_feature-matching method_image-pair number (the enhancement prefix is omitted when no enhancement was applied).

## Implementation Notes
Step 1: Although image enhancement comes first in the pipeline, it was actually the last part to be implemented: once the other parts were tuned and a baseline existed, enhancement was added to target the specific defects seen in the results.
Step 2: Once the SIFT algorithm is well understood, other feature extractors such as SURF could be considered (a sketch is included after this list).
Step 3: Judging from the current results, the two matching methods select somewhat different point pairs, but the final stitching results differ very little.
Step 4: The most painless step.
Step 5: Two stitching strategies were originally considered: direct overlay and weighted blending. Experiments showed that under ideal conditions both can produce a perfect stitch, but as the inputs get harder, weighted blending produces ghosting because of alignment errors and weights that cannot be computed precisely, while direct overlay keeps most of the image content clearly visible even when some error remains. In the end only direct overlay was kept; a sketch of the dropped weighted-blend idea follows this list.
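
For reference, the weighted-blend idea mentioned in Step 5 can be sketched as distance-based feathering of the region covered by the second image. This is a hypothetical sketch of the variant that was dropped, not code from this repository; `warped`, `min_x` and `min_y` are assumed to come from the same corner/bounding-box computation that `PasteTwoImages` in `utils.py` performs.

```python
import cv2
import numpy as np

def feather_blend(warped, img2, min_x, min_y):
    """Weighted (feathered) paste of img2 onto the warped canvas.

    Hypothetical sketch of the dropped weighted-blend strategy; `warped` is the
    output of cv2.warpPerspective and (min_x, min_y) the canvas offset, as in
    PasteTwoImages.
    """
    h2, w2 = img2.shape[:2]
    canvas = warped.copy()
    roi = canvas[-min_y:h2 - min_y, -min_x:w2 - min_x]
    # Weight each pixel of img2 by its distance to the nearest img2 border,
    # so the seam fades out instead of being a hard edge.
    mask = np.ones((h2, w2), dtype=np.uint8)
    mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = 0
    alpha = cv2.distanceTransform(mask, cv2.DIST_L2, 3)
    alpha = (alpha / alpha.max())[..., None]          # shape (h2, w2, 1), range [0, 1]
    # Blend only where the warped image has content; elsewhere take img2 as-is.
    has_content = roi.sum(axis=2, keepdims=True) > 0
    blended = np.where(has_content, alpha * img2 + (1 - alpha) * roi, img2)
    canvas[-min_y:h2 - min_y, -min_x:w2 - min_x] = blended.astype(np.uint8)
    return canvas
```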
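
And for the SURF alternative raised in Step 2, the change would mostly be confined to how the detector is constructed; SURF also produces float descriptors, so the brute-force/FLANN matching in `utils.py` would work unchanged. A minimal sketch, assuming an opencv-contrib build with the non-free modules enabled (SURF is patented and not shipped in default builds); the Hessian threshold value is illustrative.

```python
import cv2

# Possible drop-in replacement for the SIFT detector in FindKeyPointsAndMatching:
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # threshold chosen arbitrarily here
gray = cv2.cvtColor(cv2.imread('images/1-1.jpg'), cv2.COLOR_BGR2GRAY)
kp, des = surf.detectAndCompute(gray, None)
print(len(kp))  # number of detected keypoints
```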
## Problems and Difficulties
Step 1: Both enhancement methods tested so far aim at brightness normalization, to avoid the unnatural look caused purely by brightness differences when the feature matching itself is good. Unfortunately, neither is satisfactory yet. Brightness matching shifts the brightness of one image by the difference between the two images' mean brightness, but the test results are almost indistinguishable from applying no enhancement at all. Histogram equalization performs well in bright regions but does almost nothing for dark regions, and on low-resolution images it severely degrades sharpness, producing mosaic-like color blocks.

Step 2: SIFT feature extraction is very sensitive to image brightness. When the brightness difference between the two images is large, even distinctive features easily produce many mismatches, and the matching eventually fails. This is what motivated introducing image enhancement for brightness normalization, although the problem has not been solved well yet.

Steps 3 and 4: These mostly just call existing APIs, so they were not difficult.

Step 5: OpenCV's image-stitching API is wrapped at too high a level and offers no basic building blocks, so all of this code is hand-written. The most critical part is understanding the homography matrix: if it is not used to compute the maximum size of the transformed canvas, the warped image may end up outside the canvas. A condensed sketch of that canvas computation follows this section.
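
The canvas-size computation mentioned in Step 5 boils down to pushing the four corners of the first image through the homography, taking the bounding box of those corners together with the second image's corners, and folding the resulting offset back into the warp. The sketch below is only a condensed illustration of that idea; the full version is the `PasteTwoImages` class in `utils.py`.

```python
import cv2
import numpy as np

def canvas_bounds(img1, img2, H):
    """Canvas size and shift matrix so the warped img1 and img2 both stay in view."""
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    corners1 = np.float32([[0, 0], [0, h1], [w1, h1], [w1, 0]]).reshape(-1, 1, 2)
    corners2 = np.float32([[0, 0], [0, h2], [w2, h2], [w2, 0]]).reshape(-1, 1, 2)
    warped_corners1 = cv2.perspectiveTransform(corners1, H)   # where img1's corners land
    all_corners = np.concatenate((corners2, warped_corners1), axis=0)
    min_x, min_y = np.int32(all_corners.min(axis=0).ravel())
    max_x, max_y = np.int32(all_corners.max(axis=0).ravel())
    # Translation that moves the bounding box's top-left corner to (0, 0);
    # compose it with H (shift @ H) before calling cv2.warpPerspective.
    shift = np.array([[1, 0, -min_x], [0, 1, -min_y], [0, 0, 1]], dtype=np.float64)
    return (max_x - min_x, max_y - min_y), shift
```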
## Project Members
Winter : https://github.com/zysc1996
JocelynWang2333 : https://github.com/JocelynWang2333
Sunmile : https://github.com/Sunmilelj
Rongxin : https://github.com/dengrongxin

## Future Work
There is still plenty left to try in this project; it will be updated from time to time.

--------------------------------------------------------------------------------
/__pycache__/utils.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/__pycache__/utils.cpython-37.pyc

--------------------------------------------------------------------------------
/brute_1.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/brute_1.JPG

--------------------------------------------------------------------------------
/brute_2.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/brute_2.JPG

--------------------------------------------------------------------------------
/brute_3.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/brute_3.JPG

--------------------------------------------------------------------------------
/btute_4.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/btute_4.JPG

--------------------------------------------------------------------------------
/flann_1.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/flann_1.JPG

--------------------------------------------------------------------------------
/flann_2.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/flann_2.JPG

--------------------------------------------------------------------------------
/flann_3.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/flann_3.JPG

--------------------------------------------------------------------------------
/flann_4.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/flann_4.JPG

--------------------------------------------------------------------------------
/images/1-1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/images/1-1.jpg

--------------------------------------------------------------------------------
/images/1-2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/images/1-2.jpg

--------------------------------------------------------------------------------
/images/2-1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/images/2-1.jpg

--------------------------------------------------------------------------------
/images/2-2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/images/2-2.jpg

--------------------------------------------------------------------------------
/images/3-1.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/images/3-1.JPG

--------------------------------------------------------------------------------
/images/3-2.JPG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/images/3-2.JPG

--------------------------------------------------------------------------------
/images/4-1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/images/4-1.jpg

--------------------------------------------------------------------------------
/images/4-2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zysc1996/Image-Stitching/6f2de2b223370cecaf9662914463cb8db7fa1bc8/images/4-2.jpg

--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import cv2
import utils

img = [0, 0]
print("Please input the two images' locations and finish with 'Enter'")
img[0] = input()  # e.g. images/1-1.jpg
img[1] = input()  # e.g. images/1-2.jpg

if img[0] and img[1]:
    image1 = cv2.imread(img[0])
    image2 = cv2.imread(img[1])
    # Optional brightness normalization before feature extraction:
    ImgEnhance = utils.DataEnhance()
    # image1, image2 = ImgEnhance.BrightnessNormalization(image1, image2)
    # Detect SIFT keypoints and estimate the homography ('brute' or 'flann' matching).
    stitch_match = utils.FindKeyPointsAndMatching()
    kp1, kp2 = stitch_match.get_key_points(img1=image1, img2=image2)
    homo_matrix = stitch_match.match(kp1, kp2, 'brute')
    # Warp image1 onto a canvas large enough for both images and paste image2 on top.
    stitch_merge = utils.PasteTwoImages()
    merge_image = stitch_merge(image1, image2, homo_matrix)
    cv2.imwrite('2-output.JPG', merge_image)
    # cv2.imshow('output.JPG', merge_image)
    # if cv2.waitKey() == 27:
    #     cv2.destroyAllWindows()

    print('\n=======>Output saved!')
else:
    print('Please input images locations in right way.')

--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
import cv2
import numpy as np


class DataEnhance:
    def __init__(self):
        pass

    def BrightnessNormalization(self, img1, img2):
        # Shift img1's brightness so that its mean gray level matches img2's.
        img1gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
        img2gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
        img1_brt = np.mean(img1gray)
        img2_brt = np.mean(img2gray)
        bias = img1_brt - img2_brt
        # cv2.add / cv2.subtract saturate at 0 and 255 instead of wrapping around
        # the way raw uint8 arithmetic would.
        shift = np.full_like(img1, int(round(abs(bias))))
        if bias > 0:
            img1 = cv2.subtract(img1, shift)
        else:
            img1 = cv2.add(img1, shift)
        return img1, img2

    def HistogramEqualization(self, img):
        # Equalize each BGR channel independently.
        (b, g, r) = cv2.split(img)
        bH = cv2.equalizeHist(b)
        gH = cv2.equalizeHist(g)
        rH = cv2.equalizeHist(r)
        result = cv2.merge((bH, gH, rH))
        return result


class FindKeyPointsAndMatching:
    def __init__(self):
        self.sift = cv2.xfeatures2d.SIFT_create()
        self.brute = cv2.BFMatcher()
        self.FLANN_INDEX_KDTREE = 1
        self.index_params = dict(algorithm=self.FLANN_INDEX_KDTREE, trees=5)
        self.search_params = dict(checks=50)
        self.flann = cv2.FlannBasedMatcher(self.index_params, self.search_params)

    def get_key_points(self, img1, img2):
        # Feature detection is done on grayscale versions of the images.
        g_img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
        g_img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
        kp1, kp2 = {}, {}
        print('=======>Detecting key points!')
        kp1['kp'], kp1['des'] = self.sift.detectAndCompute(g_img1, None)
        kp2['kp'], kp2['des'] = self.sift.detectAndCompute(g_img2, None)
        return kp1, kp2

    def match(self, kp1, kp2, MatchMethod='brute'):
        print('=======>Matching key points!')
        if MatchMethod == 'brute':
            matches = self.brute.knnMatch(kp1['des'], kp2['des'], k=2)
        elif MatchMethod == 'flann':
            matches = self.flann.knnMatch(kp1['des'], kp2['des'], k=2)
        # Lowe's ratio test: keep a match only if it is clearly better than its runner-up.
        good_matches = []
        for i, (m, n) in enumerate(matches):
            if m.distance < 0.7 * n.distance:
                good_matches.append((m.trainIdx, m.queryIdx))
        # At least four point pairs are needed to estimate a homography.
        if len(good_matches) > 4:
            key_points1 = kp1['kp']
            key_points2 = kp2['kp']
            matched_kp1 = np.float32(
                [key_points1[i].pt for (_, i) in good_matches]
            )
            matched_kp2 = np.float32(
                [key_points2[i].pt for (i, _) in good_matches]
            )

            print('=======>Random sampling and computing the homography matrix!')
            # RANSAC rejects the remaining mismatches; 4 is the reprojection threshold in pixels.
            homo_matrix, _ = cv2.findHomography(matched_kp1, matched_kp2, cv2.RANSAC, 4)
            return homo_matrix
        else:
            return None


class PasteTwoImages:
    def __init__(self):
        pass

    def __call__(self, img1, img2, homo_matrix):
        h1, w1 = img1.shape[0], img1.shape[1]
        h2, w2 = img2.shape[0], img2.shape[1]
        # Corners of both images; img1's corners are pushed through the homography
        # to find where the warped img1 lands in img2's coordinate frame.
        rect1 = np.array([[0, 0], [0, h1], [w1, h1], [w1, 0]], dtype=np.float32).reshape((4, 1, 2))
        rect2 = np.array([[0, 0], [0, h2], [w2, h2], [w2, 0]], dtype=np.float32).reshape((4, 1, 2))
        trans_rect1 = cv2.perspectiveTransform(rect1, homo_matrix)
        total_rect = np.concatenate((rect2, trans_rect1), axis=0)
        min_x, min_y = np.int32(total_rect.min(axis=0).ravel())
        max_x, max_y = np.int32(total_rect.max(axis=0).ravel())
        # Translate everything so the bounding box starts at (0, 0), then warp img1
        # onto a canvas large enough to hold both images.
        shift_to_zero_matrix = np.array([[1, 0, -min_x], [0, 1, -min_y], [0, 0, 1]])
        trans_img1 = cv2.warpPerspective(img1, shift_to_zero_matrix.dot(homo_matrix),
                                         (max_x - min_x, max_y - min_y))
        # Direct overlay: img2 simply overwrites its own region of the canvas.
        trans_img1[-min_y:h2 - min_y, -min_x:w2 - min_x] = img2
        return trans_img1

--------------------------------------------------------------------------------