├── README.md ├── Result ├── Result234.jpg └── stitching_result.jpg ├── data ├── S1.jpg ├── S2.jpg ├── S3.jpg └── S5.jpg └── src ├── img_stitch.cpp └── stitcher_class.cpp /README.md: -------------------------------------------------------------------------------- 1 | # Image-Stitching 2 | Stitch Images using OpenCV in C++ 3 | 4 | ## Project Description 5 | This repository uses OpenCV to create a panorama-like version of videos. 6 | 7 | The videos are captured from 3 video input cameras and are stitched frame by frame using the two methods discussed below. 8 | 9 | ## Algorithm 10 | I have used two ways to perform image stitching. 11 | 12 | The first uses the predefined Stitcher class (see the OpenCV Stitcher Class Documentation). 13 | You just have to input the frames as a vector of images to the function stitcher.stitch() and it returns the stitched output image. 14 | 15 | The second method involves more steps, which I will discuss here. 16 | 17 | First, the frames from the 3 video input devices are captured. Then the pairwise H matrix (homography matrix) is calculated. 18 | 19 | The H matrix is a 3×3 matrix. The homography relates the pixel coordinates in the two images. 20 | 21 | When it is applied to every pixel, the new image is a warped version of the original image. 22 | 23 | 24 | Let us assume the 3 input frames are I1, I2 and I3, in order from left to right. (This assumption is possible in my case since the camera positions are fixed and I have written the code keeping in mind which camera will be the leftmost.) 25 | 26 | Now, we calculate the H matrix for the image pairs (I1, I2), let us call it H12, and (I2, I3), let's call it H23. Since I1 and I3 will not have a lot of area in common, there is no point in calculating the H matrix for that image pair.
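To make the homography step concrete, here is a minimal sketch (not from this repository; the function name is mine) of what applying an H matrix to a single pixel coordinate means — the point is lifted to homogeneous coordinates, multiplied by H, and divided by the third component:

```cpp
#include <array>

// Apply a 3x3 homography H to a pixel (x, y): lift the point to
// homogeneous coordinates, multiply by H, then divide by the third
// component to get back to pixel coordinates.
std::array<double, 2> applyHomography(const double H[3][3], double x, double y)
{
    double xh = H[0][0] * x + H[0][1] * y + H[0][2];
    double yh = H[1][0] * x + H[1][1] * y + H[1][2];
    double w  = H[2][0] * x + H[2][1] * y + H[2][2];
    // w is nonzero for points where the transform is defined.
    return { xh / w, yh / w };
}
```

For example, with H equal to a pure translation by (10, 5), the pixel (2, 3) maps to (12, 8); cv::warpPerspective performs exactly this mapping for every pixel of the image.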
27 | 28 | For example, the images in the data folder: 29 | 30 | Image 1: 31 | ![Alt text](https://github.com/Manasi94/Image-Stitching/blob/master/data/S1.jpg "Image1") 32 | 33 | Image 2: 34 | ![Alt text](https://github.com/Manasi94/Image-Stitching/blob/master/data/S2.jpg "Image2") 35 | 36 | Image 3: 37 | ![Alt text](https://github.com/Manasi94/Image-Stitching/blob/master/data/S3.jpg "Image3") 38 | 39 | Choose an image as the reference image. Among the 3 images presented to us, the image I2 has the most in common with I1 and with I3 (since it is in the center). Hence we choose I2 as the reference image. This is also the reason why we do not compute the H matrix for the (I1, I3) image pair: we only compute the homography of images with the reference image, I2. 40 | 41 | Now, using our H matrices, we warp and stitch the image pairs (I1, I2), now called I12, and (I2, I3), now called I23. 42 | 43 | Again we calculate the H matrix for images I12 and I23 and warp them to finally get the output I123. Tadaa! 44 | 45 | 46 | ## Implementation 47 | I have chosen the C++ language. 48 | Calculation of the H matrix: 49 | 50 | 51 | 1. Detecting the KeyPoints in an image using SURF. 52 | 53 | SURF stands for Speeded Up Robust Features. 54 | This detects the keypoints in the images. 55 | SURF is a local feature detector and descriptor (see Introduction to SURF). 56 | 57 | To improve the feature detection you can change the value of the variable *minHessian*, which is the **Hessian Threshold**. 58 | 59 | To increase the number of keypoints, reduce the Hessian Threshold. 60 | 61 | 2. Calculating Descriptors: 62 | This computes and extracts the descriptors from the images using the keypoints from the previous step. 63 | 64 | 3. FLANN Matching: 65 | The Fast Library for Approximate Nearest Neighbors finds the best matches for local image features [3]. 66 | 67 | 4. Filtering, allowing only the good matches, and finding the corresponding keypoints. 68 | 69 | 5.
Finding a perspective transformation between two planes using RANSAC. 70 | 71 | RANdom SAmple Consensus (RANSAC) is a general parameter estimation approach designed to cope with a large 72 | proportion of outliers in the input data. 73 | 74 | It takes the final keypoints from the two images as input, along with the method of model estimation. You can also change the method from RANSAC to LMEDS. The advantages and disadvantages of each method are discussed in [5]. 75 | 76 | 77 | 78 | ## Errors and How They Were Fixed 79 | 1. The major error was the device capture index numbers. You can check these indices using VLC or other methods and modify these numbers in the code. 80 | 81 | 2. The order of the images is of no concern in the first method. In the second method, however, the order is very important, as a wrong order can give you a poor output. 82 | 83 | 3. In the second method, after stitching the two image pairs, the result also had a black region. This region was hampering the further H matrix calculations and hence had to be removed. At first, I observed that the black region occupied half of the image space and hence I decided to simply cut that part off. However, once I shifted from images to video, I realized part of my image was also being chopped off. Hence I decided to use contours to solve this problem. The contours were sorted in descending order of size and the largest contour was selected and updated as the image space. 84 | ![Alt text](https://github.com/Manasi94/Image-Stitching/blob/master/Result/result234.jpg "Error Result") 85 | 86 | ## Conclusions 87 | The second method works excellently with images. It is, however, not producing excellent results with videos. The first method gives decent results with videos, but is a little slow. 88 | ![Alt text](https://github.com/Manasi94/Image-Stitching/blob/master/Result/stitching_result.jpg "Result") 89 | 90 | ## Resources and References 91 | 1. Dr. Gerhard Roth's notes on homography 92 | 2. Tutorial on 2D homographies at the University of Toronto 93 | 3.
Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration 94 | 4. Overview of the RANSAC Algorithm 95 | 5. Performance Evaluation of RANSAC Family 96 | 97 | ## How to Run This Code 98 | You need to change the names of the input and output files in CMakeLists.txt. 99 | 100 | Go to this directory (Image_Stitching) in your console and type "make". 101 | 102 | After it successfully links and builds the target, execute the output file in the console. Example: 103 | 104 | > ./img_stitch 105 | -------------------------------------------------------------------------------- /Result/Result234.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manasi94/Image-Stitching/f3fd302499a4dbecdc08478818eb753ea72ee67d/Result/Result234.jpg -------------------------------------------------------------------------------- /Result/stitching_result.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manasi94/Image-Stitching/f3fd302499a4dbecdc08478818eb753ea72ee67d/Result/stitching_result.jpg -------------------------------------------------------------------------------- /data/S1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manasi94/Image-Stitching/f3fd302499a4dbecdc08478818eb753ea72ee67d/data/S1.jpg -------------------------------------------------------------------------------- /data/S2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manasi94/Image-Stitching/f3fd302499a4dbecdc08478818eb753ea72ee67d/data/S2.jpg -------------------------------------------------------------------------------- /data/S3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manasi94/Image-Stitching/f3fd302499a4dbecdc08478818eb753ea72ee67d/data/S3.jpg
-------------------------------------------------------------------------------- /data/S5.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Manasi94/Image-Stitching/f3fd302499a4dbecdc08478818eb753ea72ee67d/data/S5.jpg -------------------------------------------------------------------------------- /src/img_stitch.cpp: -------------------------------------------------------------------------------- 1 | #include <stdio.h> 2 | #include <iostream> 3 | 4 | #include "opencv2/core/core.hpp" 5 | #include "opencv2/features2d/features2d.hpp" 6 | #include "opencv2/highgui/highgui.hpp" 7 | #include "opencv2/nonfree/nonfree.hpp" 8 | #include "opencv2/calib3d/calib3d.hpp" 9 | #include "opencv2/imgproc/imgproc.hpp" 10 | #include <unistd.h> 11 | using namespace cv; 12 | 13 | void readme() 14 | { 15 | printf("This project takes input from 3 video streams and stitches the videos, frame by frame.\n"); 16 | } 17 | 18 | 19 | Mat calculate_h_matrix(Mat image1, Mat image2, Mat gray_image1, Mat gray_image2) 20 | { 21 | 22 | 23 | //-- Step 1: Detect the keypoints using the SURF detector 24 | int minHessian = 200; 25 | SurfFeatureDetector detector( minHessian ); 26 | std::vector< KeyPoint > keypoints_object, keypoints_scene; 27 | detector.detect( gray_image1, keypoints_object ); 28 | detector.detect( gray_image2, keypoints_scene ); 29 | 30 | //-- Step 2: Calculate descriptors (feature vectors) 31 | SurfDescriptorExtractor extractor; 32 | Mat descriptors_object, descriptors_scene; 33 | extractor.compute( gray_image1, keypoints_object, descriptors_object ); 34 | extractor.compute( gray_image2, keypoints_scene, descriptors_scene ); 35 | 36 | //-- Step 3: Matching descriptor vectors using FLANN matcher 37 | 38 | FlannBasedMatcher matcher; 39 | std::vector< DMatch > matches; 40 | matcher.match( descriptors_object, descriptors_scene, matches ); 41 | 42 | 43 | 44 | double max_dist = 0; double min_dist
= 100; 45 | 46 | //-- Quick calculation of max and min distances between keypoints 47 | for( int i = 0; i < descriptors_object.rows; i++ ) 48 | { 49 | double dist = matches[i].distance; 50 | if( dist < min_dist ) min_dist = dist; 51 | if( dist > max_dist ) max_dist = dist; 52 | } 53 | 54 | printf("-- Max dist: %f \n", max_dist ); 55 | printf("-- Min dist: %f \n", min_dist ); 56 | 57 | 58 | //-- Use only "good" matches (i.e. whose distance is less than 3*min_dist ) 59 | std::vector< DMatch > good_matches; 60 | cv::Mat result; 61 | // cv::Mat result23; 62 | cv::Mat H; 63 | // cv::Mat H23; 64 | for( int i = 0; i < descriptors_object.rows; i++ ) 65 | { 66 | if( matches[i].distance < 3*min_dist ) 67 | { good_matches.push_back( matches[i]); } 68 | } 69 | std::vector< Point2f > obj; 70 | std::vector< Point2f > scene; 71 | 72 | for( int i = 0; i < good_matches.size(); i++ ) 73 | { 74 | //-- Get the keypoints from the good matches 75 | obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt ); 76 | scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt ); 77 | } 78 | 79 | 80 | // Find the Homography Matrix for img 1 and img2 81 | H = findHomography( obj, scene, CV_RANSAC ); 82 | return H; 83 | } 84 | 85 | 86 | Mat stitch_image(Mat image1, Mat image2, Mat H) 87 | { 88 | 89 | cv::Mat result; 90 | // cv::Mat result23; 91 | warpPerspective(image1,result,H,cv::Size(image1.cols+image2.cols,image1.rows)); 92 | cv::Mat half(result,cv::Rect(0,0,image2.cols,image2.rows)); 93 | image2.copyTo(half); 94 | 95 | 96 | // cv::resize(result,result, Size(image1.cols,image1.rows),INTER_LINEAR); 97 | 98 | // cv::imshow("Result", result); 99 | 100 | // Mat ycrcb; 101 | 102 | // cvtColor(result,ycrcb,CV_BGR2YCrCb); 103 | 104 | // vector<Mat> channels; 105 | // split(ycrcb,channels); 106 | 107 | // equalizeHist(channels[0], channels[0]); 108 | 109 | // Mat dst; 110 | // merge(channels,ycrcb); 111 | 112 | // cvtColor(ycrcb,dst,CV_YCrCb2BGR); 113 | 114 | //
cv::imshow("Hist_Equalized_Result", dst); 115 | // cv::resize(dst,dst,image1.size()); 116 | // cv::imwrite("./Result/Result.jpg", dst); 117 | // // cv::imwrite("./data/cam_left.jpg", image1); 118 | // // cv::imwrite("./data/cam_right.jpg", image2); 119 | // waitKey(0); 120 | return result; 121 | } 122 | 123 | 124 | // Mat hist_equalization() 125 | // { 126 | // cvtColor(img2, img_gray2, COLOR_BGR2GRAY); 127 | // Mat ycrcb; 128 | // cvtColor(result,ycrcb,CV_BGR2YCrCb); 129 | // vector<Mat> channels; 130 | // split(ycrcb,channels); 131 | // equalizeHist(channels[0], channels[0]); 132 | // Mat dst; 133 | // merge(channels,ycrcb); 134 | // cvtColor(ycrcb,img,CV_YCrCb2BGR); 135 | 136 | // } 137 | 138 | int main( int argc, char** argv ) 139 | { 140 | readme(); 141 | // if( argc != 4 ) 142 | // { readme(); return -1; } 143 | Mat gray_image1; 144 | Mat gray_image2; 145 | Mat gray_image3; 146 | Mat gray_image4; 147 | Mat img, img2; 148 | Mat gray_img; 149 | Mat result; 150 | Mat img_gray, img_gray2, img_gray3, img_gray4, img_gray5; 151 | Mat img3, img4, img5, img6; 152 | 153 | // Load the images 154 | VideoCapture cap1(1); 155 | VideoCapture cap2(2); 156 | VideoCapture cap3(3); 157 | 158 | 159 | for(;;) 160 | { 161 | 162 | Mat image1; 163 | cap1 >> image1; // get a new frame from camera 164 | cvtColor(image1, gray_image1, COLOR_BGR2GRAY); 165 | 166 | Mat image2; 167 | cap2 >> image2; // get a new frame from camera 168 | cvtColor(image2, gray_image2, COLOR_BGR2GRAY); 169 | 170 | Mat image3; 171 | cap3 >> image3; // get a new frame from camera 172 | cvtColor(image3, gray_image3, COLOR_BGR2GRAY); 173 | 174 | imshow("first image",image1); 175 | cv::imwrite("./data/Image1.jpg", image1); 176 | 177 | imshow("second image",image2); 178 | cv::imwrite("./data/Image2.jpg", image2); 179 | 180 | imshow("third image",image3); 181 | cv::imwrite("./data/Image3.jpg", image3); 182 | 183 | if( !gray_image1.data || !gray_image2.data ) 184 | { std::cout<< " --(!)
Error reading images " << std::endl; return -1; } 185 | 186 | 187 | 188 | 189 | // Mat image1 = imread("../Image_stitching/data/S1.jpg"); 190 | // cvtColor(image1, gray_image1, COLOR_BGR2GRAY); 191 | 192 | // Mat image2 = imread("../Image_stitching/data/S2.jpg"); 193 | // cvtColor(image2, gray_image2, COLOR_BGR2GRAY); 194 | 195 | // Mat image3 = imread("../Image_stitching/data/S3.jpg"); 196 | // cvtColor(image3, gray_image3, COLOR_BGR2GRAY); 197 | 198 | // Mat image4 = imread("../Image_stitching/data/S5.jpg"); 199 | // cvtColor(image4, gray_image4, COLOR_BGR2GRAY); 200 | 201 | // imshow("first image",image1); 202 | // imshow("second image",image2); 203 | // imshow("third image",image3); 204 | // imshow("fourth image", image4); 205 | 206 | // if( !gray_image1.data || !gray_image2.data ) 207 | // { std::cout<< " --(!) Error reading images " << std::endl; return -1; } 208 | 209 | 210 | 211 | 212 | Mat H12 = calculate_h_matrix(image2,image1, gray_image2, gray_image1); 213 | // Mat H13 = calculate_h_matrix(image1,image3, gray_image1, gray_image3); 214 | Mat H23 = calculate_h_matrix(image3,image2, gray_image3, gray_image2); 215 | // Mat H34 = calculate_h_matrix(image3,image4, gray_image3, gray_image4); 216 | 217 | 218 | 219 | /*The main logic is to choose a central image which is common to the 3 images. 220 | In this code, there are 3 images, in the order: Image1, Image2 and Image 3 respectively.
221 | The Image2 is thus common to all three images and thus we choose this as the central image 222 | and calculate the homography matrices of the other images with respect to this image*/ 223 | 224 | //Stitch Image 2 and Image 3; the result is saved in img 225 | img = stitch_image(image3,image2,H23); 226 | cvtColor(img, img_gray, COLOR_BGR2GRAY); 227 | 228 | // //Finding the largest contour, i.e. removing the black region from the image 229 | threshold(img_gray, img_gray,25, 255,THRESH_BINARY); //Threshold the gray 230 | vector<vector<Point> > contours; // Vector for storing contours 231 | vector<Vec4i> hierarchy; 232 | findContours( img_gray, contours, hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE ); // Find the contours in the image 233 | double largest_area = 0; 234 | int largest_contour_index = 0; 235 | Rect bounding_rect; 236 | 237 | for( int i = 0; i< contours.size(); i++ ) // iterate through each contour. 238 | { 239 | double a=contourArea( contours[i],false); // Find the area of contour 240 | if(a>largest_area){ 241 | largest_area=a; 242 | largest_contour_index=i; //Store the index of largest contour 243 | bounding_rect=boundingRect(contours[i]); // Find the bounding rectangle for biggest contour 244 | } 245 | 246 | } 247 | 248 | // Scalar color( 255,255,255); 249 | img = img(Rect(bounding_rect.x, bounding_rect.y, bounding_rect.width, bounding_rect.height)); 250 | 251 | 252 | 253 | 254 | 255 | //Stitch Image 1 and Image 2; the result is saved in img2 256 | img2 = stitch_image(image2, image1, H12); 257 | cvtColor(img2, img_gray2, COLOR_BGR2GRAY); 258 | 259 | //Finding the largest contour, i.e. removing the black region from the image 260 | threshold(img_gray2, img_gray2,25, 255,THRESH_BINARY); //Threshold the gray 261 | vector<vector<Point> > contours2; // Vector for storing contours 262 | vector<Vec4i> hierarchy2; 263 | findContours( img_gray2, contours2, hierarchy2,CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE ); // Find the contours in the image 264 | double largest_area2 = 0; 265 | int largest_contour_index2 = 0; 266 | Rect bounding_rect2; 267 | 268
| for( int i = 0; i< contours2.size(); i++ ) // iterate through each contour. 269 | { 270 | double a=contourArea( contours2[i],false); // Find the area of contour 271 | if(a>largest_area2){ 272 | largest_area2=a; 273 | largest_contour_index2=i; //Store the index of largest contour 274 | bounding_rect2=boundingRect(contours2[i]); // Find the bounding rectangle for biggest contour 275 | } 276 | 277 | } 278 | 279 | img2 = img2(Rect(bounding_rect2.x, bounding_rect2.y, bounding_rect2.width, bounding_rect2.height)); 280 | 281 | // //Stitch Image 3 and Image 4 and saved in img3 282 | // img3 = stitch_image(image3, image4, H34); 283 | // img3 = img3(Rect(0,0,(img3.cols/2),(img3.rows))); 284 | // cvtColor(img3, img_gray3, COLOR_BGR2GRAY); 285 | 286 | 287 | //Show img 288 | cv::imshow("Hist_Equalized_Result of Image 2 and Image 3", img); 289 | cv::imwrite("./Result/Result23.jpg", img); 290 | // waitKey(0); 291 | 292 | //Show img2 293 | cv::imshow("Hist_Equalized_Result of Image 1 and Image 2", img2); 294 | cv::imwrite("./Result/Result12.jpg", img2); 295 | // waitKey(0); 296 | 297 | // //Show img3 298 | // // cv::resize(img2,img2,image1.size()); 299 | // cv::imshow("Hist_Equalized_Result of Image 3 and Image 4", img3); 300 | // cv::imwrite("./Result/Result34.jpg", img3); 301 | // waitKey(0); 302 | 303 | 304 | // Stitch (Image 1 and Image 2) and (Image 2 and Image 3) 305 | Mat H123 = calculate_h_matrix(img,img2, img_gray, img_gray2); 306 | img4 = stitch_image(img,img2, H123); 307 | // img4 = img4(Rect(0,0,(img4.cols*7/10),(img4.rows))); 308 | // cvtColor(img4, img_gray4, COLOR_BGR2GRAY); 309 | 310 | // //Stitch (Image 2 and Image 3) and (Image 3 and Image 4) 311 | // Mat H234 = calculate_h_matrix(img3,img, img_gray3, img_gray); 312 | // img5 = stitch_image(img3,img, H234); 313 | // img5 = img5(Rect(0,0,(img5.cols*3/4),(img5.rows))); 314 | // cvtColor(img5, img_gray5, COLOR_BGR2GRAY); 315 | 316 | // //Stitch (Image 1 and Image 2 and Image 3) and (Image 2 and Image 3 and Image 
4) 317 | // Mat H1234 = calculate_h_matrix(img5,img4, img_gray5, img_gray4); 318 | // img6 = stitch_image(img5,img4, H1234); 319 | 320 | cv::imshow("Hist_Equalized_Result of Image 1 and Image 2 and Image 3", img4); 321 | cv::imwrite("./Result/Result123.jpg", img4); 322 | // waitKey(0); 323 | 324 | // cv::imshow("Hist_Equalized_Result of Image 2 and Image 3 and Image 4" , img5); 325 | // cv::imwrite("./Result/Result234.jpg", img5); 326 | // waitKey(0); 327 | 328 | // cv::imshow("Hist_Equalized_Result of Image 1 and Image 2 and Image 3 and Image 4" , img6); 329 | // cv::imwrite("./Result/Result1234.jpg", img6); 330 | // waitKey(0); 331 | if(waitKey(30) >= 0) break; 332 | else usleep(2000000); 333 | 334 | } 335 | 336 | } -------------------------------------------------------------------------------- /src/stitcher_class.cpp: -------------------------------------------------------------------------------- 1 | #include <stdio.h> 2 | #include "opencv2/opencv.hpp" 3 | #include "opencv2/stitching/stitcher.hpp" 4 | #include <iostream> 5 | #include <vector> 6 | #include <unistd.h> 7 | 8 | #ifdef _DEBUG 9 | #pragma comment(lib, "opencv_core246d.lib") 10 | #pragma comment(lib, "opencv_imgproc246d.lib") //MAT processing 11 | #pragma comment(lib, "opencv_highgui246d.lib") 12 | #pragma comment(lib, "opencv_stitching246d.lib") 13 | 14 | #else 15 | #pragma comment(lib, "opencv_core246.lib") 16 | #pragma comment(lib, "opencv_imgproc246.lib") 17 | #pragma comment(lib, "opencv_highgui246.lib") 18 | #pragma comment(lib, "opencv_stitching246.lib") 19 | #endif 20 | 21 | using namespace cv; 22 | using namespace std; 23 | 24 | 25 | int main() 26 | { 27 | 28 | // Load the images 29 | VideoCapture cap1(1); 30 | VideoCapture cap2(2); 31 | VideoCapture cap3(3); 32 | 33 | for(;;) 34 | { 35 | Mat image1; 36 | cap1 >> image1; // get a new frame from camera 37 | 38 | Mat image2; 39 | cap2 >> image2; // get a new frame from camera 40 | 41 | Mat image3; 42 | cap3 >> image3; // get a new frame from camera 43 | 44 | imshow("first
image",image1); 45 | cv::imwrite("./data/Image1.jpg", image1); 46 | 47 | imshow("second image",image2); 48 | cv::imwrite("./data/Image2.jpg", image2); 49 | 50 | imshow("third image",image3); 51 | cv::imwrite("./data/Image3.jpg", image3); 52 | 53 | if( !image1.data || !image2.data || !image3.data ) 54 | { std::cout<< " --(!) Error reading images " << std::endl; return -1; } 55 | 56 | 57 | vector< Mat > vImg; 58 | Mat rImg; 59 | 60 | vImg.push_back( image1 ); 61 | vImg.push_back( image2 ); 62 | vImg.push_back( image3 ); 63 | 64 | 65 | Stitcher stitcher = Stitcher::createDefault(); 66 | 67 | 68 | unsigned long AAtime=0, BBtime=0; //check processing time 69 | AAtime = getTickCount(); //check processing time 70 | 71 | Stitcher::Status status = stitcher.stitch(vImg, rImg); 72 | 73 | BBtime = getTickCount(); //check processing time 74 | printf("%.2lf sec \n", (BBtime - AAtime)/getTickFrequency() ); //check processing time 75 | 76 | if (Stitcher::OK == status) 77 | { 78 | imshow("Result",rImg); 79 | cv::imwrite("./Result/stitching_result.jpg", rImg); 80 | } 81 | else 82 | printf("Stitching failed.\n"); 83 | 84 | if(waitKey(30) >= 0) break; 85 | else usleep(2000000); 86 | // waitKey(0); 87 | 88 | } 89 | } --------------------------------------------------------------------------------
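As a footnote, the "good matches" filter used in src/img_stitch.cpp (keep a match only if its descriptor distance is below 3×min_dist) can be sketched without OpenCV; the Match struct here is a hypothetical stand-in for cv::DMatch, and the function name is mine:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical stand-in for cv::DMatch; only the distance field matters here.
struct Match { float distance; };

// Keep matches whose distance is below 3 * (smallest distance seen),
// mirroring the filter inside calculate_h_matrix().
std::vector<Match> filterGoodMatches(const std::vector<Match>& matches)
{
    if (matches.empty()) return {};
    float min_dist = matches[0].distance;
    for (const Match& m : matches)
        min_dist = std::min(min_dist, m.distance);

    std::vector<Match> good;
    for (const Match& m : matches)
        if (m.distance < 3 * min_dist)
            good.push_back(m);
    return good;
}
```

This threshold is a heuristic: if the best descriptor distance is already large (few reliable correspondences), 3×min_dist grows with it, which is one reason the second method degrades on noisy video frames.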