├── README.md
├── VehicheDetect.ipynb
├── camera_cal
│   ├── calibration.p
│   ├── calibration1.jpg
│   ├── calibration10.jpg
│   ├── calibration11.jpg
│   ├── calibration12.jpg
│   ├── calibration13.jpg
│   ├── calibration14.jpg
│   ├── calibration15.jpg
│   ├── calibration16.jpg
│   ├── calibration17.jpg
│   ├── calibration18.jpg
│   ├── calibration19.jpg
│   ├── calibration2.jpg
│   ├── calibration20.jpg
│   ├── calibration3.jpg
│   ├── calibration4.jpg
│   ├── calibration5.jpg
│   ├── calibration6.jpg
│   ├── calibration7.jpg
│   ├── calibration8.jpg
│   └── calibration9.jpg
├── laneline.py
├── output_images
│   ├── .DS_Store
│   ├── save_output_here.txt
│   ├── test1.jpg
│   ├── test2.jpg
│   ├── test3.jpg
│   ├── test4.jpg
│   ├── test5.jpg
│   └── test6.jpg
├── project_video.mp4
├── project_video_proc.mp4
├── readme_img
│   ├── car_track_window.jpg
│   ├── ex.jpg
│   ├── heatmap.jpg
│   ├── hog.jpg
│   ├── image_fin.jpg
│   ├── image_proc.jpg
│   ├── new_car_windows.jpg
│   └── title.gif
└── test_images
    ├── .DS_Store
    ├── test1.jpg
    ├── test2.jpg
    ├── test3.jpg
    ├── test4.jpg
    ├── test5.jpg
    └── test6.jpg

/README.md:
--------------------------------------------------------------------------------
# Vehicle-Detection-and-Tracking
### Udacity Self-Driving Car Engineer Nanodegree. Project: Vehicle Detection and Tracking

This project is the fifth task of the Udacity Self-Driving Car Nanodegree program. The main goal of the project is to create a software pipeline that identifies vehicles in a video from a front-facing camera on a car. Additionally, the [Advanced Lane Line](https://github.com/NikolasEnt/Advanced-Lane-Lines) finding algorithm from the fourth task of the Nanodegree program was added.

![Title .gif animation](readme_img/title.gif)

**Result:** [video](https://youtu.be/waYJjmkRZfw)

For details on this and other projects, see my [website](https://nikolasent.github.io/proj/proj2).

## Content of this repo

- `VehicheDetect.ipynb` - Jupyter notebook with the code for the project
- `laneline.py` - Python program for lane line detection from [project 4](https://github.com/NikolasEnt/Advanced-Lane-Lines)
- `test_images` - a directory with test images
- `camera_cal` - a directory with camera calibration images from [project 4](https://github.com/NikolasEnt/Advanced-Lane-Lines)
- `project_video_proc.mp4` - the result video
- `project_video.mp4` - the original raw video from [Udacity](https://github.com/udacity/CarND-Vehicle-Detection)

**Note:** The repository does not contain any training images. You have to download and unzip the image datasets of vehicles and non-vehicles provided by Udacity and place them in the appropriate directories on your own. See the links under the "Data loading" header in the [VehicheDetect.ipynb](./VehicheDetect.ipynb) notebook.

## Classifier

Features are needed to train a classifier and make predictions on test or real-world images.

The project required building a classifier able to answer whether a given image (a subset of the whole frame) contains a car. To address this task, three types of features were used: HOG (Histogram of Oriented Gradients) shape features, binned color features (color and shape), and color histogram features (color only). This combination of features provides enough information for image classification.

First, an automated approach was applied to tune the HOG parameters (`orientations, pixels_per_cell, cells_per_block`).

Something like:
```Python
import numpy as np
from skopt import gp_minimize
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# cars, notcars and extract_features() are defined earlier in the notebook
space = [(8, 64),  # nbins
         (6, 12),  # orient
         (4, 16),  # pix_per_cell
         (1, 2)]   # cell_per_block
i = 0
def obj(params):
    global i
    nbins, orient, pix_per_cell, cell_per_block = params
    car_features = extract_features(cars[0:len(cars):10], nbins, orient, pix_per_cell, cell_per_block)
    notcar_features = extract_features(notcars[0:len(notcars):10], nbins, orient, pix_per_cell, cell_per_block)
    y = np.hstack((np.ones(len(cars[0:len(cars):10])), np.zeros(len(notcars[0:len(notcars):10]))))
    X = np.vstack((car_features, notcar_features)).astype(np.float64)
    # Fit a per-column scaler
    X_scaler = StandardScaler().fit(X)
    # Apply the scaler to X
    scaled_X = X_scaler.transform(X)
    X_train, X_test, y_train, y_test = train_test_split(scaled_X, y, test_size=0.2, random_state=22)
    svc = LinearSVC()
    svc.fit(X_train, y_train)
    test_acc = svc.score(X_test, y_test)
    print(i, params, test_acc)
    i += 1
    return 1.0 - test_acc

res = gp_minimize(obj, space, n_calls=20, random_state=22)
print("Best score=%.4f" % res.fun)
```

However, the results were not very good: the search converged on high values of the HOG parameters, which make feature extraction very slow while yielding accuracy comparable to much less computationally expensive parameter sets. That is why the HOG parameters, as well as the parameters of the other feature extractors, were fine-tuned manually by trial and error to balance accuracy against computation time.

Here is an example of a training image and its HOG visualization:

![Example image](readme_img/ex.jpg) ![HOG of example image](readme_img/hog.jpg)

Final parameters for feature extraction:

```Python
color_space = 'LUV' # Can be RGB, HSV, LUV, HLS, YUV, YCrCb
orient = 8 # HOG orientations
pix_per_cell = 8 # HOG pixels per cell
cell_per_block = 2 # HOG cells per block
hog_channel = 0 # Can be 0, 1, 2, or "ALL"
spatial_size = (16, 16) # Spatial binning dimensions
hist_bins = 32 # Number of histogram bins
```

Normalization ensures that a classifier's behavior isn't dominated by just a subset of the features and that the training process is as efficient as possible. That is why the feature vectors were normalized with the `StandardScaler()` method from `sklearn`. The data is split into training and testing subsets (80% and 20%). The classifier is a linear SVM, which was found to perform well enough and quite fast for the task. The code chunk under the *Classifier* header implements these operations.


## Sliding window and classifier testing

The basic sliding window algorithm was implemented in the same way as the one presented in Udacity's lectures (see the code chunks under the *Slide window* header). It searches for a car in a desired region of the frame with a desired window size (each subsampled window is rescaled to 64x64 px before being classified by the SVC).

The window size and overlap should be chosen wisely: the window size should match the apparent size of an expected car, which in turn depends on the distance from the camera. These parameters were therefore set to mimic perspective, as the sketch below illustrates.
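A minimal sketch of such a perspective-aware window generator (the function name, band boundaries, and window sizes here are illustrative assumptions, not code taken verbatim from the notebook):

```Python
def slide_window(img_shape, y_start, y_stop, xy_window=(128, 128), overlap=0.5):
    # Generate ((x1, y1), (x2, y2)) corners of overlapping windows in a horizontal band
    step = int(xy_window[0] * (1.0 - overlap))  # Shift between window origins
    windows = []
    for y in range(y_start, y_stop - xy_window[1] + 1, step):
        for x in range(0, img_shape[1] - xy_window[0] + 1, step):
            windows.append(((x, y), (x + xy_window[0], y + xy_window[1])))
    return windows

# Small windows only near the horizon, larger ones closer to the camera
windows = (slide_window((720, 1280), 400, 496, xy_window=(64, 64)) +
           slide_window((720, 1280), 400, 656, xy_window=(128, 128)))
```

Each generated window is then rescaled to 64x64 px and passed to the classifier.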

Here are some sample results for a fixed window size (128x128 px) and overlap on the provided test images:

![Test image 1](output_images/test1.jpg)
![Test image 2](output_images/test2.jpg)
![Test image 3](output_images/test3.jpg)
![Test image 4](output_images/test4.jpg)
![Test image 5](output_images/test5.jpg)
![Test image 6](output_images/test6.jpg)

As we can see in the examples above, the classifier successfully finds cars in the test images. However, there is a false positive, so we will need to apply some kind of filter (such as a heat map). The classifier also failed to find the car in the 3rd image because the car appears too small for this window size, which is why we will need to use multi-scale windows.

### Implemented ideas:

- Images were preprocessed with the undistortion procedure from the [Advanced Lane Line](https://github.com/NikolasEnt/Advanced-Lane-Lines) finding project.

- To increase the classifier's accuracy, the feature extraction parameters were tuned. The data was augmented with flipped images.

- To reduce the number of false positives, a heat map with a threshold was implemented in the same way as suggested in the lectures. For video, the heat map is accumulated over two frames, which reduces the number of outlier false positives (a minimal sketch of this step is shown under *Results and discussion* below).

- To increase performance, the number of analyzed windows should be as small as possible. That is why the search window scans not the whole image but only the areas where a new car can appear, as well as the areas where a car was already detected (car tracking).

*Here is an example of a new car detection ROI:*

![New car detection ROI](readme_img/new_car_windows.jpg)

*And a car tracking ROI:*

![Car tracking ROI](readme_img/car_track_window.jpg)

- Due to perspective, it is important to use different scales of the classifier's window on different parts of the image. So, different ROI window sizes were applied to different areas (implemented in the `frame_proc` function).

- To reduce jitter, the `filt` function applies a simple low-pass filter to the new and previous car box coordinates and sizes (see under the *Frames processing* header), with weight `ALPHA=0.75` on the previous data. This makes car boundaries in the video quite smooth.

- To increase performance, the analysis is skipped on every 2nd frame because we do not expect the detected cars to move very fast. The known car boundaries from the previous frame are used in such cases.

## The pipeline visualization

Areas of interest for tracking detected cars are marked green. Hot windows (windows classified as containing a car) are yellow.

![Image proc](readme_img/image_proc.jpg)

The heat map of the overlapping hot windows:

![Heatmap](readme_img/heatmap.jpg)

The final result with car boundaries and lane detection:

![Final image](readme_img/image_fin.jpg)


## Results and discussion

The pipeline is able to correctly label car areas in video frames. The final video is [here](https://github.com/NikolasEnt/Vehicle-Detection-and-Tracking/blob/master/project_video_proc.mp4). The [Advanced Lane Line](https://github.com/NikolasEnt/Advanced-Lane-Lines) finding algorithm was added for the lane marking.
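The heat-map filtering mentioned above is the main defense against false positives. Here is a minimal sketch of that step (function and variable names are illustrative; the actual implementation lives in the notebook):

```Python
import numpy as np
from scipy.ndimage import label

def filter_hot_windows(hot_windows, frame_shape, threshold=2):
    heat = np.zeros(frame_shape[:2], dtype=np.float32)
    for (x1, y1), (x2, y2) in hot_windows:
        heat[y1:y2, x1:x2] += 1.0            # Each positive window adds heat
    heat[heat <= threshold] = 0              # Reject weakly supported regions
    labels, n_cars = label(heat)             # Connected regions are car candidates
    boxes = []
    for car in range(1, n_cars + 1):
        ys, xs = np.nonzero(labels == car)   # Pixels belonging to one candidate
        boxes.append(((xs.min(), ys.min()), (xs.max(), ys.max())))
    return boxes
```

Accumulating the heat map over two consecutive frames before thresholding, as done in the project, additionally suppresses one-frame outliers.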

- Of course, the algorithm may fail in difficult lighting conditions, which could be partly resolved by improving the classifier.

- It is possible to improve the classifier by additional data augmentation, hard negative mining, classifier parameter tuning, etc.

- The algorithm may have some problems when one car overlaps another. To resolve this, one may introduce a long-term memory of car positions and a predictive algorithm that estimates where an occluded car is likely to be and where it is worth looking for it.

- To eliminate false positives in areas outside the road, one could integrate the results of the Advanced Lane Line finding project more deeply and determine the wide ROI on the whole frame from the road boundaries. Unfortunately, this was not properly implemented (the ROI is just hard-coded, which is enough for the project but not a good implementation for a real-world application) due to time limitations.

- The pipeline does not run in real time (about 4 fps with lane line detection, which on its own performs at 9 fps). Because lane line detection is comparatively fast, one can further optimize the number of features and the feature extraction parameters, as well as the number of analyzed windows, to increase the rate.
--------------------------------------------------------------------------------
/camera_cal/calibration.p:
--------------------------------------------------------------------------------
(dp0
S'mtx'
p1
cnumpy.core.multiarray
_reconstruct
p2
(cnumpy
ndarray
p3
(I0
tp4
S'b'
p5
tp6
Rp7
(I1
(I3
I3
tp8
cnumpy
dtype
p9
(S'f8'
p10
I0
I1
tp11
Rp12
(I3
S'<'
p13
NNNI-1
I-1
I0
tp14
bI00
S'\n\xb4N\x0c\xdc\x07\x92@\x00\x00\x00\x00\x00\x00\x00\x00CB\xf0|\xaa\xed\x84@\x00\x00\x00\x00\x00\x00\x00\x00N\x85\xf5&\x1d\xf0\x91@\x07\x9aL\xc4\x93\x1ax@\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf0?'
p15
tp16
bsS'dist'
p17
g2
(g3
(I0
tp18
g5
tp19
Rp20
(I1
(I1
I5
tp21
g12
I00
S'v\xd7R\xee\xfd\xd9\xce\xbfD\x8d\xa6fx%\xab\xbf\xd9KI|\xf4\xf7R\xbf\x04\xf2^ \xc7\xc4 \xbfxn\x89\x17EE\x9b?'
p22
tp23
bs.
--------------------------------------------------------------------------------
/camera_cal/calibration1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration1.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration10.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration10.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration11.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration11.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration12.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration12.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration13.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration13.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration14.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration14.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration15.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration15.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration16.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration16.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration17.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration17.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration18.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration18.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration19.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration19.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration2.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration20.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration20.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration3.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration4.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration5.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration6.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration7.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration7.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration8.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration8.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration9.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/camera_cal/calibration9.jpg
--------------------------------------------------------------------------------
/laneline.py:
--------------------------------------------------------------------------------
import cv2
import glob
import pickle
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.metrics import mean_squared_error

x_cor = 9 # Number of corners to find
y_cor = 6
# Prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((y_cor*x_cor,3), np.float32)
objp[:,:2] = np.mgrid[0:x_cor, 0:y_cor].T.reshape(-1,2)

def camera_cal():
    # Arrays to store object points and image points from all the images.
    objpoints = [] # 3d points in real world space
    imgpoints = [] # 2d points in image plane.
    images = glob.glob('camera_cal/calibration*.jpg') # Make a list of paths to calibration images
    # Step through the list and search for chessboard corners
    corners_not_found = [] # Calibration images in which opencv failed to find corners
    for idx, fname in enumerate(images):
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Convert to grayscale
        ret, corners = cv2.findChessboardCorners(gray, (x_cor,y_cor), None) # Find the chessboard corners
        # If found, add object points, image points
        if ret == True:
            objpoints.append(objp)
            imgpoints.append(corners)
        else:
            corners_not_found.append(fname)
    print('Corners were found on', len(imgpoints), 'out of', len(images), 'images, that is', len(imgpoints)*100.0/len(images), '% of calibration images')
    img_size = (img.shape[1], img.shape[0])
    # Do camera calibration given object points and image points
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size,None,None)
    return mtx, dist

mtx, dist = camera_cal()

def undistort(img):
    return cv2.undistort(img, mtx, dist, None, mtx)

def eq_Hist(img): # Histogram normalization
    img[:, :, 0] = cv2.equalizeHist(img[:, :, 0])
    img[:, :, 1] = cv2.equalizeHist(img[:, :, 1])
    img[:, :, 2] = cv2.equalizeHist(img[:, :, 2])
    return img

# Sobel
def sobel_img(img, thresh_min = 25, thresh_max = 255, sobel_kernel = 11):
    sobelx = np.absolute(cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel))
    sobely = np.absolute(cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel))
    scaled_sobelx = np.uint16(255*sobelx/np.max(sobelx))
    scaled_sobely = np.uint16(255*sobely/np.max(sobely))
    sobel_sum = scaled_sobelx+0.2*scaled_sobely
    scaled_sobel_sum = np.uint8(255*sobel_sum/np.max(sobel_sum))
    sum_binary = np.zeros_like(scaled_sobel_sum)
    sum_binary[(scaled_sobel_sum >= thresh_min) & (scaled_sobel_sum <= thresh_max)] = 1
    return sum_binary

# Sobel magnitude
def sobel_mag_img(img, thresh_min = 25, thresh_max = 255, sobel_kernel = 11):
    sobelx = np.absolute(cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel))
    sobely = np.absolute(cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel))
    gradmag = np.sqrt(sobelx**2 + sobely**2)
    scaled_gradmag = np.uint8(255*gradmag/np.max(gradmag))
    gradmag_binary = np.zeros_like(scaled_gradmag)
    gradmag_binary[(scaled_gradmag >= thresh_min) & (scaled_gradmag <= thresh_max)] = 1
    return gradmag_binary

# Sobel direction
def sobel_dir_img(img, thresh_min = 0.0, thresh_max = 1.5, sobel_kernel = 11):
    sobelx = np.absolute(cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel))
    sobely = np.absolute(cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel))
    graddir = np.arctan2(sobely, sobelx)
    graddir_binary = np.zeros_like(graddir)
    graddir_binary[(graddir >= thresh_min) & (graddir <= thresh_max)] = 1
    return graddir_binary

# Binary red channel threshold
def red_thres(img, thresh_min = 25, thresh_max = 255):
    red = img[:,:,2]
    red_binary = np.zeros_like(red)
    red_binary[(red >= thresh_min) & (red <= thresh_max)] = 1
    return red_binary

# Binary saturation channel threshold
def s_thres(img, thresh_min = 25, thresh_max = 255):
    hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
    s_channel = hls[:,:,2]
    s_binary = np.zeros_like(s_channel)
    s_binary[(s_channel > thresh_min) & (s_channel <= thresh_max)] = 1
    return s_binary

# Return saturation channel
def s_hls(img):
    hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
    return hls[:,:,2]


IMAGE_H = 223
IMAGE_W = 1280

# Sharpen image
def sharpen_img(img):
    gb = cv2.GaussianBlur(img, (5,5), 20.0)
    return cv2.addWeighted(img, 2, gb, -1, 0)

# Compute linear image transformation img*s+m
def lin_img(img, s=1.0, m=0.0):
    img2 = cv2.multiply(img, np.array([s]))
    return cv2.add(img2, np.array([m]))

# Change image contrast; s>1 - increase
def contr_img(img, s=1.0):
    m = 127.0*(1.0-s)
    return lin_img(img, s, m)

# Create perspective image transformation matrices
def create_M():
    src = np.float32([[0, 673], [1207, 673], [0, 450], [1280, 450]])
    dst = np.float32([[569, 223], [711, 223], [0, 0], [1280, 0]])
    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)
    return M, Minv

# Main image transformation routine to get a warped image
def transform(img, M):
    undist = undistort(img)
    img_size = (1280, 223)
    warped = cv2.warpPerspective(undist, M, img_size)
    warped = sharpen_img(warped)
    warped = contr_img(warped, 1.1)
    return warped

# Show original and warped image side by side
def show_warped(img, M):
    f, (plot1, plot2) = plt.subplots(1, 2, figsize=(9, 3))
    plot1.imshow(cv2.cvtColor(undistort(img), cv2.COLOR_BGR2RGB))
    plot1.set_title('Undistorted', fontsize=20)
    plot2.imshow(cv2.cvtColor(transform(img, M), cv2.COLOR_BGR2RGB))
    plot2.set_title('Warped', fontsize=20)

# Show one image
def show_img(img):
    if len(img.shape) == 3:
        plt.figure()
        plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    else:
        plt.figure()
        plt.imshow(img, cmap='gray')

M, Minv = create_M()

# Calculate coefficients of a polynomial in y+h coordinates, i.e. f(y) -> f(y+h)
def pol_shift(pol, h):
    pol_ord = len(pol)-1 # Determine the degree of the polynomial
    if pol_ord == 3:
        pol0 = pol[0]
        pol1 = pol[1] + 3.0*pol[0]*h
        pol2 = pol[2] + 3.0*pol[0]*h*h + 2.0*pol[1]*h
        pol3 = pol[3] + pol[0]*h*h*h + pol[1]*h*h + pol[2]*h
        return(np.array([pol0, pol1, pol2, pol3]))
    if pol_ord == 2:
        pol0 = pol[0]
        pol1 = pol[1] + 2.0*pol[0]*h
        pol2 = pol[2] + pol[0]*h*h+pol[1]*h
        return(np.array([pol0, pol1, pol2]))
    if pol_ord == 1:
        pol0 = pol[0]
        pol1 = pol[1] + pol[0]*h
        return(np.array([pol0, pol1]))


# Calculate the derivative of a polynomial pol at a point x
def pol_d(pol, x):
    pol_ord = len(pol)-1
    if pol_ord == 3:
        return 3.0*pol[0]*x*x+2.0*pol[1]*x+pol[2]
    if pol_ord == 2:
        return 2.0*pol[0]*x+pol[1]
    if pol_ord == 1:
        return pol[0]#*np.ones(len(np.array(x)))

# Calculate the second derivative of a polynomial pol at a point x
def pol_dd(pol, x):
    pol_ord = len(pol)-1
    if pol_ord == 3:
        return 6.0*pol[0]*x+2.0*pol[1]
    if pol_ord == 2:
        return 2.0*pol[0]
    if pol_ord == 1:
        return 0.0

# Calculate the polynomial value at a point x
def pol_calc(pol, x):
    pol_f = np.poly1d(pol)
    return(pol_f(x))

xm_in_px = 3.675 / 85 # Lane width (12 ft in m) is ~85 px on the image
ym_in_px = 3.048 / 24 # Dashed line length (10 ft in m) is ~24 px on the image

def px_to_m(px):
    return xm_in_px*px

# Calculate the offset from the lane center
def lane_offset(left, right):
    offset = 1280/2.0-(pol_calc(left, 1.0)+ pol_calc(right, 1.0))/2.0
    return px_to_m(offset)

# Calculate the radius of curvature
MAX_RADIUS = 10000
def r_curv(pol, y):
    if len(pol) == 2: # If the polynomial is a linear function
        return MAX_RADIUS
    else:
        y_pol = np.linspace(0, 1, num=EQUID_POINTS)
        x_pol = pol_calc(pol, y_pol)*xm_in_px
        y_pol = y_pol*IMAGE_H*ym_in_px
        pol = np.polyfit(y_pol, x_pol, len(pol)-1)
        d_y = pol_d(pol, y)
        dd_y = pol_dd(pol, y)
        r = ((np.sqrt(1+d_y**2))**3)/abs(dd_y)
        if r > MAX_RADIUS:
            r = MAX_RADIUS
        return r

def lane_curv(left, right):
    l = r_curv(left, 1.0)
    r = r_curv(right, 1.0)
    if l < MAX_RADIUS and r < MAX_RADIUS:
        return (r_curv(left, 1.0)+r_curv(right, 1.0))/2.0
    else:
        if l < MAX_RADIUS:
            return l
        if r < MAX_RADIUS:
            return r
        return MAX_RADIUS

# Calculate an approximated equidistant to a parabola
EQUID_POINTS = 25 # Number of points to use for the equidistant approximation
def equidistant(pol, d, max_l = 1, plot = False):
    y_pol = np.linspace(0, max_l, num=EQUID_POINTS)
    x_pol = pol_calc(pol, y_pol)
    y_pol *= IMAGE_H # Convert y coordinates back to the [0..223] scale
    x_m = []
    y_m = []
    k_m = []
    for i in range(len(x_pol)-1):
        x_m.append((x_pol[i+1]-x_pol[i])/2.0+x_pol[i]) # Calculate midpoints between given points
        y_m.append((y_pol[i+1]-y_pol[i])/2.0+y_pol[i])
        if x_pol[i+1] == x_pol[i]:
            k_m.append(1e8) # A very big number
        else:
            k_m.append(-(y_pol[i+1]-y_pol[i])/(x_pol[i+1]-x_pol[i])) # Slope of perpendicular lines
    x_m = np.array(x_m)
    y_m = np.array(y_m)
    k_m = np.array(k_m)
    # Calculate equidistant points
    y_eq = d*np.sqrt(1.0/(1+k_m**2))
    x_eq = np.zeros_like(y_eq)
    if d >= 0:
        for i in range(len(x_m)):
            if k_m[i] < 0:
                y_eq[i] = y_m[i]-abs(y_eq[i])
            else:
                y_eq[i] = y_m[i]+abs(y_eq[i])
            x_eq[i] = (x_m[i]-k_m[i]*y_m[i])+k_m[i]*y_eq[i]
    else:
        for i in range(len(x_m)):
            if k_m[i] < 0:
                y_eq[i] = y_m[i]+abs(y_eq[i])
            else:
                y_eq[i] = y_m[i]-abs(y_eq[i])
            x_eq[i] = (x_m[i]-k_m[i]*y_m[i])+k_m[i]*y_eq[i]
    y_eq /= IMAGE_H # Convert y coordinates back to the [0..1] scale
    y_pol /= IMAGE_H
    y_m /= IMAGE_H
    pol_eq = np.polyfit(y_eq, x_eq, len(pol)-1) # Fit the equidistant with a polynomial
    if plot:
        plt.plot(x_pol, y_pol, color='red', linewidth=1, label = 'Original line') # Original line
        plt.plot(x_eq, y_eq, color='green', linewidth=1, label = 'Equidistant') # Equidistant
        plt.plot(pol_calc(pol_eq, y_pol), y_pol, color='blue',
                 linewidth=1, label = 'Approximation') # Approximation
        plt.legend()
        for i in range(len(x_m)):
            plt.plot([x_m[i],x_eq[i]], [y_m[i],y_eq[i]], color='black', linewidth=1) # Draw connection lines
        plt.savefig('readme_img/equid.jpg')
    return pol_eq

DEV_POL = 2 # Max mean squared error of the approximation
MSE_DEV = 1.1 # Minimum mean squared error ratio to consider a higher order of the polynomial
def best_pol_ord(x, y):
    pol1 = np.polyfit(y,x,1)
    pred1 = pol_calc(pol1, y)
    mse1 = mean_squared_error(x, pred1)
    if mse1 < DEV_POL:
        return pol1, mse1
    pol2 = np.polyfit(y,x,2)
    pred2 = pol_calc(pol2, y)
    mse2 = mean_squared_error(x, pred2)
    if mse2 < DEV_POL or mse1/mse2 < MSE_DEV:
        return pol2, mse2
    else:
        pol3 = np.polyfit(y,x,3)
        pred3 = pol_calc(pol3, y)
        mse3 = mean_squared_error(x, pred3)
        if mse2/mse3 < MSE_DEV:
            return pol2, mse2
        else:
            return pol3, mse3

# Smooth polynomial functions of different degrees
def smooth_dif_ord(pol_p, x, y, new_ord):
    x_p = pol_calc(pol_p, y)
    x_new = (x+x_p)/2.0
    return np.polyfit(y, x_new, new_ord)

# Calculate the threshold for the left line
def thres_l_calc(sens):
    thres = -0.0045*sens**2+1.7581*sens-115.0
    if thres < 25*(382.0-sens)/382.0+5:
        thres = 25*(382.0-sens)/382.0+5
    return thres

# Calculate the threshold for the right line
def thres_r_calc(sens):
    thres = -0.0411*sens**2+9.1708*sens-430.0
    if sens < 210:
        if thres < sens/6:
            thres = sens/6
    else:
        if thres < 20:
            thres = 20
    return thres

WINDOW_SIZE = 15 # Half of the sensor span
DEV = 7 # Maximum of the point deviation from the sensor center
SPEED = 2 / IMAGE_H # Pixels shift per frame
POL_ORD = 2 # Default polynomial order
RANGE = 0.0 # Fraction of the image to skip

def find(img, left=True, p_ord=POL_ORD, pol = np.zeros(POL_ORD+1), max_n = 0):
    x_pos = []
    y_pos = []
    max_l = img.shape[0] # Number of lines in the img
    for i in range(max_l-int(max_l*RANGE)):
        y = max_l-i # Line number
        y_01 = y / float(max_l) # y in [0..1] scale
        if abs(pol[-1]) > 0: # If it is not a still image or the first video frame
            if y_01 >= max_n + SPEED: # If we can use pol to find the center of the virtual sensor from the previous frame
                cent = int(pol_calc(pol, y_01-SPEED))
                if y == max_l:
                    if left:
                        cent = 605
                    else:
                        cent = 690
            else: # Prolong the pol tangentially
                k = pol_d(pol, max_n)
                b = pol_calc(pol, max_n)-k*max_n
                cent = int(k*y_01+b)
            if cent > 1280-WINDOW_SIZE:
                cent = 1280-WINDOW_SIZE
            if cent < WINDOW_SIZE:
                cent = WINDOW_SIZE
        else: # If it is a still image
            if len(x_pos) > 0: # If there are some points detected
                cent = x_pos[-1] # Use the previous point as a sensor center
            else: # Initial guess on the line position
                if left:
                    cent = 605
                else:
                    cent = 690
        if left: # Subsample image
            sens = 0.5*s_hls(img[max_l-1-i:max_l-i,cent-WINDOW_SIZE:cent+WINDOW_SIZE,:])\
                   +img[max_l-1-i:max_l-i,cent-WINDOW_SIZE:cent+WINDOW_SIZE,2]
        else:
            sens = img[max_l-1-i:max_l-i,cent-WINDOW_SIZE:cent+WINDOW_SIZE,2]
        if len(sens[0,:]) < WINDOW_SIZE: # If we are out of the image
            break
        x_max = max(sens[0,:]) # Find the maximal value on the sensor
        sens_mean = np.mean(sens[0,:])
        # Get the threshold
        if left:
            loc_thres = thres_l_calc(sens_mean)
            loc_dev = DEV
        else:
            loc_thres = thres_r_calc(sens_mean)
            loc_dev = DEV
        if len(x_pos) == 0:
            loc_dev = WINDOW_SIZE
        if (x_max-sens_mean) > loc_thres and (x_max > 100 or left):
            if left:
                x = list(reversed(sens[0,:])).index(x_max)
                x = cent+WINDOW_SIZE-x
            else:
                x = list(sens[0,:]).index(x_max)
                x = cent-WINDOW_SIZE+x
            # Count nonzero sensor pixels; len(...) added for Python 3 correctness
            if x-1 < 569.0*y_01 or x+1 > 569.0*y_01+711 or len(np.nonzero(sens[0,:])[0]) < WINDOW_SIZE: # If the sensor touches the black triangle
                break # We are done
            if abs(pol[-1]) < 1e-4: # If no polynomial is provided
                x_pos.append(x)
                y_pos.append(y_01)
            else:
                if abs(x-cent) < loc_dev:#*14.206*r_curv(pol, max_l)**-0.2869:
                    x_pos.append(x)
                    y_pos.append(y_01)
    if len(x_pos) > 1:
        return x_pos, y_pos
    else:
        return [0], [0.0]


RANGE = 0.0
def get_lane(img, plot=False):
    warp = transform(img, M)
    img = undistort(img)
    ploty = np.linspace(0, 1, num=warp.shape[0])
    x2, y2 = find(warp)
    x, y = find(warp, False)
    right_fitx = pol_calc(best_pol_ord(x,y)[0], ploty)
    left_fitx = pol_calc(best_pol_ord(x2,y2)[0], ploty)
    y2 = np.int16(np.array(y2)*223.0) # Convert into the [0..223] scale
    y = np.int16(np.array(y)*223.0)
    if plot:
        for i in range(len(x)): # Plot points
            cv2.circle(warp, (x[i], y[i]), 1, (255,50,255))
        for i in range(len(x2)):
            cv2.circle(warp, (x2[i], y2[i]), 1, (255,50,250))
        show_img(warp)
        plt.axis('off')
        plt.plot(left_fitx, ploty*IMAGE_H, color='green', linewidth=1)
        plt.plot(right_fitx, ploty*IMAGE_H, color='green', linewidth=1)
        cv2.imwrite('img.jpg', warp)
    return img, left_fitx, right_fitx, ploty*IMAGE_H

def draw_lane_img_p(img_path):
    return cv2.imread(img_path)

def draw_lane(img, video=False):
    if video:
        img, left_fitx, right_fitx, ploty, left, right = get_lane_video(img)
    else:
        img, left_fitx, right_fitx, ploty = get_lane(img, False)
    warp_zero = np.zeros((IMAGE_H,IMAGE_W)).astype(np.uint8)
    color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
    # Recast the x and y points into usable format for cv2.fillPoly()
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right))
    # Draw the lane onto the warped blank image
    cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
    # Warp the blank back to original image space using inverse perspective matrix (Minv)
    newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
    # Combine the result with the original image
    result = cv2.addWeighted(img, 1.0, newwarp, 0.6, 0)
    if video:
        # Add text information on the frame
        font = cv2.FONT_HERSHEY_SIMPLEX
        text_pos = 'Pos of the car: '+str(np.round(lane_offset(left, right),2))+ ' m'
        radius = np.round(lane_curv(left, right),2)
        if radius >= MAX_RADIUS:
            radius = 'Inf'
        else:
            radius = str(radius)
        text_rad = 'Radius: '+radius+ ' m'
        cv2.putText(result,text_pos,(10,25), font, 1,(255,255,255),2)
        cv2.putText(result,text_rad,(10,75), font, 1,(255,255,255),2)
    return(result)


right_fit_p = np.zeros(POL_ORD+1)
left_fit_p = np.zeros(POL_ORD+1)
r_len = 0
l_len = 0
lane_w_p = 90

MIN = 60 # Minimal line separation (in px)
MAX = 95 # Maximal line separation (in px)
MIN_POINTS = 10 # Minimal points to consider a line
MAX_N = 5 # Maximal frames without a detected line before using the previous frame
n_count = 0 # Frame counter
r_n = 0 # Number of frames with unsuccessful line detection
l_n = 0

def get_lane_video(img):
    global right_fit_p, left_fit_p, r_len, l_len, n_count, r_n, l_n
    sw = False
    warp = transform(img, M)
    img = undistort(img)
    if l_n < MAX_N and n_count > 0:
        x, y = find(warp, pol = left_fit_p, max_n = l_len)
    else:
        x, y = find(warp)
    if len(x) > MIN_POINTS:
        left_fit, mse_l = best_pol_ord(x,y)
        if mse_l > DEV_POL*9 and n_count > 0:
            left_fit = left_fit_p
            l_n += 1
        else:
            l_n /= 2
    else:
        left_fit = left_fit_p
        l_n += 1
    if r_n < MAX_N and n_count > 0:
        x2, y2 = find(warp, False, pol = right_fit_p, max_n = r_len)
    else:
        x2, y2 = find(warp, False)
    if len(x2) > MIN_POINTS:
        right_fit, mse_r = best_pol_ord(x2, y2)
        if mse_r > DEV_POL*9 and n_count > 0:
            right_fit = right_fit_p
            r_n += 1
        else:
            r_n /= 2
    else:
        right_fit = right_fit_p
        r_n += 1
    if n_count > 0: # If not the first video frame
        # Apply filter
        if len(left_fit_p) == len(left_fit): # If the new and previous polynomials have the same order
            left_fit = pol_shift(left_fit_p, -SPEED)*(1.0-len(x)/((1.0-RANGE)*IMAGE_H))+left_fit*(len(x)/((1.0-RANGE)*IMAGE_H))
        else:
            left_fit = smooth_dif_ord(left_fit_p, x, y, len(left_fit)-1)
        l_len = y[-1]
        if len(right_fit_p) == len(right_fit):
            right_fit = pol_shift(right_fit_p, -SPEED)*(1.0-len(x2)/((1.0-RANGE)*IMAGE_H))+right_fit*(len(x2)/((1.0-RANGE)*IMAGE_H))
        else:
            right_fit = smooth_dif_ord(right_fit_p, x2, y2, len(right_fit)-1)
        r_len = y2[-1]

    if len(x) > MIN_POINTS and len(x2) <= MIN_POINTS: # If we have only the left line
        lane_w = pol_calc(right_fit_p, 1.0)-pol_calc(left_fit_p, 1.0)
        right_fit = smooth_dif_ord(right_fit_p, pol_calc(equidistant(left_fit, lane_w, max_l=l_len), y),
                                   y, len(left_fit)-1)
        r_len = l_len
        r_n /= 2
    if len(x2) > MIN_POINTS and len(x) <= MIN_POINTS: # If we have only the right line
        lane_w = pol_calc(right_fit_p, 1.0)-pol_calc(left_fit_p, 1.0)
        #print(lane_w)
        left_fit = smooth_dif_ord(left_fit_p, pol_calc(equidistant(right_fit, -lane_w, max_l=r_len), y2),
                                  y2, len(right_fit)-1)
        l_len = r_len
        l_n /= 2
    if (l_n < MAX_N and r_n < MAX_N):
        max_y = max(RANGE, l_len, r_len)
    else:
        max_y = 1.0#max(RANGE, l_len, r_len)
        sw = True
    d1 = pol_calc(right_fit, 1.0)-pol_calc(left_fit, 1.0)
    dm = pol_calc(right_fit, max_y)-pol_calc(left_fit, max_y)
    if (d1 > MAX or d1 < 60 or dm < 0):
        left_fit = left_fit_p
        right_fit = right_fit_p
        l_n += 1
        r_n += 1
    ploty = np.linspace(max_y, 1, num=IMAGE_H)
    left_fitx = pol_calc(left_fit, ploty)
    right_fitx = pol_calc(right_fit, ploty)
    right_fit_p = np.copy(right_fit)
    left_fit_p = np.copy(left_fit)
    n_count += 1
    return img, left_fitx, right_fitx, ploty*223.0, left_fit, right_fit

def init_params(ran):
    global right_fit_p, left_fit_p, n_count, RANGE, MIN_POINTS
    right_fit_p = np.zeros(POL_ORD+1)
    left_fit_p = np.zeros(POL_ORD+1)
    n_count = 0
    RANGE = ran
    MIN_POINTS = 25-15*ran

--------------------------------------------------------------------------------
/output_images/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/output_images/.DS_Store
--------------------------------------------------------------------------------
/output_images/save_output_here.txt:
--------------------------------------------------------------------------------
Please save your output images to this folder and include a description in your README of what each image shows.
--------------------------------------------------------------------------------
/output_images/test1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/output_images/test1.jpg
--------------------------------------------------------------------------------
/output_images/test2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/output_images/test2.jpg
--------------------------------------------------------------------------------
/output_images/test3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/output_images/test3.jpg
--------------------------------------------------------------------------------
/output_images/test4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/output_images/test4.jpg
--------------------------------------------------------------------------------
/output_images/test5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/output_images/test5.jpg
--------------------------------------------------------------------------------
/output_images/test6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/output_images/test6.jpg
--------------------------------------------------------------------------------
/project_video.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/project_video.mp4
--------------------------------------------------------------------------------
/project_video_proc.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/project_video_proc.mp4
--------------------------------------------------------------------------------
/readme_img/car_track_window.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/readme_img/car_track_window.jpg
--------------------------------------------------------------------------------
/readme_img/ex.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/readme_img/ex.jpg
--------------------------------------------------------------------------------
/readme_img/heatmap.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/readme_img/heatmap.jpg
--------------------------------------------------------------------------------
/readme_img/hog.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/readme_img/hog.jpg
--------------------------------------------------------------------------------
/readme_img/image_fin.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/readme_img/image_fin.jpg
--------------------------------------------------------------------------------
/readme_img/image_proc.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/readme_img/image_proc.jpg
--------------------------------------------------------------------------------
/readme_img/new_car_windows.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/readme_img/new_car_windows.jpg
--------------------------------------------------------------------------------
/readme_img/title.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/readme_img/title.gif
--------------------------------------------------------------------------------
/test_images/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/test_images/.DS_Store
--------------------------------------------------------------------------------
/test_images/test1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/test_images/test1.jpg
--------------------------------------------------------------------------------
/test_images/test2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/test_images/test2.jpg
--------------------------------------------------------------------------------
/test_images/test3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/test_images/test3.jpg
--------------------------------------------------------------------------------
/test_images/test4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/test_images/test4.jpg
--------------------------------------------------------------------------------
/test_images/test5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/test_images/test5.jpg
--------------------------------------------------------------------------------
/test_images/test6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NikolasEnt/Vehicle-Detection-and-Tracking/3d42dfdb1ee8f5202cdfb7e25e392faed98b0917/test_images/test6.jpg
--------------------------------------------------------------------------------