├── README.md
├── camera_cal
│   ├── calibration1.jpg
│   ├── calibration10.jpg
│   ├── calibration11.jpg
│   ├── calibration12.jpg
│   ├── calibration13.jpg
│   ├── calibration14.jpg
│   ├── calibration15.jpg
│   ├── calibration16.jpg
│   ├── calibration17.jpg
│   ├── calibration18.jpg
│   ├── calibration19.jpg
│   ├── calibration2.jpg
│   ├── calibration20.jpg
│   ├── calibration3.jpg
│   ├── calibration4.jpg
│   ├── calibration5.jpg
│   ├── calibration6.jpg
│   ├── calibration7.jpg
│   ├── calibration8.jpg
│   └── calibration9.jpg
├── challenge_video.mp4
├── example_writeup.pdf
├── examples
│   ├── .ipynb_checkpoints
│   │   └── example-checkpoint.ipynb
│   ├── binary_combo_example.jpg
│   ├── color_fit_lines.jpg
│   ├── example.ipynb
│   ├── example.py
│   ├── example_output.jpg
│   ├── undistort_output.png
│   └── warped_straight_lines.jpg
├── harder_challenge_video.mp4
├── images
│   ├── binarizer.png
│   ├── binary_input.png
│   ├── camera_calibrator.png
│   ├── histogram.png
│   ├── image_output.png
│   ├── lane_pixels.png
│   ├── pipeline.png
│   ├── sample_ouput.png
│   ├── undistorted.png
│   ├── video_output.gif
│   └── warp.png
├── latex
│   └── pipeline.tex
├── output_images
│   ├── .DS_Store
│   ├── camera_cal
│   │   ├── calibration1.jpg
│   │   ├── calibration10.jpg
│   │   ├── calibration11.jpg
│   │   ├── calibration12.jpg
│   │   ├── calibration13.jpg
│   │   ├── calibration14.jpg
│   │   ├── calibration15.jpg
│   │   ├── calibration16.jpg
│   │   ├── calibration17.jpg
│   │   ├── calibration18.jpg
│   │   ├── calibration19.jpg
│   │   ├── calibration2.jpg
│   │   ├── calibration20.jpg
│   │   ├── calibration3.jpg
│   │   ├── calibration4.jpg
│   │   ├── calibration5.jpg
│   │   ├── calibration6.jpg
│   │   ├── calibration7.jpg
│   │   ├── calibration8.jpg
│   │   └── calibration9.jpg
│   └── save_output_here.txt
├── processed_project_video.mp4
├── project_video.mp4
├── src
│   ├── Advanced_Lane_Line_Finding.ipynb
│   └── advanced_lane_finding.py
├── test_images
│   ├── straight_lines1.jpg
│   ├── straight_lines2.jpg
│   ├── test1.jpg
│   ├── test2.jpg
│   ├── test3.jpg
│   ├── test4.jpg
│   ├── test5.jpg
│   └── test6.jpg
└── writeup_template.md
/README.md:
--------------------------------------------------------------------------------
1 | # Self-Driving Car Engineer Nanodegree
2 | ## Computer Vision: Advanced Lane Finding
3 |
4 | ### Overview
5 |
6 | The objective of this project is to identify lane lines using traditional computer vision techniques. In order to achieve this, we have designed a computer vision pipeline as depicted below.
7 |
8 |
9 |
10 |
11 |
12 | The input to our pipeline is an image or a video clip. Inputs pass through the various pipeline stages (discussed later in this document) to produce annotated images and videos such as those shown below.
13 |
14 | Image | Video
15 | ------------|---------------
16 |  | 
17 |
18 | Next, we describe each pipeline stage, starting with the Camera Calibrator. Sample input and output of the Camera Calibrator stage are given below.
19 |
20 | ### Camera Calibrator
21 |
22 | Camera calibration logic is encapsulated in the **`CameraCalibrator`** class in the **`advanced_lane_finding.py`** module. The class's constructor takes the following arguments.
23 |
24 | 1. A list of camera images to use for calibration (usually chessboard images)
25 | 2. Number of corners in the X direction
26 | 3. Number of corners in the Y direction
27 | 4. A boolean flag; if it is `True`, we run the calibration and store the resulting calibration data
28 |
29 | The public method of the **`CameraCalibrator`** class is **`undistort`**; it takes a distorted image as input and produces an undistorted image.
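
Underneath, distortion correction inverts a radial distortion model. As a minimal, self-contained illustration of the property `undistort` relies on (the coefficient `k1` and the fixed-point inversion below are illustrative, not the project's actual calibration; OpenCV additionally models tangential and higher-order radial terms):

```python
import numpy as np

def distort(points, k1):
    # Apply one-term radial distortion in normalized camera coordinates:
    # x_d = x * (1 + k1 * r^2), where r^2 = x^2 + y^2
    r2 = (points ** 2).sum(axis=1, keepdims=True)
    return points * (1 + k1 * r2)

def undistort_points(distorted, k1, iters=20):
    # Invert the model by fixed-point iteration: x <- x_d / (1 + k1 * r(x)^2)
    x = distorted.copy()
    for _ in range(iters):
        r2 = (x ** 2).sum(axis=1, keepdims=True)
        x = distorted / (1 + k1 * r2)
    return x
```

Round-tripping a point through `distort` and `undistort_points` recovers the original point, which is exactly what distortion correction does for every pixel of an image.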
30 |
31 |
32 |
33 |
34 |
35 | The following image shows a typical road image before and after applying distortion correction.
36 |
37 |
38 |
39 |
40 |
41 | ### Warp Transformer
42 |
43 | The second step of the lane-finding pipeline is the perspective transformation step. In computer vision, perspective is the phenomenon whereby objects appear smaller the further away they are from the viewpoint.
44 |
45 | A perspective transform maps the points in a given image to different, desired image points with a new perspective. In this project we use a bird's-eye view transform that allows us to view a lane from above; this is useful for calculating the lane curvature in step 4.
46 |
47 | The warp operation is encapsulated in the **`PerspectiveTransformer`** class of the **`advanced_lane_finding.py`** module located in the **`$PROJECT_HOME/src`** folder. To create an instance of **`PerspectiveTransformer`**, we need to provide four source points and four destination points. To make the lane lines clearly visible, we selected the following source and destination points.
48 |
49 | |Source Points | Destination Points|
50 | |--------------|-------------------|
51 | |(253, 697) | (303, 697) |
52 | |(585, 456) | (303, 0) |
53 | |(700, 456) | (1011, 0) |
54 | |(1061, 690) | (1011, 690) |
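
As a sanity check on these point pairs, the homography that `cv2.getPerspectiveTransform` computes from four correspondences can be reproduced with plain NumPy by solving the standard 8×8 direct-linear-transform system (a sketch, independent of the project code):

```python
import numpy as np

# Source and destination points from the table above
SRC = [(253, 697), (585, 456), (700, 456), (1061, 690)]
DST = [(303, 697), (303, 0), (1011, 0), (1011, 690)]

def homography(src, dst):
    # Solve for H (with h33 fixed to 1) from four correspondences:
    # for each (x, y) -> (u, v): u = (h11 x + h12 y + h13) / (h31 x + h32 y + 1), etc.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    # Map a pixel through the homography (homogeneous divide)
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

H = homography(SRC, DST)
```

Mapping each source point through `H` lands exactly on its destination point, mirroring what `cv2.warpPerspective` does per pixel.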
55 |
56 |
57 | I verified the perspective transformation by transforming an image (**`../output_images/undistorted_test_images/straight_lines2.jpg`**) using the source and destination points above, as shown below.
58 |
59 |
60 |
61 |
62 |
63 | ### Binarizer
64 |
65 | Correctly identifying lane line pixels is one of the main tasks of this project. To identify lane lines, we used three main techniques:
66 |
67 | 1. Sobel operation in the X direction
68 | 2. Color thresholding on the S channel of the HLS color space
69 | 3. Color thresholding on the L channel of the HLS color space
70 |
71 | These three operations are encapsulated in the **`binarize`** method in the **`advanced_lane_finding.py`** module located in the **`$PROJECT_HOME/src`** folder.
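
A rough, NumPy-only sketch of how such a `binarize` step can combine the three techniques follows. The threshold values, the grayscale conversion, and the way the masks are combined are illustrative placeholders, not the project's tuned parameters:

```python
import numpy as np

def hls_l_s(img):
    # L and S channels of the HLS color model for an RGB image in [0, 1]
    cmax = img.max(axis=-1)
    cmin = img.min(axis=-1)
    l = (cmax + cmin) / 2
    d = cmax - cmin
    s = np.where(l < 0.5, d / (cmax + cmin + 1e-12), d / (2 - cmax - cmin + 1e-12))
    return l, s

def sobel_x(gray):
    # 3x3 Sobel kernel in the X direction, applied via explicit shifts
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    pad = np.pad(gray, 1, mode='edge')
    out = np.zeros_like(gray)
    h, w = gray.shape
    for i in range(3):
        for j in range(3):
            out += kx[i, j] * pad[i:i + h, j:j + w]
    return out

def binarize(img, sobel_thresh=(0.08, 1.0), s_thresh=(0.45, 1.0), l_thresh=(0.12, 1.0)):
    # Combine: strong X gradients OR (saturated AND not-too-dark pixels)
    gx = np.abs(sobel_x(img.mean(axis=-1)))
    gx = gx / (gx.max() + 1e-12)                      # normalize to [0, 1]
    l, s = hls_l_s(img)
    binary = (((gx >= sobel_thresh[0]) & (gx <= sobel_thresh[1]))
              | ((s >= s_thresh[0]) & (s <= s_thresh[1])
                 & (l >= l_thresh[0]) & (l <= l_thresh[1])))
    return binary.astype(np.uint8)
```

On a dark frame containing a bright saturated stripe, the stripe (and its gradient edges) light up in the binary output while the background stays zero.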
72 |
73 | The image below shows the `binarize` operation applied to a sample image.
74 |
75 |
76 |
77 |
78 |
79 | ### Lane Line Extractor
80 |
81 | Now that we have extracted lane line pixels, the next step is calculating the road curvature and other necessary quantities (such as how far the vehicle is from the center of the lane).
82 |
83 | To calculate road curvature, we use the two methods given below.
84 |
85 | 1. **`naive_lane_extractor(self, binary_warped)`** (inside the **`Line`** class in the **`advanced_lane_finding`** module)
86 | 2. **`smart_lane_extractor(self, binary_warped)`** (inside the **`Line`** class in the **`advanced_lane_finding`** module)
87 |
88 | Both methods take a binary warped image (similar to the one shown above) and produce the X coordinates of both the left and right lane lines. The `naive_lane_extractor(self, binary_warped)` method uses a **sliding window** to identify lane line pixels in the binary warped image and then fits a second-order polynomial to each line, from which the road curvature is calculated.
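
A condensed sketch of the sliding-window approach (the window count, margin, and `minpix` recentering threshold below are typical illustrative values, not necessarily those used in the project, and this standalone function omits the `Line` class context):

```python
import numpy as np

def naive_lane_extractor(binary_warped, nwindows=9, margin=100, minpix=50):
    h, w = binary_warped.shape
    # 1. Histogram of the bottom half gives the two starting columns
    histogram = binary_warped[h // 2:, :].sum(axis=0)
    midpoint = w // 2
    left_current = int(np.argmax(histogram[:midpoint]))
    right_current = int(np.argmax(histogram[midpoint:])) + midpoint
    nonzeroy, nonzerox = binary_warped.nonzero()
    window_height = h // nwindows
    left_idx, right_idx = [], []
    # 2. Slide a window up the image, collecting lane-pixel indices and
    #    recentering each window on the mean x of the pixels it found
    for win in range(nwindows):
        y_low = h - (win + 1) * window_height
        y_high = h - win * window_height
        in_rows = (nonzeroy >= y_low) & (nonzeroy < y_high)
        good_left = np.where(in_rows & (np.abs(nonzerox - left_current) < margin))[0]
        good_right = np.where(in_rows & (np.abs(nonzerox - right_current) < margin))[0]
        left_idx.append(good_left)
        right_idx.append(good_right)
        if len(good_left) > minpix:
            left_current = int(nonzerox[good_left].mean())
        if len(good_right) > minpix:
            right_current = int(nonzerox[good_right].mean())
    left_idx = np.concatenate(left_idx)
    right_idx = np.concatenate(right_idx)
    # 3. Fit x = A*y^2 + B*y + C to each side and evaluate for every row
    ploty = np.arange(h)
    left_fit = np.polyfit(nonzeroy[left_idx], nonzerox[left_idx], 2)
    right_fit = np.polyfit(nonzeroy[right_idx], nonzerox[right_idx], 2)
    return np.polyval(left_fit, ploty), np.polyval(right_fit, ploty)
```

Feeding it a synthetic binary image with two vertical lines returns fitted X coordinates that track those lines.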
89 |
90 | The complete description of our algorithm is given in the [Advanced Lane Line Finding](https://github.com/upul/CarND-Advanced-Lane-Lines/blob/master/src/Advanced_Lane_Line_Finding.ipynb) notebook.
91 |
92 | The output of the lane line extractor algorithm is visualized in the following figure.
93 |
94 |
95 |
96 |
97 |
98 | When it comes to video processing, we start (with the very first frame of the video) with the **`naive_lane_extractor(self, binary_warped)`** method. Once we have identified the lane lines, we move to **`smart_lane_extractor(self, binary_warped)`**, which doesn't blindly search the entire image but instead searches around the lane lines identified in the previous frame.
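
A sketch of this targeted search: instead of scanning the whole frame, keep only the pixels within a margin of the previous frame's polynomials and refit (the function signature and margin value are illustrative; the project's version lives inside the `Line` class):

```python
import numpy as np

def smart_lane_extractor(binary_warped, left_fit, right_fit, margin=100):
    # left_fit / right_fit are the previous frame's second-order
    # polynomial coefficients, as returned by np.polyfit
    nonzeroy, nonzerox = binary_warped.nonzero()
    ploty = np.arange(binary_warped.shape[0])
    fitted = []
    for fit in (left_fit, right_fit):
        center = np.polyval(fit, nonzeroy)          # expected x for each pixel's row
        keep = np.abs(nonzerox - center) < margin   # pixels near the previous fit
        new_fit = np.polyfit(nonzeroy[keep], nonzerox[keep], 2)
        fitted.append(np.polyval(new_fit, ploty))
    return fitted[0], fitted[1]
```

Given slightly offset previous fits, the refit snaps back onto the actual lines without a full-image search.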
99 |
100 | ### Lane Line Curvature Calculator
101 |
102 | We have created a utility method called **`calculate_road_info(self, image_size, left_x, right_x)`** inside the **`Line`** class. It takes the size of the image (**`image_size`**), left lane line pixels (**`left_x`**), and right lane line pixels (**`right_x`**) as arguments and returns the following information.
103 |
104 | 1. **`left_curverad`**: Curvature of the left lane line in meters.
105 | 2. **`right_curverad`**: Curvature of the right lane line in meters.
106 | 3. **`lane_deviation`**: Deviation of the vehicle from the center of the lane in meters.
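
The curvature values follow from the standard radius-of-curvature formula for a second-order polynomial x = A*y^2 + B*y + C, namely R = (1 + (2*A*y + B)^2)^(3/2) / |2*A|, after converting pixels to meters. A standalone sketch using commonly assumed pixel-to-meter factors (these exact factors, and the sign convention for the deviation, are assumptions, not taken from the project code):

```python
import numpy as np

YM_PER_PIX = 30 / 720   # assumed meters per pixel in the y direction
XM_PER_PIX = 3.7 / 700  # assumed meters per pixel in the x direction

def calculate_road_info(image_size, left_x, right_x):
    h, w = image_size
    ploty = np.arange(len(left_x))
    y_eval_m = np.max(ploty) * YM_PER_PIX  # evaluate curvature at the image bottom

    def radius(x_pix):
        # Refit in meter space, then apply R = (1 + (2Ay + B)^2)^(3/2) / |2A|
        fit = np.polyfit(ploty * YM_PER_PIX, np.asarray(x_pix) * XM_PER_PIX, 2)
        return (1 + (2 * fit[0] * y_eval_m + fit[1]) ** 2) ** 1.5 / abs(2 * fit[0])

    # Deviation: lane center at the image bottom vs. the camera (image) center
    lane_center = (left_x[-1] + right_x[-1]) / 2
    lane_deviation = (w / 2 - lane_center) * XM_PER_PIX
    return radius(left_x), radius(right_x), lane_deviation
```

For gently curved synthetic lanes the radii come out on the order of a kilometer or two, which is the expected magnitude for highway curves.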
107 |
108 | ### Highlighted Lane Line and Lane Line Information
109 |
110 | To make it easy to work with both images and videos, we have created a Python class called **`Line`** inside the **`advanced_lane_finding`** module. It encapsulates all the methods described above, plus a few more helper methods.
111 |
112 | The key method of the **`Line`** class is **`process(self, image)`**. It takes a single image as input, runs it through the image processing pipeline described above, and finally produces another image containing the highlighted lane, lane curvature information, and the content of the original image.
113 |
114 | The following snippets show how we can use it with road images and videos.
115 |
116 | ```python
117 | src_image = mpimg.imread('../test_images/test5.jpg')
118 |
119 | line = advanced_lane_finding.Line()
120 | output_image = line.process(src_image)
121 |
122 | plt.figure(figsize=(10, 4))
123 | plt.axis('off')
124 | plt.imshow(output_image)
125 | plt.show()
126 | ```
127 |
128 |
129 |
130 |
131 | ```python
132 | output_file = '../processed_project_video.mp4'
133 | input_file = '../project_video.mp4'
134 | line = advanced_lane_finding.Line()
135 |
136 | clip = VideoFileClip(input_file)
137 | out_clip = clip.fl_image(line.process)
138 | out_clip.write_videofile(output_file, audio=False)
139 | ```
140 |
141 |
142 |
143 |
144 |
145 |
146 |
147 | ### Conclusions and Future Directions
148 |
149 | This was my very first computer vision problem. It took a relatively large amount of time compared to other (deep learning) projects. The hyper-parameter tuning process in the computer vision pipeline was tedious and time-consuming. Unfortunately, our pipeline didn't generalize across different road conditions, and I think this is one of the main drawbacks of the traditional computer vision approach to self-driving cars (and to mobile robotics in general).
150 |
151 | When it comes to extensions and future directions, I would like to highlight the following.
152 |
153 | 1. As a first step, I would like to improve the computer vision pipeline. Presently, it works well only on the project video; I would like to make it work with other videos as well.
154 | 2. Secondly, I would like to explore machine learning approaches (both traditional and deep learning) suitable for the lane finding problem.
155 |
156 |
--------------------------------------------------------------------------------
/camera_cal/calibration1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration1.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration10.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration10.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration11.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration11.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration12.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration12.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration13.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration13.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration14.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration14.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration15.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration15.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration16.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration16.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration17.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration17.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration18.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration18.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration19.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration19.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration2.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration20.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration20.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration3.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration4.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration5.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration6.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration7.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration7.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration8.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration8.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration9.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/camera_cal/calibration9.jpg
--------------------------------------------------------------------------------
/challenge_video.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/challenge_video.mp4
--------------------------------------------------------------------------------
/example_writeup.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/example_writeup.pdf
--------------------------------------------------------------------------------
/examples/.ipynb_checkpoints/example-checkpoint.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [],
3 | "metadata": {},
4 | "nbformat": 4,
5 | "nbformat_minor": 1
6 | }
7 |
--------------------------------------------------------------------------------
/examples/binary_combo_example.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/examples/binary_combo_example.jpg
--------------------------------------------------------------------------------
/examples/color_fit_lines.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/examples/color_fit_lines.jpg
--------------------------------------------------------------------------------
/examples/example.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "## Advanced Lane Finding Project\n",
8 | "\n",
9 | "The goals / steps of this project are the following:\n",
10 | "\n",
11 | "* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.\n",
12 | "* Apply a distortion correction to raw images.\n",
13 | "* Use color transforms, gradients, etc., to create a thresholded binary image.\n",
14 | "* Apply a perspective transform to rectify binary image (\"birds-eye view\").\n",
15 | "* Detect lane pixels and fit to find the lane boundary.\n",
16 | "* Determine the curvature of the lane and vehicle position with respect to center.\n",
17 | "* Warp the detected lane boundaries back onto the original image.\n",
18 | "* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.\n",
19 | "\n",
20 | "---\n",
21 | "## First, I'll compute the camera calibration using chessboard images"
22 | ]
23 | },
24 | {
25 | "cell_type": "code",
26 | "execution_count": 3,
27 | "metadata": {
28 | "collapsed": true
29 | },
30 | "outputs": [],
31 | "source": [
32 | "import numpy as np\n",
33 | "import cv2\n",
34 | "import glob\n",
35 | "import matplotlib.pyplot as plt\n",
36 | "%matplotlib qt\n",
37 | "\n",
38 | "# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)\n",
39 | "objp = np.zeros((6*9,3), np.float32)\n",
40 | "objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)\n",
41 | "\n",
42 | "# Arrays to store object points and image points from all the images.\n",
43 | "objpoints = [] # 3d points in real world space\n",
44 | "imgpoints = [] # 2d points in image plane.\n",
45 | "\n",
46 | "# Make a list of calibration images\n",
47 | "images = glob.glob('../camera_cal/calibration*.jpg')\n",
48 | "\n",
49 | "# Step through the list and search for chessboard corners\n",
50 | "for fname in images:\n",
51 | " img = cv2.imread(fname)\n",
52 | " gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n",
53 | "\n",
54 | " # Find the chessboard corners\n",
55 | " ret, corners = cv2.findChessboardCorners(gray, (9,6),None)\n",
56 | "\n",
57 | " # If found, add object points, image points\n",
58 | " if ret == True:\n",
59 | " objpoints.append(objp)\n",
60 | " imgpoints.append(corners)\n",
61 | "\n",
62 | " # Draw and display the corners\n",
63 | " img = cv2.drawChessboardCorners(img, (9,6), corners, ret)\n",
64 | " cv2.imshow('img',img)\n",
65 | " cv2.waitKey(500)\n",
66 | "\n",
67 | "cv2.destroyAllWindows()"
68 | ]
69 | },
70 | {
71 | "cell_type": "markdown",
72 | "metadata": {},
73 | "source": [
74 | "## And so on and so forth..."
75 | ]
76 | },
77 | {
78 | "cell_type": "code",
79 | "execution_count": null,
80 | "metadata": {
81 | "collapsed": true
82 | },
83 | "outputs": [],
84 | "source": []
85 | }
86 | ],
87 | "metadata": {
88 | "anaconda-cloud": {},
89 | "kernelspec": {
90 | "display_name": "Python [conda root]",
91 | "language": "python",
92 | "name": "conda-root-py"
93 | },
94 | "language_info": {
95 | "codemirror_mode": {
96 | "name": "ipython",
97 | "version": 3
98 | },
99 | "file_extension": ".py",
100 | "mimetype": "text/x-python",
101 | "name": "python",
102 | "nbconvert_exporter": "python",
103 | "pygments_lexer": "ipython3",
104 | "version": "3.5.2"
105 | }
106 | },
107 | "nbformat": 4,
108 | "nbformat_minor": 1
109 | }
110 |
--------------------------------------------------------------------------------
/examples/example.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | 
3 | 
4 | def warper(img, src, dst):
5 | 
6 |     # Compute and apply perspective transform
7 |     img_size = (img.shape[1], img.shape[0])
8 |     M = cv2.getPerspectiveTransform(src, dst)
9 |     warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_NEAREST)  # keep same size as input image
10 | 
11 |     return warped
12 | 
--------------------------------------------------------------------------------
/examples/example_output.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/examples/example_output.jpg
--------------------------------------------------------------------------------
/examples/undistort_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/examples/undistort_output.png
--------------------------------------------------------------------------------
/examples/warped_straight_lines.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/examples/warped_straight_lines.jpg
--------------------------------------------------------------------------------
/harder_challenge_video.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/harder_challenge_video.mp4
--------------------------------------------------------------------------------
/images/binarizer.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/binarizer.png
--------------------------------------------------------------------------------
/images/binary_input.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/binary_input.png
--------------------------------------------------------------------------------
/images/camera_calibrator.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/camera_calibrator.png
--------------------------------------------------------------------------------
/images/histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/histogram.png
--------------------------------------------------------------------------------
/images/image_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/image_output.png
--------------------------------------------------------------------------------
/images/lane_pixels.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/lane_pixels.png
--------------------------------------------------------------------------------
/images/pipeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/pipeline.png
--------------------------------------------------------------------------------
/images/sample_ouput.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/sample_ouput.png
--------------------------------------------------------------------------------
/images/undistorted.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/undistorted.png
--------------------------------------------------------------------------------
/images/video_output.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/video_output.gif
--------------------------------------------------------------------------------
/images/warp.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/images/warp.png
--------------------------------------------------------------------------------
/latex/pipeline.tex:
--------------------------------------------------------------------------------
1 | \documentclass[border=10pt,varwidth]{standalone}
2 | \usepackage{tikz}
3 | \usetikzlibrary{shapes,arrows,shapes.multipart, positioning}
4 |
5 | \begin{document}
6 |
7 | \begin{tikzpicture}
8 | \tikzset{
9 | node/.style = {rectangle, draw=magenta, fill=magenta!10, thin, rounded corners},
10 | line/.style = {draw=magenta, thin, -latex},
11 | output/.style = {circle, draw=magenta, fill=magenta!10, thin},
12 | }
13 |
14 | \node[node](input){\scriptsize Input: Image or Video};
15 |
16 | \node[node](camera)[below = 0.4cm of input]{\scriptsize Camera Calibrator};
17 |
18 | \node[node](shear)[below = 0.4cm of camera]{\scriptsize Warp Transformer};
19 |
20 | \node[node](crop)[below = 0.4cm of shear]{\scriptsize Binarizer};
21 |
22 | \node[node](flip)[below = 0.4cm of crop]{\scriptsize Lane Line Extractor};
23 |
24 | \node[node](gamma)[below = 0.4cm of flip]{\scriptsize Lane Line Curvature Calculator};
25 |
26 | \node[node](resize)[below = 0.4cm of gamma]{\scriptsize Output: Image or Video with Highlighted Lane Line and Lane Line Information};
27 |
28 | \path [line] (input) -- (camera);
29 | \path [line] (camera) -- (shear);
30 | \path [line] (shear) -- (crop);
31 | \path [line] (crop) -- (flip);
32 | \path [line] (flip) -- (gamma);
33 | \path [line] (gamma) -- (resize);
34 |
35 |
36 | \end{tikzpicture}
37 |
38 | \end{document}
39 |
--------------------------------------------------------------------------------
/output_images/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/.DS_Store
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration1.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration10.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration10.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration11.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration11.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration12.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration12.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration13.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration13.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration14.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration14.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration15.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration15.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration16.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration16.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration17.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration17.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration18.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration18.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration19.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration19.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration2.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration20.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration20.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration3.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration4.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration5.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration6.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration7.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration7.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration8.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration8.jpg
--------------------------------------------------------------------------------
/output_images/camera_cal/calibration9.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/output_images/camera_cal/calibration9.jpg
--------------------------------------------------------------------------------
/output_images/save_output_here.txt:
--------------------------------------------------------------------------------
1 | Please save your output images to this folder and include a description in your README of what each image shows.
--------------------------------------------------------------------------------
/processed_project_video.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/processed_project_video.mp4
--------------------------------------------------------------------------------
/project_video.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/project_video.mp4
--------------------------------------------------------------------------------
/src/advanced_lane_finding.py:
--------------------------------------------------------------------------------
1 | import glob
2 | import os
3 | import pickle
4 |
5 | import cv2
6 | import numpy as np
7 |
8 | # we store camera calibration parameters in the following file
9 | CAMERA_CALIBRATION_COEFFICIENTS_FILE = '../camera_cal/calibrated_data.p'
10 |
11 |
12 | class CameraCalibrator:
13 | def __init__(self, calibration_images, no_corners_x_dir, no_corners_y_dir,
14 | use_existing_camera_coefficients=True):
15 | """
16 |
17 |         This class encapsulates the camera calibration process. When an instance of
18 |         CameraCalibrator is created with use_existing_camera_coefficients set to False,
19 |         the _calibrate() method is called and the calibration coefficients are saved.
20 |
21 | :param calibration_images:
22 |             The list of images used for camera calibration
23 |
24 | :param no_corners_x_dir:
25 | The number of horizontal corners in calibration images
26 |
27 | :param no_corners_y_dir:
28 | The number of vertical corners in calibration images
29 |
30 | """
31 | self.calibration_images = calibration_images
32 | self.no_corners_x_dir = no_corners_x_dir
33 | self.no_corners_y_dir = no_corners_y_dir
34 | self.object_points = []
35 | self.image_points = []
36 |
37 | if not use_existing_camera_coefficients:
38 | self._calibrate()
39 |
40 | def _calibrate(self):
41 | """
42 |
43 | :return:
44 | Camera calibration coefficients as a python dictionary
45 | """
46 | object_point = np.zeros((self.no_corners_x_dir * self.no_corners_y_dir, 3), np.float32)
47 | object_point[:, :2] = np.mgrid[0:self.no_corners_x_dir, 0:self.no_corners_y_dir].T.reshape(-1, 2)
48 |
49 | for idx, file_name in enumerate(self.calibration_images):
50 | image = cv2.imread(file_name)
51 |         gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
52 | ret, corners = cv2.findChessboardCorners(gray_image,
53 | (self.no_corners_x_dir, self.no_corners_y_dir),
54 | None)
55 | if ret:
56 | self.object_points.append(object_point)
57 | self.image_points.append(corners)
58 |
59 | image_size = (image.shape[1], image.shape[0])
60 | ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(self.object_points,
61 | self.image_points, image_size, None, None)
62 | calibrated_data = {'mtx': mtx, 'dist': dist}
63 |
64 | with open(CAMERA_CALIBRATION_COEFFICIENTS_FILE, 'wb') as f:
65 | pickle.dump(calibrated_data, file=f)
66 |
67 | def undistort(self, image):
68 | """
69 |
70 |         :param image: Source image (NumPy array) to undistort
71 |         :return: The undistorted image
72 | """
73 |
74 | if not os.path.exists(CAMERA_CALIBRATION_COEFFICIENTS_FILE):
75 | raise Exception('Camera calibration data file does not exist at ' +
76 | CAMERA_CALIBRATION_COEFFICIENTS_FILE)
77 |
78 | with open(CAMERA_CALIBRATION_COEFFICIENTS_FILE, 'rb') as f:
79 | calibrated_data = pickle.load(file=f)
80 |
81 | # image = cv2.imread(image)
82 | return cv2.undistort(image, calibrated_data['mtx'], calibrated_data['dist'],
83 | None, calibrated_data['mtx'])
84 |
85 |
86 | class PerspectiveTransformer:
87 | def __init__(self, src_points, dest_points):
88 | """
89 |
90 |         :param src_points: Source points (np.float32) in the input image
91 |         :param dest_points: Destination points (np.float32) in the warped image
92 | """
93 | self.src_points = src_points
94 | self.dest_points = dest_points
95 |
96 | self.M = cv2.getPerspectiveTransform(self.src_points, self.dest_points)
97 | self.M_inverse = cv2.getPerspectiveTransform(self.dest_points, self.src_points)
98 |
99 | def transform(self, image):
100 | """
101 |
102 | :param image:
103 | :return:
104 | """
105 | size = (image.shape[1], image.shape[0])
106 | return cv2.warpPerspective(image, self.M, size, flags=cv2.INTER_LINEAR)
107 |
108 | def inverse_transform(self, src_image):
109 | """
110 |
111 | :param src_image:
112 | :return:
113 | """
114 | size = (src_image.shape[1], src_image.shape[0])
115 | return cv2.warpPerspective(src_image, self.M_inverse, size, flags=cv2.INTER_LINEAR)
116 |
117 |
118 | def noise_reduction(image, threshold=4):
119 | """
120 | This method is used to reduce the noise of binary images.
121 |
122 | :param image:
123 | binary image (0 or 1)
124 |
125 | :param threshold:
126 |         minimum number of non-zero neighbours required to keep a pixel
127 |
128 | :return:
129 | """
130 | k = np.array([[1, 1, 1],
131 | [1, 0, 1],
132 | [1, 1, 1]])
133 | nb_neighbours = cv2.filter2D(image, ddepth=-1, kernel=k)
134 | image[nb_neighbours < threshold] = 0
135 | return image
136 |
137 |
138 | def binarize(image, gray_thresh=(20, 255), s_thresh=(170, 255), l_thresh=(30, 255), sobel_kernel=3):
139 | """
140 |     This method extracts lane-line pixels from a road image and creates a binarized image
141 |     where lane-line pixels are white and the rest of the image is black.
142 |
143 | :param image:
144 | Source image
145 |
146 | :param gray_thresh:
147 |         Minimum and maximum thresholds applied to the scaled Sobel gradient
148 |
149 | :param s_thresh:
150 | This tuple contains the minimum and maximum S color threshold in HLS color scheme
151 |
152 | :param l_thresh:
153 | Minimum and maximum L color (after converting image to HLS color scheme)
154 | threshold allowed in the source image
155 |
156 | :param sobel_kernel:
157 |         Size of the kernel used by the Sobel operation.
158 |
159 | :return:
160 |         The binarized image where lane-line pixels are marked in white and the rest of
161 |         the image is marked in black.
162 | """
163 |
164 |     # first we take a copy of the source image
165 | image_copy = np.copy(image)
166 |
167 |     # convert the RGB image to HLS color space.
168 |     # HLS is more reliable for picking out lane lines
169 | hls = cv2.cvtColor(image_copy, cv2.COLOR_RGB2HLS)
170 | s_channel = hls[:, :, 2]
171 | l_channel = hls[:, :, 1]
172 |
173 | # Next, we apply Sobel operator in X direction and calculate scaled derivatives.
174 | sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
175 | abs_sobelx = np.absolute(sobelx)
176 | scaled_sobel = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))
177 |
178 | # Next, we generate a binary image based on gray_thresh values.
179 | thresh_min = gray_thresh[0]
180 | thresh_max = gray_thresh[1]
181 | sobel_x_binary = np.zeros_like(scaled_sobel)
182 | sobel_x_binary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
183 |
184 |     # Next, we generate a binary image using the S channel of the HLS color space and
185 |     # the provided S threshold
186 | s_binary = np.zeros_like(s_channel)
187 | s_thresh_min = s_thresh[0]
188 | s_thresh_max = s_thresh[1]
189 | s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1
190 |
191 |     # Next, we generate a binary image using the L channel of the HLS color space and
192 |     # the provided L threshold
193 | l_binary = np.zeros_like(l_channel)
194 | l_thresh_min = l_thresh[0]
195 | l_thresh_max = l_thresh[1]
196 | l_binary[(l_channel >= l_thresh_min) & (l_channel <= l_thresh_max)] = 1
197 |
198 | # finally, return the combined binary image
199 | binary = np.zeros_like(sobel_x_binary)
200 | binary[((l_binary == 1) & (s_binary == 1) | (sobel_x_binary == 1))] = 1
201 | binary = 255 * np.dstack((binary, binary, binary)).astype('uint8')
202 |
203 | return noise_reduction(binary)
204 |
205 |
206 | # Define a class to receive the characteristics of each line detection
207 | class Line():
208 | def __init__(self):
209 | """"""
210 |
211 | self.detected = False
212 |
213 | self.left_fit = None
214 | self.right_fit = None
215 |
216 | self.MAX_BUFFER_SIZE = 12
217 |
218 | self.buffer_index = 0
219 | self.iter_counter = 0
220 |
221 | self.buffer_left = np.zeros((self.MAX_BUFFER_SIZE, 720))
222 | self.buffer_right = np.zeros((self.MAX_BUFFER_SIZE, 720))
223 |
224 | self.perspective = self._build_perspective_transformer()
225 | self.calibrator = self._build_camera_calibrator()
226 |
227 | @staticmethod
228 | def _build_perspective_transformer():
229 | """
230 |
231 | :return:
232 | """
233 | corners = np.float32([[253, 697], [585, 456], [700, 456], [1061, 690]])
234 | new_top_left = np.array([corners[0, 0], 0])
235 | new_top_right = np.array([corners[3, 0], 0])
236 | offset = [50, 0]
237 |
238 | src = np.float32([corners[0], corners[1], corners[2], corners[3]])
239 | dst = np.float32([corners[0] + offset, new_top_left + offset, new_top_right - offset, corners[3] - offset])
240 |
241 | perspective = PerspectiveTransformer(src, dst)
242 | return perspective
243 |
244 | @staticmethod
245 | def _build_camera_calibrator():
246 | """
247 |
248 | :return:
249 | """
250 | calibration_images = glob.glob('../camera_cal/calibration*.jpg')
251 | calibrator = CameraCalibrator(calibration_images,
252 | 9, 6, use_existing_camera_coefficients=True)
253 | return calibrator
254 |
255 | def naive_lane_extractor(self, binary_warped):
256 | """
257 |
258 | :param binary_warped:
259 | :return:
260 | """
261 |         histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :, 0], axis=0)
262 |
263 | # get midpoint of the histogram
264 |         midpoint = histogram.shape[0] // 2
265 |
266 | # get left and right halves of the histogram
267 | leftx_base = np.argmax(histogram[:midpoint])
268 | rightx_base = np.argmax(histogram[midpoint:]) + midpoint
269 |
270 |         # based on the number of windows, we calculate the height of each window
271 | nwindows = 9
272 |         window_height = binary_warped.shape[0] // nwindows
273 |
274 | # Extracts x and y coordinates of non-zero pixels
275 | nonzero = binary_warped.nonzero()
276 | nonzeroy = np.array(nonzero[0])
277 | nonzerox = np.array(nonzero[1])
278 |
279 |         # Set current x coordinates for left and right
280 | leftx_current = leftx_base
281 | rightx_current = rightx_base
282 |
283 | margin = 75
284 | min_num_pixels = 35
285 |
286 | # save pixel ids in these two lists
287 | left_lane_inds = []
288 | right_lane_inds = []
289 |
290 | for window in range(nwindows):
291 | win_y_low = binary_warped.shape[0] - (window + 1) * window_height
292 | win_y_high = binary_warped.shape[0] - window * window_height
293 |
294 | win_xleft_low = leftx_current - margin
295 | win_xleft_high = leftx_current + margin
296 | win_xright_low = rightx_current - margin
297 | win_xright_high = rightx_current + margin
298 |
299 | good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (
300 | nonzerox < win_xleft_high)).nonzero()[0]
301 | good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (
302 | nonzerox < win_xright_high)).nonzero()[0]
303 |
304 | # Append these indices to the lists
305 | left_lane_inds.append(good_left_inds)
306 | right_lane_inds.append(good_right_inds)
307 |
308 | if len(good_left_inds) > min_num_pixels:
309 |                 leftx_current = int(np.mean(nonzerox[good_left_inds]))
310 | if len(good_right_inds) > min_num_pixels:
311 |                 rightx_current = int(np.mean(nonzerox[good_right_inds]))
312 |
313 | # Concatenate the arrays of indices
314 | left_lane_inds = np.concatenate(left_lane_inds)
315 | right_lane_inds = np.concatenate(right_lane_inds)
316 |
317 | # Extract left and right line pixel positions
318 | leftx = nonzerox[left_lane_inds]
319 | lefty = nonzeroy[left_lane_inds]
320 | rightx = nonzerox[right_lane_inds]
321 | righty = nonzeroy[right_lane_inds]
322 |
323 | # Fit a second order polynomial to each
324 | self.left_fit = np.polyfit(lefty, leftx, 2)
325 | self.right_fit = np.polyfit(righty, rightx, 2)
326 |
327 | fity = np.linspace(0, binary_warped.shape[0] - 1, binary_warped.shape[0])
328 | fit_leftx = self.left_fit[0] * fity ** 2 + self.left_fit[1] * fity + self.left_fit[2]
329 | fit_rightx = self.right_fit[0] * fity ** 2 + self.right_fit[1] * fity + self.right_fit[2]
330 |
331 | self.detected = True
332 |
333 | return fit_leftx, fit_rightx
334 |
335 | def smart_lane_extractor(self, binary_warped):
336 | """
337 |
338 | :param binary_warped:
339 | :return:
340 | """
341 | # Assume you now have a new warped binary image
342 | # from the next frame of video (also called "binary_warped")
343 | # It's now much easier to find line pixels!
344 | nonzero = binary_warped.nonzero()
345 | nonzeroy = np.array(nonzero[0])
346 | nonzerox = np.array(nonzero[1])
347 |
348 | margin = 75
349 |
350 | left_lane_inds = (
351 | (nonzerox > (
352 | self.left_fit[0] * (nonzeroy ** 2) + self.left_fit[1] * nonzeroy + self.left_fit[2] - margin)) & (
353 | nonzerox < (
354 | self.left_fit[0] * (nonzeroy ** 2) + self.left_fit[1] * nonzeroy + self.left_fit[2] + margin)))
355 | right_lane_inds = (
356 | (nonzerox > (
357 | self.right_fit[0] * (nonzeroy ** 2) + self.right_fit[1] * nonzeroy + self.right_fit[2] - margin)) & (
358 | nonzerox < (
359 | self.right_fit[0] * (nonzeroy ** 2) + self.right_fit[1] * nonzeroy + self.right_fit[2] + margin)))
360 |
361 | # Again, extract left and right line pixel positions
362 | leftx = nonzerox[left_lane_inds]
363 | lefty = nonzeroy[left_lane_inds]
364 | rightx = nonzerox[right_lane_inds]
365 | righty = nonzeroy[right_lane_inds]
366 | # Fit a second order polynomial to each
367 | self.left_fit = np.polyfit(lefty, leftx, 2)
368 | self.right_fit = np.polyfit(righty, rightx, 2)
369 | # Generate x and y values for plotting
370 | ploty = np.linspace(0, binary_warped.shape[0] - 1, binary_warped.shape[0])
371 | left_fitx = self.left_fit[0] * ploty ** 2 + self.left_fit[1] * ploty + self.left_fit[2]
372 | right_fitx = self.right_fit[0] * ploty ** 2 + self.right_fit[1] * ploty + self.right_fit[2]
373 |
374 | return left_fitx, right_fitx
375 |
376 | def calculate_road_info(self, image_size, left_x, right_x):
377 | """
378 |         This method calculates the left and right road curvature and the offset of the
379 |         vehicle from the center of the lane
380 |
381 | :param image_size:
382 | Size of the image
383 |
384 | :param left_x:
385 |             X coordinates of left lane pixels
386 |
387 | :param right_x:
388 |             X coordinates of right lane pixels
389 |
390 | :return:
391 |             Left and right lane curvatures and the offset of the vehicle from the center of the lane
392 | """
393 | # first we calculate the intercept points at the bottom of our image
394 | left_intercept = self.left_fit[0] * image_size[0] ** 2 + self.left_fit[1] * image_size[0] + self.left_fit[2]
395 | right_intercept = self.right_fit[0] * image_size[0] ** 2 + self.right_fit[1] * image_size[0] + self.right_fit[2]
396 |
397 |         # Next take the difference in pixels between the left and right intercept points
398 | road_width_in_pixels = right_intercept - left_intercept
399 | assert road_width_in_pixels > 0, 'Road width in pixel can not be negative'
400 |
401 |         # Since the average US highway lane width is about 3.7 m
402 | # Source: https://en.wikipedia.org/wiki/Lane#Lane_width
403 | # we calculate length per pixel in meters
404 | meters_per_pixel_x_dir = 3.7 / road_width_in_pixels
405 | meters_per_pixel_y_dir = 30 / road_width_in_pixels
406 |
407 | # Recalculate road curvature in X-Y space
408 | ploty = np.linspace(0, 719, num=720)
409 | y_eval = np.max(ploty)
410 |
411 | # Fit new polynomials to x,y in world space
412 | left_fit_cr = np.polyfit(ploty * meters_per_pixel_y_dir, left_x * meters_per_pixel_x_dir, 2)
413 | right_fit_cr = np.polyfit(ploty * meters_per_pixel_y_dir, right_x * meters_per_pixel_x_dir, 2)
414 |
415 | # Calculate the new radii of curvature
416 | left_curverad = ((1 + (2 * left_fit_cr[0] * y_eval * meters_per_pixel_y_dir + left_fit_cr[1]) ** 2) ** 1.5) / \
417 | np.absolute(2 * left_fit_cr[0])
418 |
419 | right_curverad = ((1 + (2 * right_fit_cr[0] * y_eval * meters_per_pixel_y_dir + right_fit_cr[1]) ** 2) ** 1.5) / \
420 | np.absolute(2 * right_fit_cr[0])
421 |
422 |         # Next, we calculate the lane deviation
423 | calculated_center = (left_intercept + right_intercept) / 2.0
424 | lane_deviation = (calculated_center - image_size[1] / 2.0) * meters_per_pixel_x_dir
425 |
426 | return left_curverad, right_curverad, lane_deviation
427 |
428 | @staticmethod
429 | def fill_lane_lines(image, fit_left_x, fit_right_x):
430 | """
431 |         This utility method highlights the detected lane section on the road
432 |
433 | :param image:
434 | On top of this image, my lane will be highlighted
435 |
436 | :param fit_left_x:
437 |             X coordinates of the left second-order polynomial
438 |
439 | :param fit_right_x:
440 |             X coordinates of the right second-order polynomial
441 |
442 | :return:
443 | The input image with highlighted lane line.
444 | """
445 | copy_image = np.zeros_like(image)
446 | fit_y = np.linspace(0, copy_image.shape[0] - 1, copy_image.shape[0])
447 |
448 | pts_left = np.array([np.transpose(np.vstack([fit_left_x, fit_y]))])
449 | pts_right = np.array([np.flipud(np.transpose(np.vstack([fit_right_x, fit_y])))])
450 | pts = np.hstack((pts_left, pts_right))
451 |
452 | cv2.fillPoly(copy_image, np.int_([pts]), (0, 255, 0))
453 |
454 | return copy_image
455 |
456 | def merge_images(self, binary_img, src_image):
457 | """
458 |         This utility method merges two images
459 |
460 | :param binary_img:
461 | Binary image with highlighted lane segment.
462 |
463 | :param src_image:
464 | The original image on top of it we are going to highlight lane segment.
465 |
466 | :return:
467 | The Original image with highlighted lane segment.
468 | """
469 | copy_binary = np.copy(binary_img)
470 | copy_src_img = np.copy(src_image)
471 |
472 | copy_binary_pers = self.perspective.inverse_transform(copy_binary)
473 | result = cv2.addWeighted(copy_src_img, 1, copy_binary_pers, 0.3, 0)
474 |
475 | return result
476 |
477 | def process(self, image):
478 | """
479 | This method takes an image as an input and produces an image with
480 | 1. Highlighted lane line
481 | 2. Left and right lane curvatures (in meters)
482 |         3. Vehicle offset from the center of the lane (in meters)
483 |
484 | :param image:
485 | Source image
486 |
487 | :return:
488 | Annotated image with lane line details
489 | """
490 | image = np.copy(image)
491 | undistorted_image = self.calibrator.undistort(image)
492 | warped_image = self.perspective.transform(undistorted_image)
493 | binary_image = binarize(warped_image)
494 |
495 | if self.detected:
496 | fit_leftx, fit_rightx = self.smart_lane_extractor(binary_image)
497 | else:
498 | fit_leftx, fit_rightx = self.naive_lane_extractor(binary_image)
499 |
500 | self.buffer_left[self.buffer_index] = fit_leftx
501 | self.buffer_right[self.buffer_index] = fit_rightx
502 |
503 | self.buffer_index += 1
504 | self.buffer_index %= self.MAX_BUFFER_SIZE
505 |
506 | if self.iter_counter < self.MAX_BUFFER_SIZE:
507 | self.iter_counter += 1
508 | ave_left = np.sum(self.buffer_left, axis=0) / self.iter_counter
509 | ave_right = np.sum(self.buffer_right, axis=0) / self.iter_counter
510 | else:
511 | ave_left = np.average(self.buffer_left, axis=0)
512 | ave_right = np.average(self.buffer_right, axis=0)
513 |
514 | left_curvature, right_curvature, calculated_deviation = self.calculate_road_info(image.shape, ave_left,
515 | ave_right)
516 | curvature_text = 'Left Curvature: {:.2f} m Right Curvature: {:.2f} m'.format(left_curvature, right_curvature)
517 |
518 | font = cv2.FONT_HERSHEY_SIMPLEX
519 | cv2.putText(image, curvature_text, (100, 50), font, 1, (221, 28, 119), 2)
520 |
521 | deviation_info = 'Lane Deviation: {:.3f} m'.format(calculated_deviation)
522 | cv2.putText(image, deviation_info, (100, 90), font, 1, (221, 28, 119), 2)
523 |
524 | filled_image = self.fill_lane_lines(binary_image, ave_left, ave_right)
525 |
526 | merged_image = self.merge_images(filled_image, image)
527 |
528 | return merged_image
529 |
530 |
531 | if __name__ == '__main__':
532 | from moviepy.editor import VideoFileClip
533 |
534 | line = Line()
535 | output_file = '../processed_project_video.mp4'
536 | input_file = '../project_video.mp4'
537 | clip = VideoFileClip(input_file)
538 | out_clip = clip.fl_image(line.process)
539 | out_clip.write_videofile(output_file, audio=False)
540 |
--------------------------------------------------------------------------------
/test_images/straight_lines1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/test_images/straight_lines1.jpg
--------------------------------------------------------------------------------
/test_images/straight_lines2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/test_images/straight_lines2.jpg
--------------------------------------------------------------------------------
/test_images/test1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/test_images/test1.jpg
--------------------------------------------------------------------------------
/test_images/test2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/test_images/test2.jpg
--------------------------------------------------------------------------------
/test_images/test3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/test_images/test3.jpg
--------------------------------------------------------------------------------
/test_images/test4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/test_images/test4.jpg
--------------------------------------------------------------------------------
/test_images/test5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/test_images/test5.jpg
--------------------------------------------------------------------------------
/test_images/test6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upul/CarND-Advanced-Lane-Lines/08d1410c7861596645379748ed0ebc8f9f376b60/test_images/test6.jpg
--------------------------------------------------------------------------------
/writeup_template.md:
--------------------------------------------------------------------------------
1 | ## Writeup Template
2 | ### You can use this file as a template for your writeup if you want to submit it as a markdown file, but feel free to use some other method and submit a pdf if you prefer.
3 |
4 | ---
5 |
6 | **Advanced Lane Finding Project**
7 |
8 | The goals / steps of this project are the following:
9 |
10 | * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
11 | * Apply a distortion correction to raw images.
12 | * Use color transforms, gradients, etc., to create a thresholded binary image.
13 | * Apply a perspective transform to rectify binary image ("birds-eye view").
14 | * Detect lane pixels and fit to find the lane boundary.
15 | * Determine the curvature of the lane and vehicle position with respect to center.
16 | * Warp the detected lane boundaries back onto the original image.
17 | * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
18 |
19 | [//]: # (Image References)
20 |
21 | [image1]: ./examples/undistort_output.png "Undistorted"
22 | [image2]: ./test_images/test1.jpg "Road Transformed"
23 | [image3]: ./examples/binary_combo_example.jpg "Binary Example"
24 | [image4]: ./examples/warped_straight_lines.jpg "Warp Example"
25 | [image5]: ./examples/color_fit_lines.jpg "Fit Visual"
26 | [image6]: ./examples/example_output.jpg "Output"
27 | [video1]: ./project_video.mp4 "Video"
28 |
29 | ## [Rubric](https://review.udacity.com/#!/rubrics/571/view) Points
30 | ### Here I will consider the rubric points individually and describe how I addressed each point in my implementation.
31 |
32 | ---
33 | ### Writeup / README
34 |
35 | #### 1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf. [Here](https://github.com/udacity/CarND-Advanced-Lane-Lines/blob/master/writeup_template.md) is a template writeup for this project you can use as a guide and a starting point.
36 |
37 | You're reading it!
38 | ### Camera Calibration
39 |
40 | #### 1. Briefly state how you computed the camera matrix and distortion coefficients. Provide an example of a distortion corrected calibration image.
41 |
42 | The code for this step is contained in the first code cell of the IPython notebook located in "./examples/example.ipynb" (or in lines # through # of the file called `some_file.py`).
43 |
44 | I start by preparing "object points", which will be the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image. Thus, `objp` is just a replicated array of coordinates, and `objpoints` will be appended with a copy of it every time I successfully detect all chessboard corners in a test image. `imgpoints` will be appended with the (x, y) pixel position of each of the corners in the image plane with each successful chessboard detection.
45 |
46 | I then used the output `objpoints` and `imgpoints` to compute the camera calibration and distortion coefficients using the `cv2.calibrateCamera()` function. I applied this distortion correction to the test image using the `cv2.undistort()` function and obtained this result:
47 |
48 | ![alt text][image1]
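
The replicated `objp` array described above can be sketched as follows (a minimal example; the 9x6 inner-corner count is an assumption based on the standard calibration images):

```python
import numpy as np

# "Object points" grid for a chessboard with 9x6 inner corners,
# fixed on the z=0 plane; the same array is appended to objpoints
# once per image with a successful corner detection.
nx, ny = 9, 6  # assumed corner counts
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)  # (x, y) pairs, z stays 0
```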
49 |
50 | ### Pipeline (single images)
51 |
52 | #### 1. Provide an example of a distortion-corrected image.
53 | To demonstrate this step, I will describe how I apply the distortion correction to one of the test images like this one:
54 | ![alt text][image2]
55 | #### 2. Describe how (and identify where in your code) you used color transforms, gradients or other methods to create a thresholded binary image. Provide an example of a binary image result.
56 | I used a combination of color and gradient thresholds to generate a binary image (thresholding steps at lines # through # in `another_file.py`). Here's an example of my output for this step. (note: this is not actually from one of the test images)
57 |
58 | ![alt text][image3]
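
As a minimal illustration of the color-threshold half of this step, an S-channel mask can be built like the sketch below (the tiny array and the threshold band are invented for illustration; a real input would be the S plane of an HLS-converted frame):

```python
import numpy as np

# Toy 2x3 "S channel" with made-up saturation values
s_channel = np.array([[10, 180, 250],
                      [0, 200, 90]], dtype=np.uint8)
s_thresh = (170, 255)  # example threshold band

# Pixels inside the band become 1, everything else stays 0
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
```

The gradient mask is built the same way from the scaled Sobel-x response, and the two masks are then combined.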
59 |
60 | #### 3. Describe how (and identify where in your code) you performed a perspective transform and provide an example of a transformed image.
61 |
62 | The code for my perspective transform includes a function called `warper()`, which appears in lines 1 through 8 in the file `example.py` (output_images/examples/example.py) (or, for example, in the 3rd code cell of the IPython notebook). The `warper()` function takes as inputs an image (`img`), as well as source (`src`) and destination (`dst`) points. I chose to hardcode the source and destination points in the following manner:
63 |
64 | ```python
65 | src = np.float32(
66 | [[(img_size[0] / 2) - 55, img_size[1] / 2 + 100],
67 | [((img_size[0] / 6) - 10), img_size[1]],
68 | [(img_size[0] * 5 / 6) + 60, img_size[1]],
69 | [(img_size[0] / 2 + 55), img_size[1] / 2 + 100]])
70 | dst = np.float32(
71 | [[(img_size[0] / 4), 0],
72 | [(img_size[0] / 4), img_size[1]],
73 | [(img_size[0] * 3 / 4), img_size[1]],
74 | [(img_size[0] * 3 / 4), 0]])
75 |
76 | ```
77 | This resulted in the following source and destination points:
78 |
79 | | Source | Destination |
80 | |:-------------:|:-------------:|
81 | | 585, 460 | 320, 0 |
82 | | 203, 720 | 320, 720 |
83 | | 1127, 720 | 960, 720 |
84 | | 695, 460 | 960, 0 |
85 |
86 | I verified that my perspective transform was working as expected by drawing the `src` and `dst` points onto a test image and its warped counterpart and checking that the lane lines appear parallel in the warped image.
87 |
88 | ![alt text][image4]
89 |
90 | #### 4. Describe how (and identify where in your code) you identified lane-line pixels and fit their positions with a polynomial?
91 |
92 | Starting from the binary warped image, I located the lane-line pixels with a sliding-window search seeded by a histogram of the bottom half of the image, then fit each line with a 2nd-order polynomial, like this:
93 |
94 | ![alt text][image5]
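A condensed sketch of that search (histogram of the bottom half seeds the window positions, then a window-by-window scan re-centres on the pixels it finds; the parameter values here are illustrative defaults):

```python
import numpy as np


def find_lane_pixels(binary_warped, nwindows=9, margin=100, minpix=50):
    """Histogram-seeded sliding-window search for left/right lane pixels."""
    h, w = binary_warped.shape
    histogram = np.sum(binary_warped[h // 2:, :], axis=0)
    bases = [np.argmax(histogram[:w // 2]),           # left lane seed
             np.argmax(histogram[w // 2:]) + w // 2]  # right lane seed

    nonzeroy, nonzerox = binary_warped.nonzero()
    window_height = h // nwindows
    lanes = []
    for base in bases:
        current, inds = base, []
        for window in range(nwindows):                # scan bottom to top
            y_low = h - (window + 1) * window_height
            y_high = h - window * window_height
            good = np.where((nonzeroy >= y_low) & (nonzeroy < y_high) &
                            (nonzerox >= current - margin) &
                            (nonzerox < current + margin))[0]
            inds.append(good)
            if len(good) > minpix:                    # re-centre next window
                current = int(np.mean(nonzerox[good]))
        inds = np.concatenate(inds)
        lanes.append((nonzerox[inds], nonzeroy[inds]))
    return lanes  # [(leftx, lefty), (rightx, righty)]


def fit_lane(x, y):
    """Fit x = A*y^2 + B*y + C (x as a function of y: lines are near-vertical)."""
    return np.polyfit(y, x, 2)
```

Fitting x as a function of y (rather than the usual y of x) is what makes near-vertical lane lines well-behaved under a polynomial fit.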
95 |
96 | #### 5. Describe how (and identify where in your code) you calculated the radius of curvature of the lane and the position of the vehicle with respect to center.
97 |
98 | I did this in lines # through # in my code in `my_other_file.py`.
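The curvature comes from the standard formula for a second-order fit x = Ay&sup2; + By + C, namely R = (1 + (2Ay + B)&sup2;)^(3/2) / |2A|, evaluated near the bottom of the image after converting pixels to metres. A sketch, with the usual project-scale assumptions (30 m spanning 720 px vertically, a 3.7 m lane spanning 700 px horizontally):

```python
import numpy as np

YM_PER_PIX = 30 / 720   # metres per pixel in y (assumed scale)
XM_PER_PIX = 3.7 / 700  # metres per pixel in x (assumed scale)


def curvature_radius(x, y, y_eval):
    """Radius of curvature in metres of x = f(y), evaluated at pixel row y_eval."""
    A, B, _ = np.polyfit(y * YM_PER_PIX, x * XM_PER_PIX, 2)  # refit in metres
    y_m = y_eval * YM_PER_PIX
    return (1 + (2 * A * y_m + B) ** 2) ** 1.5 / abs(2 * A)


def vehicle_offset(left_x_bottom, right_x_bottom, img_width):
    """Signed distance (m) of the image centre from the lane centre."""
    lane_centre = (left_x_bottom + right_x_bottom) / 2
    return (img_width / 2 - lane_centre) * XM_PER_PIX
```

The offset calculation assumes the camera is mounted at the lateral centre of the vehicle, so the image centre stands in for the car's position.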
99 |
100 | #### 6. Provide an example image of your result plotted back down onto the road such that the lane area is identified clearly.
101 |
102 | I implemented this step in lines # through # in my code in `yet_another_file.py` in the function `map_lane()`. Here is an example of my result on a test image:
103 |
104 | ![alt text][image6]
105 |
106 | ---
107 |
108 | ### Pipeline (video)
109 |
110 | #### 1. Provide a link to your final video output. Your pipeline should perform reasonably well on the entire project video (wobbly lines are ok but no catastrophic failures that would cause the car to drive off the road!).
111 |
112 | Here's a [link to my video result](./processed_project_video.mp4)
113 |
114 | ---
115 |
116 | ### Discussion
117 |
118 | #### 1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?
119 |
120 | Here I'll discuss the approach I took, the techniques I used, what worked and why, where the pipeline might fail, and how I might improve it if I were to pursue this project further.
121 |
122 |