├── .gitignore
├── LICENSE
├── README.md
├── camera_cal
│   ├── calibration10.jpg
│   ├── calibration11.jpg
│   ├── calibration12.jpg
│   ├── calibration13.jpg
│   ├── calibration14.jpg
│   ├── calibration15.jpg
│   ├── calibration16.jpg
│   ├── calibration17.jpg
│   ├── calibration18.jpg
│   ├── calibration19.jpg
│   ├── calibration2.jpg
│   ├── calibration20.jpg
│   ├── calibration3.jpg
│   ├── calibration4.jpg
│   ├── calibration5.jpg
│   ├── calibration6.jpg
│   ├── calibration7.jpg
│   ├── calibration8.jpg
│   ├── calibration9.jpg
│   └── test_image.jpg
├── examples
│   ├── binary_combo_example.jpg
│   ├── color_fit_lines.jpg
│   ├── example.ipynb
│   ├── example.py
│   ├── example_output.jpg
│   ├── undistort_output.png
│   └── warped_straight_lines.jpg
├── output_images
│   ├── Line_detection_advanced.py
│   ├── calib_info.npz
│   ├── camera_calibration.py
│   ├── for_readme
│   │   ├── R_binary.png
│   │   ├── R_curve_formula.png
│   │   ├── binary_warped_window_pixel.png
│   │   ├── binary_warped_window_pixel_line.png
│   │   ├── chessboard_undistorted.png
│   │   ├── cmbined_binary.png
│   │   ├── histogram.png
│   │   ├── road_lane.png
│   │   ├── road_rectangale.png
│   │   ├── road_rectangale_warped.png
│   │   ├── road_undistorted.png
│   │   ├── road_window.png
│   │   ├── s_binary.png
│   │   └── sx_binary.png
│   └── save_output_here.txt
└── test_images
    ├── straight_lines1.jpg
    ├── straight_lines2.jpg
    ├── test1.jpg
    ├── test2.jpg
    ├── test3.jpg
    ├── test4.jpg
    ├── test5.jpg
    ├── test6.jpg
    ├── test7.jpg
    ├── test8.jpg
    └── test9.jpg
/.gitignore:
--------------------------------------------------------------------------------
1 | *.mp4
2 |
3 | # Byte-compiled / optimized / DLL files
4 | __pycache__/
5 | *.py[cod]
6 | *$py.class
7 |
8 | # C extensions
9 | *.so
10 |
11 | # Distribution / packaging
12 | .Python
13 | build/
14 | develop-eggs/
15 | dist/
16 | downloads/
17 | eggs/
18 | .eggs/
19 | lib/
20 | lib64/
21 | parts/
22 | sdist/
23 | var/
24 | wheels/
25 | pip-wheel-metadata/
26 | share/python-wheels/
27 | *.egg-info/
28 | .installed.cfg
29 | *.egg
30 | MANIFEST
31 |
32 | # PyInstaller
33 | # Usually these files are written by a python script from a template
34 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
35 | *.manifest
36 | *.spec
37 |
38 | # Installer logs
39 | pip-log.txt
40 | pip-delete-this-directory.txt
41 |
42 | # Unit test / coverage reports
43 | htmlcov/
44 | .tox/
45 | .nox/
46 | .coverage
47 | .coverage.*
48 | .cache
49 | nosetests.xml
50 | coverage.xml
51 | *.cover
52 | *.py,cover
53 | .hypothesis/
54 | .pytest_cache/
55 |
56 | # Translations
57 | *.mo
58 | *.pot
59 |
60 | # Django stuff:
61 | *.log
62 | local_settings.py
63 | db.sqlite3
64 | db.sqlite3-journal
65 |
66 | # Flask stuff:
67 | instance/
68 | .webassets-cache
69 |
70 | # Scrapy stuff:
71 | .scrapy
72 |
73 | # Sphinx documentation
74 | docs/_build/
75 |
76 | # PyBuilder
77 | target/
78 |
79 | # Jupyter Notebook
80 | .ipynb_checkpoints
81 |
82 | # IPython
83 | profile_default/
84 | ipython_config.py
85 |
86 | # pyenv
87 | .python-version
88 |
89 | # pipenv
90 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
91 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
92 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
93 | # install all needed dependencies.
94 | #Pipfile.lock
95 |
96 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
97 | __pypackages__/
98 |
99 | # Celery stuff
100 | celerybeat-schedule
101 | celerybeat.pid
102 |
103 | # SageMath parsed files
104 | *.sage.py
105 |
106 | # Environments
107 | .env
108 | .venv
109 | env/
110 | venv/
111 | ENV/
112 | env.bak/
113 | venv.bak/
114 |
115 | # Spyder project settings
116 | .spyderproject
117 | .spyproject
118 |
119 | # Rope project settings
120 | .ropeproject
121 |
122 | # mkdocs documentation
123 | /site
124 |
125 | # mypy
126 | .mypy_cache/
127 | .dmypy.json
128 | dmypy.json
129 |
130 | # Pyre type checker
131 | .pyre/
132 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2016-2018 Udacity, Inc.
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | **Advanced Lane Finding Project**
2 |
3 | The goals / steps of this project are the following:
4 |
5 | * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
6 | * Apply a distortion correction to raw images.
7 | * Use color transforms, gradients, etc., to create a thresholded binary image.
8 | * Apply a perspective transform to rectify binary image ("birds-eye view").
9 | * Detect lane pixels and fit to find the lane boundary.
10 | * (Optimization, for videos only, after the first frame) Detect lane pixels around the lines detected in the previous frame.
11 | * Determine the curvature of the lane and vehicle position with respect to the center.
12 | * Warp the detected lane boundaries back onto the original image.
13 | * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
14 |
15 |
16 | [video1]: ./project_video.mp4 "Video"
17 |
18 |
19 |
20 | ---
21 | ### Camera Calibration
22 |
23 | #### 1. Computation of the camera matrix and distortion coefficients, with an example of a distortion-corrected calibration image.
24 |
25 | The code for this step is in `camera_calibration.py`.
26 |
27 | I start by preparing "object points", which will be the (x, y, z) coordinates of the chessboard corners in the world.
28 | Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image.
29 | Thus, `objp` is just a replicated array of coordinates, and `objpoints` will be appended with a copy of it every time I successfully detect all chessboard corners in a test image.
30 | `imgpoints` will be appended with the (x, y) pixel position of each of the corners in the image plane with each successful chessboard detection.
31 |
32 | I then used the output `objpoints` and `imgpoints` to compute the camera calibration and distortion coefficients using the `cv2.calibrateCamera()` function.
33 | I also save these two matrices using `np.savez` so that I can reuse them later.
34 | Then, I applied this distortion correction to the test image using the `cv2.undistort()` function and obtained this result:
35 |
36 | [image0]: ./camera_cal/test_image.jpg "distorted"
37 | [image1]: ./output_images/for_readme/chessboard_undistorted.png "Undistorted_board"
38 |
39 | Original image | undistorted image
40 | :-------------------------:|:-------------------------:
41 | ![alt text][image0] | ![alt text][image1]
42 |
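For reference, the whole calibration step condenses to roughly the following (mirroring `camera_calibration.py` in this repo; the chessboards have 9x6 inner corners):

```python
import glob
import cv2
import numpy as np

# One set of 3D object points for the 9x6 inner corners, with z = 0
objp = np.zeros((6*9, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints, imgpoints = [], []  # 3D world points / 2D image points
for fname in glob.glob('camera_cal/calibration*.jpg'):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if ret:
        objpoints.append(objp)
        imgpoints.append(corners)

# Calibrate and save the camera matrix and distortion coefficients
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
np.savez('output_images/calib_info.npz', mtx=mtx, dist=dist)
```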
43 | ### Pipeline (single images)
44 |
45 | The code for this step is called `Line_detection_advanced.py`.
46 | Initially, it loads `mtx`, and `dist` matrices from the camera calibration step.
47 |
48 | #### 1. Apply a distortion correction to raw images.
49 |
50 | Using the saved `mtx`, `dist` from calibration, I have undistorted an image from a road:
51 |
52 | [image10]: ./test_images/straight_lines1.jpg
53 | [image11]: ./output_images/for_readme/road_undistorted.png
54 |
55 | Original image | undistorted image
56 | :-------------------------:|:-------------------------:
57 | ![alt text][image10] | ![alt text][image11]
58 |
59 |
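As a minimal sketch, assuming `calib_info.npz` was saved as above:

```python
import cv2
import numpy as np

# Load the saved camera matrix and distortion coefficients
data = np.load('output_images/calib_info.npz')
mtx, dist = data['mtx'], data['dist']

image_road = cv2.imread('test_images/straight_lines1.jpg')
undist_road = cv2.undistort(image_road, mtx, dist, None, mtx)
```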
60 | #### 2. Create a thresholded binary image using color transforms and gradients.
61 |
62 | I used a combination of color and gradient thresholds to generate a binary image (thresholding functions at lines 7 through 37 in `Line_detection_advanced.py`).
65 |
66 | For gradient thresholds, the code includes a function called `grad_thresh`.
67 | First, I converted the image to grayscale with `cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)`. Note that if you load images with `cv2.imread`, you should use `cv2.COLOR_BGR2GRAY`,
68 | but if you use `matplotlib.image.imread`, you should use `cv2.COLOR_RGB2GRAY` instead.
69 | Then, I took the derivative in the x-direction, using `cv2.Sobel` (Why? Because vertical lines can be detected better using gradient in the horizontal direction).
70 | Then, I scaled its magnitude into the 8-bit range, `255*np.absolute(sobelx)/np.max(abs_sobelx)`, and converted it to `np.uint8`.
71 | Finally, to generate the binary image, I used `np.zeros_like` and applied the threshold.
72 |
73 | For the color thresholds, the code includes the functions `colorHSV_thresh` and `colorBGR_thresh`. I used the HLS colorspace via `cv2.cvtColor(img, cv2.COLOR_BGR2HLS)`
74 | (why? because yellow and white lane lines stand out well in the S channel).
75 | Then, I created the binary image with `np.zeros_like` and applied the threshold on the S channel. I also applied a threshold on the R channel of the RGB colorspace. The results are as follows:
76 |
77 | [image20]: ./output_images/for_readme/sx_binary.png
78 | [image21]: ./output_images/for_readme/s_binary.png
79 | [image22]: ./output_images/for_readme/R_binary.png
80 |
81 | Gradient threshold | S threshold (HLS) | R threshold (RGB)
82 | :-------------------------:|:-------------------------:|:-------------------------:
83 | ![alt text][image20] | ![alt text][image21] | ![alt text][image22]
84 |
85 |
86 | In the end, I combined these three binary thresholds, and here is an example of my output for this step.
87 |
88 | ![alt text](./output_images/for_readme/cmbined_binary.png)
89 |
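Condensed from the pipeline, the combination step is a simple OR of the three masks (the thresholds shown are the defaults used in the code):

```python
sx_binary = grad_thresh(img, thresh=(10, 100))       # Sobel-x gradient
s_binary = colorHSV_thresh(img, thresh=(125, 255))   # S channel of HLS
R_binary = colorBGR_thresh(img, thresh=(200, 255))   # R channel

# A pixel is kept if any of the three masks fires
combined_binary = np.zeros_like(sx_binary)
combined_binary[(s_binary == 1) | (sx_binary == 1) | (R_binary == 1)] = 1
```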
90 | #### 3. Perform a perspective transform.
91 |
92 | The code for my perspective transform includes a function called `warp()`, which appears in lines 39 through 49 in the file `Line_detection_advanced.py`.
93 | The `warp()` function takes as inputs an image (`img`), as well as source (`src`) and destination (`dst`) points.
94 | I chose to hardcode the source and destination points in the following manner:
95 |
96 | ```python
97 | src = np.float32(
98 | [[(img_size[0] / 2) - 55, img_size[1] / 2 + 100],
99 | [((img_size[0] / 6) - 10), img_size[1]],
100 | [(img_size[0] * 5 / 6) + 60, img_size[1]],
101 | [(img_size[0] / 2 + 55), img_size[1] / 2 + 100]])
102 | dst = np.float32(
103 | [[(img_size[0] / 4), 0],
104 | [(img_size[0] / 4), img_size[1]],
105 | [(img_size[0] * 3 / 4), img_size[1]],
106 | [(img_size[0] * 3 / 4), 0]])
107 | ```
108 |
109 | This resulted in the following source and destination points:
110 |
111 | | Source | Destination |
112 | |:-------------:|:-------------:|
113 | | 585, 460 | 320, 0 |
114 | | 203, 720 | 320, 720 |
115 | | 1127, 720 | 960, 720 |
116 | | 695, 460 | 960, 0 |
117 |
118 | I verified that my perspective transform was working as expected by drawing the `src` and `dst` points onto a test image and its warped counterpart to verify that the lines appear parallel in the warped image.
119 |
120 | [image4]: ./output_images/for_readme/road_rectangale.png
121 | [image5]: ./output_images/for_readme/road_rectangale_warped.png
122 |
123 | Undistorted image with `src` points | Warped image with `dst` points
124 | :-------------------------:|:-------------------------:
125 | ![alt text][image4] | ![alt text][image5]
126 |
127 |
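The `warp()` function itself is a thin wrapper around OpenCV; it also returns the inverse matrix `Minv`, which is used later to project results back onto the road:

```python
def warp(img, src, dst, img_size):
    M = cv2.getPerspectiveTransform(src, dst)     # forward transform
    Minv = cv2.getPerspectiveTransform(dst, src)  # inverse, for unwarping later
    warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
    return warped, M, Minv
```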
128 | #### 4.1 Identify lane-line pixels and fit their positions with a polynomial
129 | To find lane pixels, a function called `find_lane_pixels()` is defined.
130 | First, the histogram of the bottom half of the image along the vertical axis is computed using `np.sum`.
131 |
132 | ![alt text](./output_images/for_readme/histogram.png)
133 |
134 | Then the peaks in the left and right halves of the histogram are taken as the initial estimates of the left and right lines, respectively.
135 | Then, the number of sliding windows `nwindows` and horizontal margin `margin` and the minimum number of pixels `minpix` are specified.
136 |
137 | Then to recognize the left and right lines pixel positions, I defined a `for` loop.
138 | To optimize the search process, at every iteration, the horizontal position of the center of the left and right windows is passed to the next iteration to find the boundaries of the next window.
139 | I start processing the bottom windows.
140 | The vertices of each left and right window are computed. The indices of nonzero pixels in x and y directions within the windows are identified.
141 | To visualize this step, the left and right rectangles are plotted on the image using `cv2.rectangle` by specifying two opposite vertices of a rectangle.
142 | I append indices to the main lists of indices, using `np.append`.
143 | If the number of pixels found in a window exceeds `minpix`, I recenter the next window on their mean position.
144 | I then move on to the window directly above, and repeat until all `nwindows` windows are processed.
145 |
146 | After the loop ends, I concatenate the arrays of indices (previously was a list of lists of pixels), using `np.concatenate`.
147 | Finally, I extract the left and right line pixel positions as the output of `find_lane_pixels()` function.
148 |
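Condensed, the loop for the left line looks like this (the right line is handled symmetrically; `binary_warped` is the warped binary image from step 3):

```python
nwindows, margin, minpix = 9, 100, 100   # hyperparameters as in the code

# Base position from the histogram of the bottom half
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:, :], axis=0)
midpoint = histogram.shape[0] // 2
leftx_current = np.argmax(histogram[:midpoint])

nonzeroy, nonzerox = binary_warped.nonzero()
window_height = binary_warped.shape[0] // nwindows
left_lane_inds = []
for window in range(nwindows):
    win_y_low = binary_warped.shape[0] - (window + 1) * window_height
    win_y_high = binary_warped.shape[0] - window * window_height
    good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
                      (nonzerox >= leftx_current - margin) &
                      (nonzerox < leftx_current + margin)).nonzero()[0]
    left_lane_inds.append(good_left_inds)
    if len(good_left_inds) > minpix:     # recenter the next window
        leftx_current = int(np.mean(nonzerox[good_left_inds]))

left_lane_inds = np.concatenate(left_lane_inds)
leftx, lefty = nonzerox[left_lane_inds], nonzeroy[left_lane_inds]
```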
149 | The next step is to fit a 2nd order polynomial using `fit = np.polyfit` to the output of the previous function `find_lane_pixels`.
150 | To do this, I defined a function called `fit_polynomial()`.
151 | To draw polynomials on the image, first I generate x and y values for plotting, using `np.linspace`.
152 | Then I used `fit[0]*ploty**2 + fit[1]*ploty + fit[2]` to get all points on the left and right lines, separately.
153 | To plot them on the image, I use `plt.plot`. Also, I visualize the whole left and right windows on the images.
154 |
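In short, the fit and its evaluation (in the code, `ploty` actually starts at the topmost detected pixel rather than 0, to avoid extrapolating where there is no data):

```python
left_fit = np.polyfit(lefty, leftx, 2)   # x = A*y**2 + B*y + C
ploty = np.linspace(0, binary_warped.shape[0] - 1, binary_warped.shape[0])
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
```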
155 | The output of the last function is the following figure:
156 |
157 |
158 | [image410]: ./output_images/for_readme/binary_warped_window_pixel_line.png
159 | [image411]: ./output_images/for_readme/road_window.png
160 |
161 |
162 | Binary image | Road image
163 | :-------------------------:|:-------------------------:
164 | ![alt text][image410] | ![alt text][image411]
165 |
166 |
167 | #### 4.2 Detect lane pixels around the detected line. (To optimize: Only for videos after analyzing the first image)
168 |
169 | For analyzing videos, we can use the lane-line information detected in the previous frame to speed up the pipeline.
170 | To do this, I have defined a function called `search_around_poly`.
171 | The inputs are the polynomial coefficients from the previous frame and a margin that restricts the search area around the line
172 | (why? because the lane lines do not usually jump!).
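The core of `search_around_poly` is a single vectorized mask; a pixel is kept if it lies within `margin` of the previous fit evaluated at that pixel's y value (shown here for the left line):

```python
nonzeroy, nonzerox = binary_warped.nonzero()
# x position of the previous fit at each nonzero pixel's y value
prev_x = left_fit[0]*nonzeroy**2 + left_fit[1]*nonzeroy + left_fit[2]
keep = (nonzerox > prev_x - margin) & (nonzerox < prev_x + margin)
leftx, lefty = nonzerox[keep], nonzeroy[keep]
```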
173 | The output for this section is as follows:
174 |
175 |
176 | #### 5. Calculate the radius of curvature of the lane and the position of the vehicle with respect to the center.
177 |
178 | I have defined a function called `measure_curvature_real` to measure the radius of curvature in meters.
179 | The input to the function is the output of the `fit_polynomial()` function, explained in the previous section. The formula is given below:
180 |
181 | ![alt text](./output_images/for_readme/R_curve_formula.png)
182 |
183 | To calculate the position of the car with respect to the center of the lane, I have assumed that the camera is placed in the middle of the car.
184 | Then the position of the middle of the lane is calculated as the mean value of the detected left and right lines on the bottom of the image.
185 | The center of the car (i.e., the camera) is taken to be the middle of the image, computed from the image width `image.shape[1]`.
186 | Then the off-center pixel is the distance between these two numbers, which is then converted to meters.
187 | These two numbers are plotted on the images using `cv2.putText`.
188 |
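A sketch of both measurements, with the pixel-to-meter factors used in the code (30/720 m per pixel in y, 3.7/700 m per pixel in x):

```python
ym_per_pix, xm_per_pix = 30/720, 3.7/700
y_eval = np.max(ploty)                           # bottom of the image
A, B = left_fit[0], left_fit[1]
left_curverad = (1 + (2*A*y_eval*ym_per_pix + B)**2)**1.5 / abs(2*A)

lane_mid = (left_fitx[-1] + right_fitx[-1]) / 2  # lane center at the bottom row
car_off_center = (lane_mid - img_size[0]/2) * xm_per_pix  # > 0: car is left of center
```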
189 | #### 6. Provide an example image of your result plotted back down onto the road such that the lane area is identified clearly.
190 |
191 | I implemented this step in the function `visualize_lane()` in `Line_detection_advanced.py`.
192 | Here is an example of my result on a test image:
193 |
194 | ![alt text](./output_images/for_readme/road_lane.png)
195 |
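Internally, `visualize_lane()` fills the polygon between the two fitted lines on a blank warped canvas, unwarps it with `Minv`, and blends it over the undistorted frame:

```python
# Blank single-channel canvas in warped space, stacked to 3 channels
warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))

# Polygon bounded by the left fit (top to bottom) and right fit (bottom to top)
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
cv2.fillPoly(color_warp, np.int_([np.hstack((pts_left, pts_right))]), (0, 255, 0))

# Back to camera perspective, then overlay
newwarp = cv2.warpPerspective(color_warp, Minv, img_size, flags=cv2.INTER_LINEAR)
result = cv2.addWeighted(undist_road, 1, newwarp, 0.3, 0)
```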
196 | ---
197 |
198 | ### Pipeline (video)
199 |
200 | #### 1. Provide a link to your final video output. Your pipeline should perform reasonably well on the entire project video (wobbly lines are ok but no catastrophic failures that would cause the car to drive off the road!).
201 |
202 | Here's a [link to my video result](./project_video.mp4)
203 |
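The video is produced by mapping the single-image pipeline over every frame with moviepy, as at the bottom of `Line_detection_advanced.py`:

```python
from moviepy.editor import VideoFileClip

clip1 = VideoFileClip("project_video.mp4")
clip = clip1.fl_image(Lane_Finding_Pipeline_Image_Advanced)  # expects color images
clip.write_videofile("project_video_output.mp4", audio=False)
```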
204 | ---
205 |
206 | ### Discussion
207 |
208 | #### 1. Briefly discuss any problems/issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?
209 |
210 | Here I'll talk about the approach I took, what techniques I used, what worked and why, where the pipeline might fail and how I might improve it if I were going to pursue this project further.
211 |
212 | This is a project from the Udacity Self-Driving Car course (https://github.com/udacity/CarND-Advanced-Lane-Lines).
213 |
--------------------------------------------------------------------------------
/camera_cal/calibration10.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration10.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration11.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration11.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration12.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration12.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration13.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration13.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration14.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration14.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration15.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration15.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration16.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration16.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration17.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration17.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration18.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration18.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration19.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration19.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration2.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration20.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration20.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration3.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration4.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration5.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration6.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration7.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration7.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration8.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration8.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration9.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/calibration9.jpg
--------------------------------------------------------------------------------
/camera_cal/test_image.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/camera_cal/test_image.jpg
--------------------------------------------------------------------------------
/examples/binary_combo_example.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/examples/binary_combo_example.jpg
--------------------------------------------------------------------------------
/examples/color_fit_lines.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/examples/color_fit_lines.jpg
--------------------------------------------------------------------------------
/examples/example.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "## Advanced Lane Finding Project\n",
8 | "\n",
9 | "The goals / steps of this project are the following:\n",
10 | "\n",
11 | "* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.\n",
12 | "* Apply a distortion correction to raw images.\n",
13 | "* Use color transforms, gradients, etc., to create a thresholded binary image.\n",
14 | "* Apply a perspective transform to rectify binary image (\"birds-eye view\").\n",
15 | "* Detect lane pixels and fit to find the lane boundary.\n",
16 | "* Determine the curvature of the lane and vehicle position with respect to center.\n",
17 | "* Warp the detected lane boundaries back onto the original image.\n",
18 | "* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.\n",
19 | "\n",
20 | "---\n",
21 | "## First, I'll compute the camera calibration using chessboard images"
22 | ]
23 | },
24 | {
25 | "cell_type": "code",
26 | "execution_count": 1,
27 | "metadata": {},
28 | "outputs": [],
29 | "source": [
30 | "import numpy as np\n",
31 | "import cv2\n",
32 | "import glob\n",
33 | "import matplotlib.pyplot as plt\n",
34 | "%matplotlib qt\n",
35 | "\n",
36 | "# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)\n",
37 | "objp = np.zeros((6*9,3), np.float32)\n",
38 | "objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)\n",
39 | "\n",
40 | "# Arrays to store object points and image points from all the images.\n",
41 | "objpoints = [] # 3d points in real world space\n",
42 | "imgpoints = [] # 2d points in image plane.\n",
43 | "\n",
44 | "# Make a list of calibration images\n",
45 | "images = glob.glob('../camera_cal/calibration*.jpg')\n",
46 | "\n",
47 | "# Step through the list and search for chessboard corners\n",
48 | "for fname in images:\n",
49 | " img = cv2.imread(fname)\n",
50 | " gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n",
51 | "\n",
52 | " # Find the chessboard corners\n",
53 | " ret, corners = cv2.findChessboardCorners(gray, (9,6),None)\n",
54 | "\n",
55 | " # If found, add object points, image points\n",
56 | " if ret == True:\n",
57 | " objpoints.append(objp)\n",
58 | " imgpoints.append(corners)\n",
59 | "\n",
60 | " # Draw and display the corners\n",
61 | " img = cv2.drawChessboardCorners(img, (9,6), corners, ret)\n",
62 | " cv2.imshow('img',img)\n",
63 | " cv2.waitKey(500)\n",
64 | "\n",
65 | "cv2.destroyAllWindows()"
66 | ]
67 | },
68 | {
69 | "cell_type": "markdown",
70 | "metadata": {},
71 | "source": [
72 | "## And so on and so forth..."
73 | ]
74 | },
75 | {
76 | "cell_type": "code",
77 | "execution_count": null,
78 | "metadata": {
79 | "collapsed": true
80 | },
81 | "outputs": [],
82 | "source": []
83 | }
84 | ],
85 | "metadata": {
86 | "anaconda-cloud": {},
87 | "kernelspec": {
88 | "display_name": "Python 3",
89 | "language": "python",
90 | "name": "python3"
91 | },
92 | "language_info": {
93 | "codemirror_mode": {
94 | "name": "ipython",
95 | "version": 3
96 | },
97 | "file_extension": ".py",
98 | "mimetype": "text/x-python",
99 | "name": "python",
100 | "nbconvert_exporter": "python",
101 | "pygments_lexer": "ipython3",
102 | "version": "3.7.7"
103 | }
104 | },
105 | "nbformat": 4,
106 | "nbformat_minor": 1
107 | }
108 |
--------------------------------------------------------------------------------
/examples/example.py:
--------------------------------------------------------------------------------
1 | import cv2
2 |
3 | def warper(img, src, dst):
4 |
5 |     # Compute and apply perspective transform
6 |     img_size = (img.shape[1], img.shape[0])
7 |     M = cv2.getPerspectiveTransform(src, dst)
8 |     warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_NEAREST)  # keep same size as input image
9 |
10 |     return warped
11 |
--------------------------------------------------------------------------------
/examples/example_output.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/examples/example_output.jpg
--------------------------------------------------------------------------------
/examples/undistort_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/examples/undistort_output.png
--------------------------------------------------------------------------------
/examples/warped_straight_lines.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/examples/warped_straight_lines.jpg
--------------------------------------------------------------------------------
/output_images/Line_detection_advanced.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cv2
3 | import matplotlib.pyplot as plt
4 | import matplotlib.image as mpimg
5 | import glob
6 |
7 | def grad_thresh(img, thresh=(20,100)):
8 | # Gradient thresholds:
9 | # Grayscale image
10 | gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
11 |
12 | # Sobel x
13 | sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
14 | abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
15 | scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
16 |
17 | # Threshold x gradient
18 | sxbinary = np.zeros_like(scaled_sobel)
19 | sxbinary[(scaled_sobel >= thresh[0]) & (scaled_sobel <= thresh[1])] = 1
20 | return sxbinary
21 |
22 | def colorHSV_thresh(img, thresh=(130,255)):
23 | # Color thresholds:
24 | # Convert to HLS color space and separate the S channel
25 | hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
26 | s_channel = hls[:,:,2]
27 | # Threshold color channel
28 | s_binary = np.zeros_like(s_channel)
29 | s_binary[(s_channel >= thresh[0]) & (s_channel <= thresh[1])] = 1
30 | return s_binary
31 | def colorBGR_thresh(img, thresh=(200,255)):
32 | # Color thresholds:
33 | R_channel = img[:,:,2]
34 | # Threshold color channel
35 | R_binary = np.zeros_like(R_channel)
36 | R_binary[(R_channel >= thresh[0]) & (R_channel <= thresh[1])] = 1
37 | return R_binary
38 |
39 | def warp(img, src, dst, img_size):
40 |
41 |     # Compute the perspective transform matrix M
42 | M = cv2.getPerspectiveTransform(src, dst)
43 | # Could compute the inverse by swapping the input parameters
44 | Minv = cv2.getPerspectiveTransform(dst, src)
45 |
46 |     # Create the warped image - uses linear interpolation
47 | warped = cv2.warpPerspective(img, M, img_size,flags=cv2.INTER_LINEAR)
48 |
49 | return warped, M, Minv
50 |
51 | def find_lane_pixels(binary_warped):
52 |
53 | # Take a histogram of the bottom half of the image
54 | histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
55 | # a=plt.figure()
56 | # plt.plot(histogram)
57 | # plt.ylabel('Histogram')
58 | # plt.xlabel('X (pixels)')
59 | # plt.savefig('histogram.png')#for readme
60 | # plt.show()
61 |
62 | # Create an output image to draw on and visualize the result
63 | out_img = np.dstack((binary_warped, binary_warped, binary_warped))
64 | # Find the peak of the left and right halves of the histogram
65 | # These will be the starting point for the left and right lines
66 |     midpoint = int(histogram.shape[0]//2)
67 | leftx_base = np.argmax(histogram[:midpoint])
68 | rightx_base = np.argmax(histogram[midpoint:]) + midpoint
69 |
70 | # HYPERPARAMETERS
71 | # Choose the number of sliding windows
72 | nwindows = 9
73 | # Set the width of the windows +/- margin
74 | margin = 100
75 | # Set minimum number of pixels found to recenter window
76 |     minpix = 100  # increased to avoid being misled by shadows
77 |
78 | # Set height of windows - based on nwindows above and image shape
79 |     window_height = int(binary_warped.shape[0]//nwindows)
80 | # Identify the x and y positions of all nonzero pixels in the image
81 | nonzero = binary_warped.nonzero()
82 | nonzeroy = np.array(nonzero[0])
83 | nonzerox = np.array(nonzero[1])
84 | # Current positions to be updated later for each window in nwindows
85 | leftx_current = leftx_base
86 | rightx_current = rightx_base
87 |
88 | # Create empty lists to receive left and right lane pixel indices
89 | left_lane_inds = []
90 | right_lane_inds = []
91 |
92 | # Step through the windows one by one
93 | for window in range(nwindows):
94 | # Identify window boundaries in x and y (and right and left)
95 | win_y_low = binary_warped.shape[0] - (window+1)*window_height
96 | win_y_high = binary_warped.shape[0] - window*window_height
97 | win_xleft_low = leftx_current - margin
98 | win_xleft_high = leftx_current + margin
99 | win_xright_low = rightx_current - margin
100 | win_xright_high = rightx_current + margin
101 |
102 | # Draw the windows on the visualization image
103 | cv2.rectangle(out_img,(win_xleft_low,win_y_low),
104 | (win_xleft_high,win_y_high),(0,255,0), 2)
105 | cv2.rectangle(out_img,(win_xright_low,win_y_low),
106 | (win_xright_high,win_y_high),(0,255,0), 2)
107 |
108 | # Identify the nonzero pixels in x and y within the window #
109 | good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
110 | (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
111 | good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
112 | (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
113 |
114 | # Append these indices to the lists
115 | left_lane_inds.append(good_left_inds)
116 | right_lane_inds.append(good_right_inds)
117 |
118 | # If you found > minpix pixels, recenter next window on their mean position
119 |         if len(good_left_inds) > minpix:
120 |             leftx_current = int(np.mean(nonzerox[good_left_inds]))
121 |         if len(good_right_inds) > minpix:
122 |             rightx_current = int(np.mean(nonzerox[good_right_inds]))
123 |
124 | # Concatenate the arrays of indices (previously was a list of lists of pixels)
125 | try:
126 | left_lane_inds = np.concatenate(left_lane_inds)
127 | right_lane_inds = np.concatenate(right_lane_inds)
128 | except ValueError:
129 | # Avoids an error if the above is not implemented fully
130 | pass
131 |
132 | # Extract left and right line pixel positions
133 | leftx = nonzerox[left_lane_inds]
134 | lefty = nonzeroy[left_lane_inds]
135 | rightx = nonzerox[right_lane_inds]
136 | righty = nonzeroy[right_lane_inds]
137 |
138 | return leftx, lefty, rightx, righty, out_img, left_lane_inds, right_lane_inds, nonzeroy, nonzerox
139 |
140 | def fit_polynomial(leftx, lefty, rightx, righty, size_binary_warped_0):
141 |
142 | # Fit a second order polynomial to each using `np.polyfit`
143 | left_fit = np.polyfit(lefty, leftx, 2)
144 | right_fit = np.polyfit(righty, rightx, 2)
145 |
146 | # Generate x and y values for plotting
147 | # to avoid estimating the part of line that there is no data, do not start from 0
148 | ind_0 = max(min(lefty),min(righty)) #size_binary_warped_0 - max(min(lefty),min(righty))
149 | #print(ind_0)
150 | ploty = np.linspace(ind_0, size_binary_warped_0-1, size_binary_warped_0 )
151 |
152 | try:
153 | left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
154 | right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
155 | except TypeError:
156 |         # Avoids an error if `left_fit` and `right_fit` are still None or incorrect
157 | print('The function failed to fit a line!')
158 | left_fitx = 1*ploty**2 + 1*ploty
159 | right_fitx = 1*ploty**2 + 1*ploty
160 |
161 | return ploty, left_fitx, right_fitx, left_fit, right_fit
162 |
163 | def measure_curvature_real(ploty, left_fit_cr, right_fit_cr, ym_per_pix, xm_per_pix):
164 | '''
165 | Calculates the curvature of polynomial functions in meters.
166 | '''
167 | # Define y-value where we want radius of curvature
168 | # We'll choose the maximum y-value, corresponding to the bottom of the image
169 | y_eval = np.max(ploty)
170 |
171 | # Calculation of R_curve (radius of curvature)
172 | left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
173 | right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
174 |
175 | return left_curverad, right_curverad
176 |
177 | def measure_off_center_real(left_fit_cr_0, right_fit_cr_0, img_size_0,xm_per_pix):
178 | lane_mid = (left_fit_cr_0 + right_fit_cr_0)/2
179 | middle_img = img_size_0/2
180 | car_off_center = (lane_mid - middle_img)*xm_per_pix
181 |
182 | return car_off_center
183 |
184 | def unwarp(img, Minv, img_size):
185 |
186 |     # Create the unwarped image - uses linear interpolation
187 | unwarped = cv2.warpPerspective(img, Minv, img_size,flags=cv2.INTER_LINEAR)
188 |
189 | return unwarped
190 |
191 | def draw_line(out_img, left_fitx, right_fitx, ploty):
192 | # only for 1d polynomial:
193 | # color=[255, 255, 0]
194 | # thickness=5
195 | # cv2.line(out_img, (left_fitx[0].astype(int), ploty[0].astype(int)),\
196 | # (left_fitx[-1].astype(int), ploty[-1].astype(int)), color, thickness)
197 | # cv2.line(out_img, (right_fitx[0].astype(int), ploty[0].astype(int)),\
198 | # (right_fitx[-1].astype(int), ploty[-1].astype(int)), color, thickness)
199 | # for curved polynomials:
200 | draw_points_left = (np.asarray([left_fitx, ploty]).T).astype(np.int32) # needs to be int32 and transposed
201 | draw_points_right = (np.asarray([right_fitx, ploty]).T).astype(np.int32) # needs to be int32 and transposed
202 | cv2.polylines(out_img, [draw_points_left], False, (0, 255,0),7) # args: image, points, closed, color
203 | cv2.polylines(out_img, [draw_points_right], False, (0, 255,0),7) # args: image, points, closed, color
204 |
205 | # Plots the left and right polynomials on the lane lines
206 | # plt.plot(left_fitx, ploty, color='yellow')
207 | # plt.plot(right_fitx, ploty, color='yellow')
208 | return out_img
209 |
210 | def search_around_poly(binary_warped, left_fit, right_fit,margin):
211 |     # HYPERPARAMETER
212 |     # `margin` is the width of the band around the previous polynomial
213 |     # to search; it is passed in as an argument (100 works well here),
214 |     # so no hardcoded value is needed.
215 |
216 | # Grab activated pixels
217 | nonzero = binary_warped.nonzero()
218 | nonzeroy = np.array(nonzero[0])
219 | nonzerox = np.array(nonzero[1])
220 |
221 |     # Set the area of search based on activated x-values within the
222 |     # +/- margin of the polynomial fitted to the previous frame: a pixel
223 |     # is kept if it falls inside the band around the fitted x position
224 |     # for its own y value.
225 | left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
226 | left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
227 | left_fit[1]*nonzeroy + left_fit[2] + margin)))
228 | right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
229 | right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
230 | right_fit[1]*nonzeroy + right_fit[2] + margin)))
231 |
232 | # Again, extract left and right line pixel positions
233 | leftx = nonzerox[left_lane_inds]
234 | lefty = nonzeroy[left_lane_inds]
235 | rightx = nonzerox[right_lane_inds]
236 | righty = nonzeroy[right_lane_inds]
237 |
238 | # # Fit new polynomials
239 | # left_fitx, right_fitx, ploty = fit_poly(binary_warped.shape, leftx, lefty, rightx, righty)
240 |
241 |     return leftx, lefty, rightx, righty
242 |
243 | def visualize_detected_pixels(out_img, lefty, leftx, righty, rightx):
244 | """show detected pixels on the warped image"""
245 | # Colors in the left and right lane regions
246 | out_img[lefty, leftx] = [255, 0, 0]
247 | out_img[righty, rightx] = [0, 0, 255]
248 |
249 | return out_img
250 |
251 | def visualize_region_search_around_poly(binary_warped, left_lane_inds, right_lane_inds, left_fitx, right_fitx, margin_around_line, ploty, nonzeroy, nonzerox):
252 | """ draw region around poly on the road_box """
253 | # Create an image to draw on and an image to show the selection window
254 | out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
255 | window_img = np.zeros_like(out_img)
256 | # Color in left and right line pixels
257 | out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
258 | out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
259 |
260 | # plt.imshow(out_img)
261 | # plt.title('out_img', fontsize=10)
262 | # mpimg.imsave("out_img.png", out_img)
263 | # plt.show()
264 |
265 | # Generate a polygon to illustrate the search window area
266 | # And recast the x and y points into usable format for cv2.fillPoly()
267 | left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin_around_line, ploty]))])
268 | left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin_around_line,
269 | ploty])))])
270 | left_line_pts = np.hstack((left_line_window1, left_line_window2))
271 | right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin_around_line, ploty]))])
272 | right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin_around_line,
273 | ploty])))])
274 | right_line_pts = np.hstack((right_line_window1, right_line_window2))
275 |
276 | # Draw the lane onto the warped blank image
277 | cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0))
278 | cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0))
279 | result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)
280 |
281 | # plt.imshow(result)
282 | # plt.title('result', fontsize=10)
283 | # plt.show()
284 | # mpimg.imsave("result.png", result)
285 |
286 | # Plot the polynomial lines onto the image
287 | plt.plot(left_fitx, ploty, color='yellow')
288 | plt.plot(right_fitx, ploty, color='yellow')
289 | ## End visualization steps ##
290 |
291 | return result
292 |
293 | def visualize_window_serach(binary_warped_window_pixel, undist_road,Minv, img_size ):
294 | """ draw search windows on the road """
295 |
296 | binary_warped_window_pixel_unwraped = unwarp(binary_warped_window_pixel, Minv, img_size)
297 | # plt.imshow(black_region_unwraped)
298 | # plt.title('black_region_unwraped', fontsize=10)
299 | # mpimg.imsave("black_region_unwraped.png", black_region_unwraped)
300 | # plt.show()
301 |
302 | road_window = cv2.addWeighted(undist_road, 1., binary_warped_window_pixel_unwraped, 0.8, 0.)
303 | # plt.imshow(road_region)
304 | # plt.show()
305 | # mpimg.imsave("road_region.png", road_region)#for readme
306 |
307 | return road_window
308 |
309 | def visualize_lane(binary_warped,undist_road, ploty, left_fitx, right_fitx, Minv, img_size):
310 | """ Draw lane area on the road """
311 | # Create an image to draw the lines on
312 | warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
313 | color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
314 | # Recast the x and y points into usable format for cv2.fillPoly()
315 | pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
316 | pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
317 | pts = np.hstack((pts_left, pts_right))
318 | # Draw the lane onto the warped blank image
319 | cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
320 | # Warp the blank back to original image space using inverse perspective matrix (Minv)
321 | #newwarp = cv2.warpPerspective(color_warp, Minv, (image.shape[1], image.shape[0]))
322 | newwarp = unwarp(color_warp, Minv, img_size)
323 | # Combine the result with the original image
324 | result = cv2.addWeighted(undist_road, 1, newwarp, 0.3, 0)
325 | #plt.imshow(result)
326 | return result
327 |
328 | def visualize_lines(undist_road, src, dst, img_size,left_fitx, right_fitx, ploty ):
329 | """ Draw lane lines on the road_line """
330 | warped_road, M, Minv = warp(undist_road, src, dst, img_size)
331 | # plt.imshow(warped_road)
332 | # plt.title('warped road with rectangle', fontsize=10)
333 | # mpimg.imsave("warped_road.png", warped_road)#for readme
334 | # plt.show()
335 |
336 | # plot lines on the warped road image:
337 | black_wraped = np.zeros_like(warped_road)
338 | black_line_wraped = draw_line(black_wraped, left_fitx, right_fitx, ploty)
339 | # plt.imshow(black_line_wraped)
340 | # plt.title('black_line_wraped', fontsize=10)
341 | # mpimg.imsave("black_line_wraped.png", black_line_wraped)
342 | # plt.show()
343 |
344 | black_line_unwraped = unwarp(black_line_wraped, Minv, img_size)
345 | # plt.imshow(black_line_unwraped)
346 | # plt.title('black_line_unwraped', fontsize=10)
347 | # mpimg.imsave("black_line_unwraped.png", black_line_unwraped)
348 | # plt.show()
349 |
350 | road_line = cv2.addWeighted(undist_road, 1., black_line_unwraped, 0.8, 0.)
351 | # plt.imshow(road_line)
352 | # plt.show()
353 | # mpimg.imsave("road_line.png", road_line)#for readme
354 | return road_line
355 | def visualize_perspective_transfor(undist_road, src):
356 | # For fun: get perspective transform of the original road image
357 | # plt.plot(src[0,0],src[0,1],'.')
358 | # plt.plot(src[1,0],src[1,1],'.')
359 | # plt.plot(src[2,0],src[2,1],'.')
360 | #plt.plot(src[3,0],src[3,1],'.')
361 | #plt.plot(dst[0,0],dst[0,1],'.')
362 | #plt.plot(dst[1,0],dst[1,1],'.')
363 | #plt.plot(dst[2,0],dst[2,1],'.')
364 | #plt.plot(dst[3,0],dst[3,1],'.')
365 | road_rectangale = cv2.line(undist_road, (src[0,0],src[0,1]), (src[1,0],src[1,1]), (0, 255, 0) , 2)
366 | road_rectangale=cv2.line(road_rectangale, (src[3,0],src[3,1]), (src[2,0],src[2,1]), (0, 255, 0) , 2)
367 | road_rectangale=cv2.line(road_rectangale, (src[0,0],src[0,1]), (src[3,0],src[3,1]), (0, 255, 0) , 2)
368 | road_rectangale=cv2.line(road_rectangale, (src[1,0],src[1,1]), (src[2,0],src[2,1]), (0, 255, 0) , 2)
369 | return road_rectangale
370 |
371 | def Lane_Finding_Pipeline_Image_Advanced(image_road):
372 | """ Main pipline to detect lane lines"""
373 | # data = np.load('calib_info.npz')
374 | # mtx = data['mtx']
375 | # dist = data['dist']
376 | # print(mtx)
377 | # print(dist)
378 | mtx = np.float32([[1.15777818*10**3, 0.00000000, 6.67113857*10**2],\
379 | [0.00000000, 1.15282217*10**3, 3.86124583*10**2],\
380 | [0.0000000, 0.00000000, 1.00000000]])
381 | dist = np.float32([[-0.24688507, -0.02373155 ,-0.00109831, 0.00035107, -0.00259868]])
382 |
383 |     # Undistorting the road image:
384 | undist_road = cv2.undistort(image_road, mtx, dist, None, mtx)
385 |
386 | # f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
387 | # f.tight_layout()
388 | # ax1.imshow(image_road)
389 | # ax1.set_title('Original Image', fontsize=10)
390 | # ax2.imshow(undist_road)
391 | # ax2.set_title('Undistorted Image', fontsize=10)
392 | # plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
393 | # mpimg.imsave("road_undistorted.png", undist_road)# for readme
394 | # plt.show()
395 |
396 | # Note: img is the undistorted image
397 | img = np.copy(undist_road)
398 |
399 | sx_binary = grad_thresh(img, thresh=(10,100))#20, 100
400 | s_binary = colorHSV_thresh(img, thresh=(125,255))
401 | R_binary = colorBGR_thresh(img, thresh=(200,255))#240,255
402 |
403 | # Stack each channel to view their individual contributions in green and blue respectively
404 | # This returns a stack of the two binary images, whose components you can see as different colors
405 | # color_binary = np.dstack(( np.zeros_like(sx_binary), sx_binary, s_binary)) * 255
406 |
407 | # Combine the two binary thresholds
408 | combined_binary = np.zeros_like(sx_binary)
409 | combined_binary[(s_binary == 1) | (sx_binary == 1) | (R_binary == 1)] = 1
410 |
411 | # f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(24, 9))
412 | # f.tight_layout()
413 | # ax1.imshow(sx_binary)
414 | # ax1.set_title('grad thresh binary (sobel x)', fontsize=10)
415 | # ax2.imshow(s_binary)
416 | # ax2.set_title('color thresh binary (S from HSV)', fontsize=10)
417 | # ax3.imshow(R_binary)
418 | # ax3.set_title('color thresh binary (R from BGR)', fontsize=10)
419 | # ax4.imshow(combined_binary)
420 | # ax4.set_title('grad & color combined', fontsize=10)
421 | # plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
422 | # plt.show()
423 |
424 | # Define calibration box in source (original) and destination
425 | # (desired, warped coordinates)
426 | img_size = (img.shape[1], img.shape[0])
427 |
428 | # 4 source image points
429 | src = np.float32(
430 | [[(img_size[0] / 2) - 60, img_size[1] / 2 + 100],#top left
431 | [((img_size[0] / 6) - 10), img_size[1]],#bottomleft
432 | [(img_size[0] * 5 / 6) + 45, img_size[1]],# bottom right
433 | [(img_size[0] / 2 + 60), img_size[1] / 2 + 100]])# top right
434 |
435 | # 4 desired coordinates
436 | dst = np.float32(
437 | [[(img_size[0] / 4), 0],
438 | [(img_size[0] / 4), img_size[1]],
439 | [(img_size[0] * 3 / 4), img_size[1]],
440 | [(img_size[0] * 3 / 4), 0]])
441 |
442 | # get perspective transform of the binary image
443 | binary_warped, M, Minv = warp(combined_binary, src, dst, img_size)
444 | # plt.imshow(binary_warped)
445 | # plt.title('binary warped (original to pixel)', fontsize=10)
446 | # plt.show()
447 |
448 | #TODO: write the if condition:
449 | margin_around_line = 100
450 | # if not left_fit:
451 | # Find our lane pixels first
452 | leftx, lefty, rightx, righty, binary_warped_window,\
453 | left_lane_inds, right_lane_inds,nonzeroy, nonzerox \
454 | = find_lane_pixels(binary_warped)
455 |
456 | # plt.imshow(binary_warped_window)
457 | # plt.title('binary_warped_window', fontsize=10)
458 | # plt.show()
459 |
460 | binary_warped_window_pixel = visualize_detected_pixels(binary_warped_window, lefty, leftx, righty, rightx)
461 | # plt.imshow(binary_warped_window_pixel)
462 | # plt.title('binary_warped_window_pixel', fontsize=10)
463 | # plt.show()
464 |
465 | # Fit a polynomial
466 | ploty, left_fitx, right_fitx, left_fit, right_fit \
467 | = fit_polynomial(leftx, lefty, rightx, righty, binary_warped.shape[0])
468 |
469 | binary_warped_window_pixel_line = draw_line(binary_warped_window_pixel, left_fitx, right_fitx, ploty)
470 | # plt.imshow(binary_warped_window_pixel_line)
471 | # plt.title('binary_warped_window_pixel_line', fontsize=10)
472 | # plt.show()
473 | # else:
474 | # leftx, lefty, rightx, righty, binary_warped_pixel = search_around_poly(binary_warped, left_fit, right_fit, margin_around_line)
475 | # # plt.imshow(binary_warped_pixel)
476 | # # plt.title('binary warped pixel (search around)', fontsize=10)
477 | # # plt.show()
478 | # # Fit a polynomial
479 | # ploty, left_fitx, right_fitx, left_fit, right_fit = fit_polynomial(binary_warped_pixel, leftx, lefty, rightx, righty, binary_warped.shape[0])
480 | # #print(left_fit)
481 | # # visualize_region_search_around_poly(binary_warped, left_lane_inds, right_lane_inds, left_fitx, right_fitx, ploty):
482 | # # uuwarped_binary = unwarp(binary_warped_line, Minv, img_size)
483 | # # plt.imshow(uuwarped_binary)
484 | # # plt.title('unwarped binary', fontsize=10)
485 | # # plt.show()
486 |
487 | # Define conversions in x and y from pixels space to meters
488 | ym_per_pix = 30/720 # meters per pixel in y dimension
489 | xm_per_pix = 3.7/700 # meters per pixel in x dimension
490 |
491 |     # calculate the curve radius in meters (pass the fit coefficients, not the fitted x values)
492 |     left_curverad, right_curverad = measure_curvature_real(ploty, left_fit, right_fit, ym_per_pix, xm_per_pix)
493 | #print(left_curverad, 'm', right_curverad, 'm')
494 |     # calculate the average curvature radius
495 | R_curve = (left_curverad + right_curverad)/2
496 |
497 |     # calculate car offset from the center of the lane, using the bottom row of the fits
498 |     car_off_center = measure_off_center_real(left_fitx[-1], right_fitx[-1], img_size[0], xm_per_pix)
499 |
500 |     text_R = '{} meters radius of curvature'.format(round(R_curve,2))
501 | if car_off_center >= 0:
502 | text_C = '{} meters left of center'.format(round(car_off_center,2))
503 | else:
504 | text_C = '{} meters right of center'.format(round(-car_off_center,2))
505 |
506 | # Using cv2.putText() method
507 | # cv2.putText(undist_road, text_C, (50, 50), cv2.FONT_HERSHEY_SIMPLEX,
508 | # 1, (255, 0, 0), 2, cv2.LINE_AA)
509 | # cv2.putText(undist_road, text_R, (50, 100), cv2.FONT_HERSHEY_SIMPLEX,
510 | # 1, (255, 0, 0), 2, cv2.LINE_AA)
511 |
512 | # road_window = visualize_window_serach(binary_warped_window_pixel_line, undist_road,Minv, img_size )
513 | # road_lines = visualize_lines(undist_road, src, dst, img_size,left_fitx, right_fitx, ploty )
514 |
515 | road_lane = visualize_lane(binary_warped,undist_road, ploty, left_fitx, right_fitx, Minv, img_size)
516 |
517 |     ## VISUALIZE for readme:
518 | # undist_road_temp = np.copy(undist_road)
519 | # road_rectangale = visualize_perspective_transfor(undist_road_temp, src)
520 | # plt.imshow(road_rectangale)
521 | # plt.title('road with rectangle', fontsize=10)
522 | # mpimg.imsave("road_rectangale.png", road_rectangale)#for readme
523 | # plt.show()
524 | # road_rectangale_warped, M, Minv = warp(road_rectangale, src, dst, img_size)
525 | # plt.imshow(road_rectangale_warped)
526 | # plt.title('road_rectangale_warped', fontsize=10)
527 | # mpimg.imsave("road_rectangale_warped.png", road_rectangale_warped)#for readme
528 | # plt.show()
529 | # mpimg.imsave("road_undistorted.png", undist_road)
530 | # mpimg.imsave("sx_binary.png", sx_binary)
531 | # mpimg.imsave("s_binary.png", s_binary)
532 | # mpimg.imsave("R_binary.png", R_binary)
533 | # mpimg.imsave("cmbined_binary.png", combined_binary)
534 | # mpimg.imsave("binary_warped_window_pixel.png", binary_warped_window_pixel)
535 | # mpimg.imsave("binary_warped_window_pixel_line.png", binary_warped_window_pixel_line)# for readme
536 | # mpimg.imsave("road_window.png", road_window)
537 |
538 | return road_lane
539 |
540 | """
541 | cv2.destroyAllWindows()
542 | # Make a list of calibration images
543 | # images = glob.glob('../test_images/straight_lines*.jpg')
544 | images = glob.glob('../test_images/test*.jpg')
545 | # Step through the list of test images
546 | ind_for = 0
547 | for fname in images:
548 | #print(fname)
549 | image = cv2.imread(fname)
550 |
551 | #printing out some stats and plotting
552 | print('This image is:', type(image), 'with dimensions:', image.shape)
553 |
554 | result = Lane_Finding_Pipeline_Image_Advanced(image)
555 | plt.imshow(result)
556 | plt.show()
557 | #mpimg.imsave('{:03d}.png'.format(fname), result)#for readme
558 | mpimg.imsave("frame" + str(ind_for) + ".png", result)
559 | #"file_" + str(i) + ".dat", 'w', encoding='utf-8'
560 | cv2.waitKey(500) # waits until a key is pressed
561 | ind_for = ind_for +1
562 |
563 | cv2.destroyAllWindows()# destroys the window showing image
564 |
565 |
566 | image_path = '../test_images/straight_lines1.jpg'
567 | image = cv2.imread(image_path)
568 | road_lane = Lane_Finding_Pipeline_Image_Advanced(image)
569 | # plt.imshow(road_lane)
570 | # plt.show()
571 | # mpimg.imsave("road_lane.png", road_lane)
572 | """
573 |
574 | # Import everything needed to edit/save/watch video clips
575 | from moviepy.editor import VideoFileClip
576 | from IPython.display import HTML
577 |
578 | # video_out = 'project_video_output.mp4'
579 | # video_out = 'challenge_video_output.mp4'
580 | video_out = 'harder_challenge_video_output.mp4' # keep exactly one output name uncommented
581 |
582 | ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
583 | ## To do so add .subclip(start_second,end_second) to the end of the line below
584 | ## Where start_second and end_second are integer values representing the start and end of the subclip
585 | ## You may also uncomment the following line for a subclip of the first 5 seconds
586 | ##clip1 = VideoFileClip("../project_video.mp4").subclip(0,5)
587 |
588 | # clip1 = VideoFileClip("../project_video.mp4")#.subclip(0,2)
589 | # clip1 = VideoFileClip("../challenge_video.mp4")#.subclip(0,2)
590 | clip1 = VideoFileClip("../harder_challenge_video.mp4")#.subclip(0,2) # must match video_out above
591 | clip = clip1.fl_image(Lane_Finding_Pipeline_Image_Advanced) # NOTE: this function expects color images!
592 | clip.write_videofile(video_out, audio=False)
593 |
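594 | # --- Editor's sketch (hypothetical; not part of the original script) ---
595 | # The bare reassignments above make it easy to write a clip to an output
596 | # name that does not match its input. One alternative is to keep each
597 | # input/output pair together and select by key; the dict and variable
598 | # names below are illustrative assumptions, not the project's API.
599 | # videos = {
600 | #     'project': ('../project_video.mp4', 'project_video_output.mp4'),
601 | #     'challenge': ('../challenge_video.mp4', 'challenge_video_output.mp4'),
602 | #     'harder': ('../harder_challenge_video.mp4', 'harder_challenge_video_output.mp4'),
603 | # }
604 | # in_path, out_path = videos['harder']
605 | # clip = VideoFileClip(in_path).fl_image(Lane_Finding_Pipeline_Image_Advanced)
606 | # clip.write_videofile(out_path, audio=False)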
--------------------------------------------------------------------------------
/output_images/calib_info.npz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/calib_info.npz
--------------------------------------------------------------------------------
/output_images/camera_calibration.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cv2
3 | import glob
4 | import matplotlib.pyplot as plt
5 | import matplotlib.image as mpimg
6 | #%matplotlib qt
7 |
8 | # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ..., (8,5,0)
9 | objp = np.zeros((6*9,3), np.float32)
10 | objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
11 |
12 | # Arrays to store object points and image points from all the images.
13 | objpoints = [] # 3d points in real world space
14 | imgpoints = [] # 2d points in image plane.
15 |
16 | # Make a list of calibration images
17 | images = glob.glob('camera_cal/calibration*.jpg')
18 |
19 | # Step through the list and search for chessboard corners
20 | for fname in images:
21 | #print(fname)
22 | img = cv2.imread(fname)
23 | gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
24 |
25 | # Find the chessboard corners
26 | ret, corners = cv2.findChessboardCorners(gray, (9,6), None)
27 |
28 | # If found, add object points, image points
29 | if ret:
30 | objpoints.append(objp)
31 | imgpoints.append(corners)
32 |
33 | # Draw and display the corners
34 | #img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
35 | #cv2.imshow('img',img)
36 | #cv2.waitKey(500) # waits until a key is pressed
37 |
38 | # cv2.destroyAllWindows()# destroys the window showing image
39 |
40 | # Camera calibration, given object points, image points, and the image shape (taken from the last grayscale image in the loop):
41 | ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
42 | np.savez('output_images/calib_info.npz', mtx=mtx, dist=dist)
43 |
44 | # Undistort a chessboard:
45 | # Read in the distorted test image
46 | img = cv2.imread('camera_cal/test_image.jpg')
47 | # Undistorting the test image:
48 | undist = cv2.undistort(img, mtx, dist, None, mtx)
49 | f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
50 | f.tight_layout()
51 | ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) # cv2.imread loads BGR; convert for matplotlib
52 | ax1.set_title('Original Image', fontsize=50)
53 | ax2.imshow(cv2.cvtColor(undist, cv2.COLOR_BGR2RGB))
54 | ax2.set_title('Undistorted Image', fontsize=50)
55 | plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
56 | plt.show()
57 | mpimg.imsave("output_images/chessboard_undistorted.png", cv2.cvtColor(undist, cv2.COLOR_BGR2RGB))
58 |
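59 | # --- Editor's sketch (not part of the original script) ---
60 | # Round-trip check: reload the calibration saved above and apply it to one
61 | # of the calibration images; paths assume the repo layout shown earlier.
62 | calib = np.load('output_images/calib_info.npz')
63 | mtx_loaded, dist_loaded = calib['mtx'], calib['dist']
64 | check = cv2.undistort(cv2.imread('camera_cal/calibration2.jpg'), mtx_loaded, dist_loaded, None, mtx_loaded)
65 | print('Reloaded calibration; undistorted image shape:', check.shape)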
--------------------------------------------------------------------------------
/output_images/for_readme/R_binary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/R_binary.png
--------------------------------------------------------------------------------
/output_images/for_readme/R_curve_formula.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/R_curve_formula.png
--------------------------------------------------------------------------------
/output_images/for_readme/binary_warped_window_pixel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/binary_warped_window_pixel.png
--------------------------------------------------------------------------------
/output_images/for_readme/binary_warped_window_pixel_line.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/binary_warped_window_pixel_line.png
--------------------------------------------------------------------------------
/output_images/for_readme/chessboard_undistorted.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/chessboard_undistorted.png
--------------------------------------------------------------------------------
/output_images/for_readme/cmbined_binary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/cmbined_binary.png
--------------------------------------------------------------------------------
/output_images/for_readme/histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/histogram.png
--------------------------------------------------------------------------------
/output_images/for_readme/road_lane.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/road_lane.png
--------------------------------------------------------------------------------
/output_images/for_readme/road_rectangale.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/road_rectangale.png
--------------------------------------------------------------------------------
/output_images/for_readme/road_rectangale_warped.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/road_rectangale_warped.png
--------------------------------------------------------------------------------
/output_images/for_readme/road_undistorted.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/road_undistorted.png
--------------------------------------------------------------------------------
/output_images/for_readme/road_window.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/road_window.png
--------------------------------------------------------------------------------
/output_images/for_readme/s_binary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/s_binary.png
--------------------------------------------------------------------------------
/output_images/for_readme/sx_binary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/output_images/for_readme/sx_binary.png
--------------------------------------------------------------------------------
/output_images/save_output_here.txt:
--------------------------------------------------------------------------------
1 | Please save your output images to this folder and include a description in your README of what each image shows.
--------------------------------------------------------------------------------
/test_images/straight_lines1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/straight_lines1.jpg
--------------------------------------------------------------------------------
/test_images/straight_lines2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/straight_lines2.jpg
--------------------------------------------------------------------------------
/test_images/test1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/test1.jpg
--------------------------------------------------------------------------------
/test_images/test2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/test2.jpg
--------------------------------------------------------------------------------
/test_images/test3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/test3.jpg
--------------------------------------------------------------------------------
/test_images/test4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/test4.jpg
--------------------------------------------------------------------------------
/test_images/test5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/test5.jpg
--------------------------------------------------------------------------------
/test_images/test6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/test6.jpg
--------------------------------------------------------------------------------
/test_images/test7.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/test7.jpg
--------------------------------------------------------------------------------
/test_images/test8.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/test8.jpg
--------------------------------------------------------------------------------
/test_images/test9.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbshbn/Lane-Line-Detection-using-color-transform-and-gradient/fc25453689a5568f59c8a4473f50a45c38de2b62/test_images/test9.jpg
--------------------------------------------------------------------------------