├── .DS_Store ├── .gitignore ├── README.md ├── camera_cal ├── calibration1.jpg ├── calibration10.jpg ├── calibration11.jpg ├── calibration12.jpg ├── calibration13.jpg ├── calibration14.jpg ├── calibration15.jpg ├── calibration16.jpg ├── calibration17.jpg ├── calibration18.jpg ├── calibration19.jpg ├── calibration2.jpg ├── calibration20.jpg ├── calibration3.jpg ├── calibration4.jpg ├── calibration5.jpg ├── calibration6.jpg ├── calibration7.jpg ├── calibration8.jpg └── calibration9.jpg ├── data ├── imgpoints.npy ├── objpoints.npy └── shape.npy ├── example_writeup.pdf ├── examples ├── .ipynb_checkpoints │ └── example-checkpoint.ipynb ├── binary_combo_example.jpg ├── color_fit_lines.jpg ├── example.ipynb ├── example.py ├── example_output.jpg ├── undistort_output.png └── warped_straight_lines.jpg ├── main.py ├── output_images ├── final.jpg ├── polylines.jpg ├── thresholded.jpg ├── undistorted.jpg └── warped.jpg ├── polydrawer.py ├── polyfitter.py ├── project_video.mp4 ├── project_video_done.mp4 ├── test_images ├── straight_lines1.jpg ├── straight_lines2.jpg ├── test1.jpg ├── test2.jpg ├── test3.jpg ├── test4.jpg ├── test5.jpg └── test6.jpg ├── thresholder.py ├── undistorter.py ├── warper.py ├── writeup.md └── writeup_template.md /.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/.DS_Store -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .idea 2 | __pycache__ 3 | *~ 4 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Advanced Lane Finding 2 | [![Udacity - Self-Driving Car NanoDegree](https://s3.amazonaws.com/udacity-sdc/github/shield-carnd.svg)](http://www.udacity.com/drive) 3 | 4 | 5 | In this project, your goal is to write a software pipeline to identify the lane boundaries in a video, but the main output or product we want you to create is a detailed writeup of the project. Check out the [writeup template](https://github.com/udacity/CarND-Advanced-Lane-Lines/blob/master/writeup_template.md) for this project and use it as a starting point for creating your own writeup. 6 | 7 | Creating a great writeup: 8 | --- 9 | A great writeup should include the rubric points as well as your description of how you addressed each point. You should include a detailed description of the code used in each step (with line-number references and code snippets where necessary), and links to other supporting documents or external references. You should include images in your writeup to demonstrate how your code works with examples. 10 | 11 | All that said, please be concise! We're not looking for you to write a book here, just a brief description of how you passed each rubric point, and references to the relevant code :). 12 | 13 | You're not required to use markdown for your writeup. If you use another method please just submit a pdf of your writeup. 14 | 15 | The Project 16 | --- 17 | 18 | The goals / steps of this project are the following: 19 | 20 | * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images. 21 | * Apply a distortion correction to raw images. 22 | * Use color transforms, gradients, etc., to create a thresholded binary image. 
23 | * Apply a perspective transform to rectify binary image ("birds-eye view").
24 | * Detect lane pixels and fit to find the lane boundary.
25 | * Determine the curvature of the lane and vehicle position with respect to center.
26 | * Warp the detected lane boundaries back onto the original image.
27 | * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
28 | 
29 | The images for camera calibration are stored in the folder called `camera_cal`. The images in `test_images` are for testing your pipeline on single frames. To help the reviewer examine your work, please save examples of the output from each stage of your pipeline in the folder called `output_images`, and include a description in your writeup for the project of what each image shows. The video called `project_video.mp4` is the video your pipeline should work well on.
30 | 
31 | The `challenge_video.mp4` video is an extra (and optional) challenge for you if you want to test your pipeline under somewhat trickier conditions. The `harder_challenge_video.mp4` video is another optional challenge and is brutal!
32 | 
33 | If you're feeling ambitious (again, totally optional though), don't stop there! We encourage you to go out and take video of your own, calibrate your camera and show us how you would implement this project from scratch!
34 | 
--------------------------------------------------------------------------------
/camera_cal/calibration1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration1.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration10.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration10.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration11.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration11.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration12.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration12.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration13.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration13.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration14.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration14.jpg
--------------------------------------------------------------------------------
/camera_cal/calibration15.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration15.jpg -------------------------------------------------------------------------------- /camera_cal/calibration16.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration16.jpg -------------------------------------------------------------------------------- /camera_cal/calibration17.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration17.jpg -------------------------------------------------------------------------------- /camera_cal/calibration18.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration18.jpg -------------------------------------------------------------------------------- /camera_cal/calibration19.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration19.jpg -------------------------------------------------------------------------------- /camera_cal/calibration2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration2.jpg -------------------------------------------------------------------------------- /camera_cal/calibration20.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration20.jpg -------------------------------------------------------------------------------- /camera_cal/calibration3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration3.jpg -------------------------------------------------------------------------------- /camera_cal/calibration4.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration4.jpg -------------------------------------------------------------------------------- /camera_cal/calibration5.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration5.jpg -------------------------------------------------------------------------------- /camera_cal/calibration6.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration6.jpg -------------------------------------------------------------------------------- /camera_cal/calibration7.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration7.jpg -------------------------------------------------------------------------------- /camera_cal/calibration8.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration8.jpg -------------------------------------------------------------------------------- /camera_cal/calibration9.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/camera_cal/calibration9.jpg -------------------------------------------------------------------------------- /data/imgpoints.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/data/imgpoints.npy -------------------------------------------------------------------------------- /data/objpoints.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/data/objpoints.npy -------------------------------------------------------------------------------- /data/shape.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/data/shape.npy -------------------------------------------------------------------------------- /example_writeup.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/example_writeup.pdf -------------------------------------------------------------------------------- /examples/.ipynb_checkpoints/example-checkpoint.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [], 3 | "metadata": {}, 4 | "nbformat": 4, 5 | "nbformat_minor": 1 6 | } 7 | -------------------------------------------------------------------------------- /examples/binary_combo_example.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/examples/binary_combo_example.jpg -------------------------------------------------------------------------------- /examples/color_fit_lines.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/examples/color_fit_lines.jpg -------------------------------------------------------------------------------- /examples/example.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## Advanced Lane Finding Project\n", 8 | "\n", 9 | "The goals / steps of this project are the following:\n", 10 | "\n", 11 | "* Compute the camera calibration matrix and distortion coefficients given a set 
of chessboard images.\n", 12 | "* Apply a distortion correction to raw images.\n", 13 | "* Use color transforms, gradients, etc., to create a thresholded binary image.\n", 14 | "* Apply a perspective transform to rectify binary image (\"birds-eye view\").\n", 15 | "* Detect lane pixels and fit to find the lane boundary.\n", 16 | "* Determine the curvature of the lane and vehicle position with respect to center.\n", 17 | "* Warp the detected lane boundaries back onto the original image.\n", 18 | "* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.\n", 19 | "\n", 20 | "---\n", 21 | "## First, I'll compute the camera calibration using chessboard images" 22 | ] 23 | }, 24 | { 25 | "cell_type": "code", 26 | "execution_count": 3, 27 | "metadata": { 28 | "collapsed": true 29 | }, 30 | "outputs": [], 31 | "source": [ 32 | "import numpy as np\n", 33 | "import cv2\n", 34 | "import glob\n", 35 | "import matplotlib.pyplot as plt\n", 36 | "%matplotlib qt\n", 37 | "\n", 38 | "# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)\n", 39 | "objp = np.zeros((6*9,3), np.float32)\n", 40 | "objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)\n", 41 | "\n", 42 | "# Arrays to store object points and image points from all the images.\n", 43 | "objpoints = [] # 3d points in real world space\n", 44 | "imgpoints = [] # 2d points in image plane.\n", 45 | "\n", 46 | "# Make a list of calibration images\n", 47 | "images = glob.glob('../camera_cal/calibration*.jpg')\n", 48 | "\n", 49 | "# Step through the list and search for chessboard corners\n", 50 | "for fname in images:\n", 51 | " img = cv2.imread(fname)\n", 52 | " gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n", 53 | "\n", 54 | " # Find the chessboard corners\n", 55 | " ret, corners = cv2.findChessboardCorners(gray, (9,6),None)\n", 56 | "\n", 57 | " # If found, add object points, image points\n", 58 | " if ret == True:\n", 59 | " objpoints.append(objp)\n", 60 | " imgpoints.append(corners)\n", 61 | "\n", 62 | " # Draw and display the corners\n", 63 | " img = cv2.drawChessboardCorners(img, (9,6), corners, ret)\n", 64 | " cv2.imshow('img',img)\n", 65 | " cv2.waitKey(500)\n", 66 | "\n", 67 | "cv2.destroyAllWindows()" 68 | ] 69 | }, 70 | { 71 | "cell_type": "markdown", 72 | "metadata": {}, 73 | "source": [ 74 | "## And so on and so forth..." 
75 |    ]
76 |   },
77 |   {
78 |    "cell_type": "code",
79 |    "execution_count": null,
80 |    "metadata": {
81 |     "collapsed": true
82 |    },
83 |    "outputs": [],
84 |    "source": []
85 |   }
86 |  ],
87 |  "metadata": {
88 |   "anaconda-cloud": {},
89 |   "kernelspec": {
90 |    "display_name": "Python [conda root]",
91 |    "language": "python",
92 |    "name": "conda-root-py"
93 |   },
94 |   "language_info": {
95 |    "codemirror_mode": {
96 |     "name": "ipython",
97 |     "version": 3
98 |    },
99 |    "file_extension": ".py",
100 |    "mimetype": "text/x-python",
101 |    "name": "python",
102 |    "nbconvert_exporter": "python",
103 |    "pygments_lexer": "ipython3",
104 |    "version": "3.5.2"
105 |   }
106 |  },
107 |  "nbformat": 4,
108 |  "nbformat_minor": 1
109 | }
110 | 
--------------------------------------------------------------------------------
/examples/example.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | 
3 | 
4 | def warper(img, src, dst):
5 | 
6 |     # Compute and apply perspective transform
7 |     img_size = (img.shape[1], img.shape[0])
8 |     M = cv2.getPerspectiveTransform(src, dst)
9 |     warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_NEAREST)  # keep same size as input image
10 | 
11 |     return warped
12 | 
--------------------------------------------------------------------------------
/examples/example_output.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/examples/example_output.jpg
--------------------------------------------------------------------------------
/examples/undistort_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/examples/undistort_output.png
--------------------------------------------------------------------------------
/examples/warped_straight_lines.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/examples/warped_straight_lines.jpg
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | from matplotlib import pyplot as plt
3 | from moviepy.video.io.VideoFileClip import VideoFileClip
4 | from scipy import misc
5 | 
6 | from polydrawer import Polydrawer
7 | from polyfitter import Polyfitter
8 | from thresholder import Thresholder
9 | from undistorter import Undistorter
10 | from warper import Warper
11 | 
12 | undistorter = Undistorter()
13 | thresholder = Thresholder()
14 | warper = Warper()
15 | polyfitter = Polyfitter()
16 | polydrawer = Polydrawer()
17 | 
18 | 
19 | def main():
20 |     # video = 'harder_challenge_video'
21 |     # video = 'challenge_video'
22 |     video = 'project_video'
23 |     white_output = '{}_done.mp4'.format(video)
24 |     clip1 = VideoFileClip('{}.mp4'.format(video))
25 |     white_clip = clip1.fl_image(process_image)  # NOTE: this function expects color images!!
26 |     white_clip.write_videofile(white_output, audio=False)
27 | 
28 | 
29 | def process_image(base):
30 |     # fig = plt.figure(figsize=(10, 8))  # only needed with the show_image() debug calls below
31 |     # i = 1
32 | 
33 |     undistorted = undistorter.undistort(base)
34 |     misc.imsave('output_images/undistorted.jpg', undistorted)
35 |     # i = show_image(fig, i, undistorted, 'Undistorted', 'gray')
36 | 
37 |     img = thresholder.threshold(undistorted)
38 |     misc.imsave('output_images/thresholded.jpg', img)
39 |     # i = show_image(fig, i, img, 'Thresholded', 'gray')
40 | 
41 |     img = warper.warp(img)
42 |     misc.imsave('output_images/warped.jpg', img)
43 |     # i = show_image(fig, i, img, 'Warped', 'gray')
44 | 
45 |     left_fit, right_fit = polyfitter.polyfit(img)
46 | 
47 |     img = polydrawer.draw(undistorted, left_fit, right_fit, warper.Minv)
48 |     misc.imsave('output_images/final.jpg', img)
49 |     # show_image(fig, i, img, 'Final')
50 | 
51 |     # plt.show()
52 |     # plt.get_current_fig_manager().frame.Maximize(True)
53 | 
54 |     lane_curve, car_pos = polyfitter.measure_curvature(img)
55 | 
56 |     if car_pos > 0:
57 |         car_pos_text = '{}m right of center'.format(car_pos)
58 |     else:
59 |         car_pos_text = '{}m left of center'.format(abs(car_pos))
60 | 
61 |     cv2.putText(img, "Lane curve: {}m".format(lane_curve.round()), (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 1,
62 |                 color=(255, 255, 255), thickness=2)
63 |     cv2.putText(img, "Car is {}".format(car_pos_text), (10, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, color=(255, 255, 255),
64 |                 thickness=2)
65 | 
66 |     # show_image(fig, i, img, 'Final')
67 |     # plt.imshow(img)
68 |     # plt.show()
69 | 
70 |     return img
71 | 
72 | 
73 | def show_image(fig, i, img, title, cmap=None):
74 |     a = fig.add_subplot(2, 2, i)
75 |     plt.imshow(img, cmap)
76 |     a.set_title(title)
77 |     return i + 1
78 | 
79 | 
80 | if __name__ == '__main__':
81 |     main()
82 | 
--------------------------------------------------------------------------------
/output_images/final.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/output_images/final.jpg
--------------------------------------------------------------------------------
/output_images/polylines.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/output_images/polylines.jpg
--------------------------------------------------------------------------------
/output_images/thresholded.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/output_images/thresholded.jpg
--------------------------------------------------------------------------------
/output_images/undistorted.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/output_images/undistorted.jpg
--------------------------------------------------------------------------------
/output_images/warped.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/output_images/warped.jpg
--------------------------------------------------------------------------------
/polydrawer.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 | 
4 | 
5 | class Polydrawer:
6 |     def draw(self, img, left_fit, right_fit, Minv):
7 |         color_warp = np.zeros_like(img).astype(np.uint8)
8 | 
9 |         fity = np.linspace(0, img.shape[0] - 1, img.shape[0])
10 |         left_fitx = left_fit[0] * fity ** 2 + left_fit[1] * fity + left_fit[2]
11 |         right_fitx = right_fit[0] * fity ** 2 + right_fit[1] * fity + right_fit[2]
12 | 
13 |         # Recast the x and y points into usable format for cv2.fillPoly()
14 |         pts_left = np.array([np.transpose(np.vstack([left_fitx, fity]))])
15 |         pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, fity])))])
16 |         pts = np.hstack((pts_left, pts_right))
17 |         pts = np.array(pts, dtype=np.int32)
18 | 
19 |         cv2.fillPoly(color_warp, pts, (0, 255, 0))
20 | 
21 |         # Warp the blank back to original image space using inverse perspective matrix (Minv)
22 |         newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
23 |         # Combine the result with the original image
24 |         result = cv2.addWeighted(img, 1, newwarp, 0.3, 0)
25 | 
26 |         return result
27 | 
--------------------------------------------------------------------------------
/polyfitter.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 | 
4 | 
5 | class Polyfitter:
6 |     def __init__(self):
7 |         self.left_fit = None
8 |         self.right_fit = None
9 |         self.leftx = None
10 |         self.rightx = None
11 | 
12 |     def polyfit(self, img):
13 |         # Run the full sliding-window search on the first frame only; afterwards,
14 |         # search around the previous fits.
15 |         if self.left_fit is None:
16 |             return self.polyfit_sliding(img)
17 | 
18 |         nonzero = img.nonzero()
19 |         nonzeroy = np.array(nonzero[0])
20 |         nonzerox = np.array(nonzero[1])
21 |         margin = 100
22 |         left_lane_inds = (
23 |             (nonzerox > (self.left_fit[0] * (nonzeroy ** 2) + self.left_fit[1] * nonzeroy + self.left_fit[2] - margin)) & (
24 |                 nonzerox < (self.left_fit[0] * (nonzeroy ** 2) + self.left_fit[1] * nonzeroy + self.left_fit[2] + margin)))
25 |         right_lane_inds = (
26 |             (nonzerox > (self.right_fit[0] * (nonzeroy ** 2) + self.right_fit[1] * nonzeroy + self.right_fit[2] - margin)) & (
27 |                 nonzerox < (self.right_fit[0] * (nonzeroy ** 2) + self.right_fit[1] * nonzeroy + self.right_fit[2] + margin)))
28 | 
29 |         self.leftx = nonzerox[left_lane_inds]
30 |         lefty = nonzeroy[left_lane_inds]
31 |         self.rightx = nonzerox[right_lane_inds]
32 |         righty = nonzeroy[right_lane_inds]
33 |         self.left_fit = np.polyfit(lefty, self.leftx, 2)
34 |         self.right_fit = np.polyfit(righty, self.rightx, 2)
35 | 
36 |         return self.left_fit, self.right_fit
37 | 
38 |     def polyfit_sliding(self, img):
39 |         histogram = np.sum(img[int(img.shape[0] / 2):, :], axis=0)
40 |         out_img = np.dstack((img, img, img)) * 255
41 |         midpoint = int(histogram.shape[0] / 2)
42 |         leftx_base = np.argmax(histogram[:midpoint])
43 |         rightx_base = np.argmax(histogram[midpoint:]) + midpoint
44 | 
45 |         nwindows = 9
46 |         window_height = int(img.shape[0] / nwindows)
47 |         nonzero = img.nonzero()
48 |         nonzeroy = np.array(nonzero[0])
49 |         nonzerox = np.array(nonzero[1])
50 |         leftx_current = leftx_base
51 |         rightx_current = rightx_base
52 |         margin = 100
53 |         minpix = 50
54 |         left_lane_inds = []
55 |         right_lane_inds = []
56 | 
57 |         for window in range(nwindows):
58 |             win_y_low = img.shape[0] - (window + 1) * window_height
59 |             win_y_high = img.shape[0] - window * window_height
60 |             win_xleft_low = leftx_current - margin
61 |             win_xleft_high = leftx_current + margin
62 |             win_xright_low = rightx_current - margin
63 |             win_xright_high = rightx_current + margin
64 |             cv2.rectangle(out_img, (win_xleft_low, win_y_low), (win_xleft_high, win_y_high), (0, 255, 0), 2)
65 |             cv2.rectangle(out_img, (win_xright_low, win_y_low), (win_xright_high, win_y_high), (0, 255, 0), 2)
66 |             good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (
67 |                 nonzerox < win_xleft_high)).nonzero()[0]
68 |             good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (
69 |                 nonzerox < win_xright_high)).nonzero()[0]
70 |             left_lane_inds.append(good_left_inds)
71 |             right_lane_inds.append(good_right_inds)
72 |             if len(good_left_inds) > minpix:
73 |                 leftx_current = int(np.mean(nonzerox[good_left_inds]))
74 |             if len(good_right_inds) > minpix:
75 |                 rightx_current = int(np.mean(nonzerox[good_right_inds]))
76 | 
77 |         left_lane_inds = np.concatenate(left_lane_inds)
78 |         right_lane_inds = np.concatenate(right_lane_inds)
79 | 
80 |         self.leftx = nonzerox[left_lane_inds]
81 |         lefty = nonzeroy[left_lane_inds]
82 |         self.rightx = nonzerox[right_lane_inds]
83 |         righty = nonzeroy[right_lane_inds]
84 | 
85 |         self.left_fit = np.polyfit(lefty, self.leftx, 2)
86 |         self.right_fit = np.polyfit(righty, self.rightx, 2)
87 | 
88 |         return self.left_fit, self.right_fit
89 | 
90 |     def measure_curvature(self, img):
91 |         y_eval = img.shape[0] - 1  # evaluate curvature at the bottom of the image
92 | 
93 |         ym_per_pix = 30 / 720  # meters per pixel in y dimension
94 |         xm_per_pix = 3.7 / 700  # meters per pixel in x dimension
95 | 
96 |         # Rescale the pixel-space fits x = A*y^2 + B*y + C into meter space:
97 |         # A_m = A * xm / ym^2 and B_m = B * xm / ym
98 |         left_a = self.left_fit[0] * xm_per_pix / ym_per_pix ** 2
99 |         left_b = self.left_fit[1] * xm_per_pix / ym_per_pix
100 |         right_a = self.right_fit[0] * xm_per_pix / ym_per_pix ** 2
101 |         right_b = self.right_fit[1] * xm_per_pix / ym_per_pix
102 | 
103 |         # Radius of curvature R = (1 + (2*A*y + B)^2)^1.5 / |2*A|, in meters
104 |         y_m = y_eval * ym_per_pix
105 |         left_curverad = (1 + (2 * left_a * y_m + left_b) ** 2) ** 1.5 / np.absolute(2 * left_a)
106 |         right_curverad = (1 + (2 * right_a * y_m + right_b) ** 2) ** 1.5 / np.absolute(2 * right_a)
107 | 
108 |         ratio = left_curverad / right_curverad
109 |         if ratio < 0.66 or ratio > 1.5:
110 |             print('Warning: left/right curvature ratio {} is out of range'.format(ratio))
111 | 
112 |         # Car position: offset of the image center from the lane center, in meters
113 |         lane_leftx = self.left_fit[0] * y_eval ** 2 + self.left_fit[1] * y_eval + self.left_fit[2]
114 |         lane_rightx = self.right_fit[0] * y_eval ** 2 + self.right_fit[1] * y_eval + self.right_fit[2]
115 |         car_pos = ((img.shape[1] / 2) - ((lane_leftx + lane_rightx) / 2)) * xm_per_pix
116 | 
117 |         return (left_curverad + right_curverad) / 2, car_pos.round(2)
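118 | 
119 | 
120 | if __name__ == '__main__':
121 |     # Minimal usage sketch (assumes output_images/warped.jpg was produced by a
122 |     # previous run of main.py): fit both lane lines on a warped binary image.
123 |     warped = (cv2.imread('output_images/warped.jpg', cv2.IMREAD_GRAYSCALE) > 127).astype(np.uint8)
124 |     fitter = Polyfitter()
125 |     left_fit, right_fit = fitter.polyfit(warped)
126 |     print('Left fit: {}, right fit: {}'.format(left_fit, right_fit))
127 |     print('Curvature and car position: {}'.format(fitter.measure_curvature(warped)))
128 | 
--------------------------------------------------------------------------------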
/project_video.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/project_video.mp4 -------------------------------------------------------------------------------- /project_video_done.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/project_video_done.mp4 -------------------------------------------------------------------------------- /test_images/straight_lines1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/test_images/straight_lines1.jpg -------------------------------------------------------------------------------- /test_images/straight_lines2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/test_images/straight_lines2.jpg -------------------------------------------------------------------------------- /test_images/test1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/test_images/test1.jpg -------------------------------------------------------------------------------- /test_images/test2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/test_images/test2.jpg -------------------------------------------------------------------------------- /test_images/test3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/test_images/test3.jpg -------------------------------------------------------------------------------- /test_images/test4.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/test_images/test4.jpg -------------------------------------------------------------------------------- /test_images/test5.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/test_images/test5.jpg -------------------------------------------------------------------------------- /test_images/test6.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MehdiSv/AdvancedLineDetection/a3788f7f770332bcd5648c255f39fe3ed32b2cba/test_images/test6.jpg -------------------------------------------------------------------------------- /thresholder.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | 4 | 5 | class Thresholder: 6 | def __init__(self): 7 | self.sobel_kernel = 15 8 | 9 | self.thresh_dir_min = 0.7 10 | self.thresh_dir_max = 1.2 11 | 12 | self.thresh_mag_min = 50 13 | self.thresh_mag_max = 255 14 | 15 | def dir_thresh(self, sobelx, 
sobely):
16 |         abs_sobelx = np.abs(sobelx)
17 |         abs_sobely = np.abs(sobely)
18 |         direction = np.arctan2(abs_sobely, abs_sobelx)  # gradient direction in radians
19 |         sxbinary = np.zeros_like(direction)
20 |         sxbinary[(direction >= self.thresh_dir_min) & (direction <= self.thresh_dir_max)] = 1
21 | 
22 |         return sxbinary
23 | 
24 |     def mag_thresh(self, sobelx, sobely):
25 |         gradmag = np.sqrt(sobelx ** 2 + sobely ** 2)
26 |         scale_factor = np.max(gradmag) / 255
27 |         gradmag = (gradmag / scale_factor).astype(np.uint8)
28 |         binary_output = np.zeros_like(gradmag)
29 |         binary_output[(gradmag >= self.thresh_mag_min) & (gradmag <= self.thresh_mag_max)] = 1
30 | 
31 |         return binary_output
32 | 
33 |     def color_thresh(self, img):
34 |         img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
35 | 
36 |         yellow_min = np.array([15, 100, 120], np.uint8)
37 |         yellow_max = np.array([80, 255, 255], np.uint8)
38 |         yellow_mask = cv2.inRange(img, yellow_min, yellow_max)
39 | 
40 |         white_min = np.array([0, 0, 200], np.uint8)
41 |         white_max = np.array([255, 30, 255], np.uint8)
42 |         white_mask = cv2.inRange(img, white_min, white_max)
43 | 
44 |         binary_output = np.zeros_like(img[:, :, 0])
45 |         binary_output[((yellow_mask != 0) | (white_mask != 0))] = 1
46 | 
47 |         return binary_output
48 | 
49 |     def threshold(self, img):
50 |         sobelx = cv2.Sobel(img[:, :, 2], cv2.CV_64F, 1, 0, ksize=self.sobel_kernel)
51 |         sobely = cv2.Sobel(img[:, :, 2], cv2.CV_64F, 0, 1, ksize=self.sobel_kernel)
52 | 
53 |         direc = self.dir_thresh(sobelx, sobely)
54 |         mag = self.mag_thresh(sobelx, sobely)
55 |         color = self.color_thresh(img)
56 | 
57 |         combined = np.zeros_like(direc)
58 |         combined[((color == 1) & ((mag == 1) | (direc == 1)))] = 1
59 | 
60 |         return combined
61 | 
--------------------------------------------------------------------------------
/undistorter.py:
--------------------------------------------------------------------------------
1 | import glob
2 | 
3 | import cv2
4 | import numpy as np
5 | 
6 | 
7 | class Undistorter:
8 |     def __init__(self):
9 |         try:
10 |             self.objpoints = np.load('data/objpoints.npy')
11 |             self.imgpoints = np.load('data/imgpoints.npy')
12 |             self.shape = tuple(np.load('data/shape.npy'))
13 |         except OSError:
14 |             self.objpoints = None
15 |             self.imgpoints = None
16 |             self.shape = None
17 | 
18 |         if self.objpoints is None or self.imgpoints is None or self.shape is None:
19 |             self.find_corners()
20 | 
21 |         ret, self.mtx, self.dist, self.rvecs, self.tvecs = cv2.calibrateCamera(self.objpoints, self.imgpoints,
22 |                                                                                self.shape,
23 |                                                                                None, None)
24 | 
25 |     def find_corners(self):
26 |         images = glob.glob('camera_cal/calibration*.jpg')
27 |         base_objp = np.zeros((6 * 9, 3), np.float32)
28 |         base_objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)
29 |         self.objpoints = []
30 |         self.imgpoints = []
31 |         self.shape = None
32 | 
33 |         for imname in images:
34 |             img = cv2.imread(imname)
35 |             gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
36 | 
37 |             if self.shape is None:
38 |                 self.shape = gray.shape[::-1]
39 | 
40 |             print('Finding chessboard corners on {}'.format(imname))
41 |             ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
42 | 
43 |             if ret:
44 |                 self.objpoints.append(base_objp)
45 |                 self.imgpoints.append(corners)
46 | 
47 |         np.save('data/objpoints', self.objpoints)
48 |         np.save('data/imgpoints', self.imgpoints)
49 |         np.save('data/shape', self.shape)
50 | 
51 |     def undistort(self, img):
52 |         return cv2.undistort(img, self.mtx, self.dist, None, self.mtx)
53 | 
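54 | 
55 | if __name__ == '__main__':
56 |     # Minimal usage sketch (hypothetical output filename; paths assume the repo
57 |     # layout): calibrate once, then undistort a sample frame for inspection.
58 |     undistorter = Undistorter()
59 |     image = cv2.imread('test_images/test1.jpg')
60 |     cv2.imwrite('output_images/undistorted_test1.jpg', undistorter.undistort(image))
61 | 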
--------------------------------------------------------------------------------
/warper.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 | 
4 | 
5 | class Warper:
6 |     def __init__(self):
7 |         src = np.float32([
8 |             [580, 460],
9 |             [700, 460],
10 |             [1040, 680],
11 |             [260, 680],
12 |         ])
13 | 
14 |         dst = np.float32([
15 |             [260, 0],
16 |             [1040, 0],
17 |             [1040, 720],
18 |             [260, 720],
19 |         ])
20 | 
21 |         self.M = cv2.getPerspectiveTransform(src, dst)
22 |         self.Minv = cv2.getPerspectiveTransform(dst, src)
23 | 
24 |     def warp(self, img):
25 |         return cv2.warpPerspective(
26 |             img,
27 |             self.M,
28 |             (img.shape[1], img.shape[0]),
29 |             flags=cv2.INTER_LINEAR
30 |         )
31 | 
32 |     def unwarp(self, img):
33 |         return cv2.warpPerspective(
34 |             img,
35 |             self.Minv,
36 |             (img.shape[1], img.shape[0]),
37 |             flags=cv2.INTER_LINEAR
38 |         )
--------------------------------------------------------------------------------
/writeup.md:
--------------------------------------------------------------------------------
1 | **Advanced Lane Finding Project**
2 | 
3 | The goals / steps of this project are the following:
4 | 
5 | * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
6 | * Apply a distortion correction to raw images.
7 | * Use color transforms, gradients, etc., to create a thresholded binary image.
8 | * Apply a perspective transform to rectify binary image ("birds-eye view").
9 | * Detect lane pixels and fit to find the lane boundary.
10 | * Determine the curvature of the lane and vehicle position with respect to center.
11 | * Warp the detected lane boundaries back onto the original image.
12 | * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
13 | 
14 | [//]: # (Image References)
15 | 
16 | [image1]: ./output_images/undistorted.jpg "Undistorted"
17 | [image2]: ./output_images/thresholded.jpg "Thresholded"
18 | [image3]: ./output_images/warped.jpg "Warped"
19 | [image4]: ./output_images/polylines.jpg "Polylines"
20 | [image5]: ./output_images/final.jpg "Final"
21 | 
22 | ## [Rubric](https://review.udacity.com/#!/rubrics/571/view) Points
23 | 
24 | #### 1. Camera Calibration and Image Undistortion
25 | 
26 | I used the provided 20 camera calibration images. I iterated over each of them, finding the chessboard corners and their corresponding coordinates on the board using `cv2.findChessboardCorners()` in the `find_corners()` method of the [Undistorter](undistorter.py) class.
27 | 
28 | I stored all of these results in arrays that I passed to `cv2.calibrateCamera()` to calibrate the camera. I stored the resulting matrix and distortion coefficients for later use through the `undistort()` method of the [Undistorter](undistorter.py) class.
29 | 
30 | Example undistorted image:
31 | 
32 | ![alt text][image1]
33 | 
34 | #### 2. Image Filtering
35 | 
36 | I used a combination of color and gradient thresholds to generate a binary image (thresholding steps in the `dir_thresh()`, `mag_thresh()`, and `color_thresh()` methods of the [Thresholder](thresholder.py) class).
37 | 
38 | I first applied color filtering, keeping only the white and yellow-ish pixels of the image. I then kept only the pixels that also meet a sufficient gradient magnitude or direction threshold.
39 | 
40 | Example filtered image:
41 | 
42 | ![alt text][image2]
43 | 
44 | #### 3. Image Warping
45 | 
46 | I then warped the image to obtain a bird's-eye view of the road.
47 | 
48 | To do so, I used the `cv2.getPerspectiveTransform()` method (in `__init__()` of the [Warper](warper.py) class) with these source and destination points:
49 | 
50 | | Source        | Destination   |
51 | |:-------------:|:-------------:|
52 | | 580, 460      | 260, 0        |
53 | | 700, 460      | 1040, 0       |
54 | | 1040, 680     | 1040, 720     |
55 | | 260, 680      | 260, 720      |
56 | 
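57 | Below is a minimal, self-contained sketch (not the pipeline code itself) of how these points yield the forward and inverse transform matrices used by the [Warper](warper.py) class:
58 | 
59 | ```python
60 | import cv2
61 | import numpy as np
62 | 
63 | src = np.float32([[580, 460], [700, 460], [1040, 680], [260, 680]])
64 | dst = np.float32([[260, 0], [1040, 0], [1040, 720], [260, 720]])
65 | 
66 | M = cv2.getPerspectiveTransform(src, dst)     # road -> bird's-eye view
67 | Minv = cv2.getPerspectiveTransform(dst, src)  # bird's-eye view -> road
68 | 
69 | img = cv2.imread('test_images/straight_lines1.jpg')  # any 1280x720 frame
70 | warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR)
71 | ```
72 | 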
73 | ![alt text][image3]
74 | 
75 | #### 4. Lane detection
76 | 
77 | I first computed a histogram of the lower half of the warped image to find the rough horizontal position of each lane line (see the start of `polyfit_sliding()` in the [Polyfitter](polyfitter.py) class).
78 | 
79 | I then ran a sliding window vertically to track the center of each lane line across the height of the image (the window loop in `polyfit_sliding()`).
80 | 
81 | I then used the collected pixel positions to fit a second-degree polynomial to each lane line using `np.polyfit()` (end of `polyfit_sliding()`).
82 | 
83 | ![alt text][image4]
84 | 
85 | #### 5. Measuring lane curvature
86 | 
87 | I used the computed polynomial fits along with the estimated lane width (~3.7 m) to compute the real-world lane curvature.
88 | I also measured the position of the car with respect to the lane by computing the difference between the center of the lane and the center of the image.
89 | 
90 | These computations can be found in the `measure_curvature()` method of the [Polyfitter](polyfitter.py) class.
91 | 
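92 | In essence, the pixel-space fit `x = A*y^2 + B*y + C` is rescaled into meters before applying the radius-of-curvature formula. A condensed sketch (assuming ~30 m of road covers the 720 warped rows and the ~3.7 m lane width covers ~700 columns):
93 | 
94 | ```python
95 | def curvature_m(fit, y_eval_px, ym=30 / 720, xm=3.7 / 700):
96 |     a = fit[0] * xm / ym ** 2  # rescale A into meter space
97 |     b = fit[1] * xm / ym       # rescale B into meter space
98 |     y = y_eval_px * ym         # evaluate at the bottom of the image (y_eval_px = 719)
99 |     return (1 + (2 * a * y + b) ** 2) ** 1.5 / abs(2 * a)
100 | ```
101 | 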
102 | #### 6. Displaying the detected lane
103 | 
104 | I then created a polygon from the area between the two fitted curves and warped the result back onto the road using the inverse transform (swapped source and destination points) from step #3.
105 | 
106 | I finally drew this polygon on the undistorted image.
107 | 
108 | ![alt text][image5]
109 | 
110 | ---
111 | 
112 | ### Pipeline (video)
113 | 
114 | Here's a [link to my video result](./project_video_done.mp4)
115 | 
116 | ---
117 | 
118 | ### Discussion
119 | 
120 | #### 1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?
121 | 
122 | The implementation overall went pretty smoothly.
123 | 
124 | It might fail in extreme lighting conditions, in countries with lane line colors other than white and yellow, or where the lines are not well defined visually (worn out or missing).
125 | 
126 | I could make it more robust by handling more lighting conditions and lane line colors, and also by adding recovery options for when I don't detect any lane line, or when the detected lines differ too much from the previously detected ones.
127 | 
--------------------------------------------------------------------------------
/writeup_template.md:
--------------------------------------------------------------------------------
1 | ## Writeup Template
2 | ### You can use this file as a template for your writeup if you want to submit it as a markdown file, but feel free to use some other method and submit a pdf if you prefer.
3 | 
4 | ---
5 | 
6 | **Advanced Lane Finding Project**
7 | 
8 | The goals / steps of this project are the following:
9 | 
10 | * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
11 | * Apply a distortion correction to raw images.
12 | * Use color transforms, gradients, etc., to create a thresholded binary image.
13 | * Apply a perspective transform to rectify binary image ("birds-eye view").
14 | * Detect lane pixels and fit to find the lane boundary.
15 | * Determine the curvature of the lane and vehicle position with respect to center.
16 | * Warp the detected lane boundaries back onto the original image.
17 | * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
18 | 
19 | [//]: # (Image References)
20 | 
21 | [image1]: ./examples/undistort_output.png "Undistorted"
22 | [image2]: ./test_images/test1.jpg "Road Transformed"
23 | [image3]: ./examples/binary_combo_example.jpg "Binary Example"
24 | [image4]: ./examples/warped_straight_lines.jpg "Warp Example"
25 | [image5]: ./examples/color_fit_lines.jpg "Fit Visual"
26 | [image6]: ./examples/example_output.jpg "Output"
27 | [video1]: ./project_video.mp4 "Video"
28 | 
29 | ## [Rubric](https://review.udacity.com/#!/rubrics/571/view) Points
30 | ### Here I will consider the rubric points individually and describe how I addressed each point in my implementation.
31 | 
32 | ---
33 | ### Writeup / README
34 | 
35 | #### 1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf. [Here](https://github.com/udacity/CarND-Advanced-Lane-Lines/blob/master/writeup_template.md) is a template writeup for this project you can use as a guide and a starting point.
36 | 
37 | You're reading it!
38 | ### Camera Calibration
39 | 
40 | #### 1. Briefly state how you computed the camera matrix and distortion coefficients. Provide an example of a distortion corrected calibration image.
41 | 
42 | The code for this step is contained in the first code cell of the IPython notebook located in "./examples/example.ipynb" (or in lines # through # of the file called `some_file.py`).
43 | 
44 | I start by preparing "object points", which will be the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image. Thus, `objp` is just a replicated array of coordinates, and `objpoints` will be appended with a copy of it every time I successfully detect all chessboard corners in a test image. `imgpoints` will be appended with the (x, y) pixel position of each of the corners in the image plane with each successful chessboard detection.
45 | 
46 | I then used the output `objpoints` and `imgpoints` to compute the camera calibration and distortion coefficients using the `cv2.calibrateCamera()` function. I applied this distortion correction to the test image using the `cv2.undistort()` function and obtained this result:
47 | 
48 | ![alt text][image1]
49 | 
50 | ### Pipeline (single images)
51 | 
52 | #### 1. Provide an example of a distortion-corrected image.
53 | To demonstrate this step, I will describe how I apply the distortion correction to one of the test images like this one:
54 | ![alt text][image2]
55 | #### 2. Describe how (and identify where in your code) you used color transforms, gradients or other methods to create a thresholded binary image. Provide an example of a binary image result.
56 | I used a combination of color and gradient thresholds to generate a binary image (thresholding steps at lines # through # in `another_file.py`). Here's an example of my output for this step. (note: this is not actually from one of the test images)
57 | 
58 | ![alt text][image3]
59 | 
60 | #### 3. Describe how (and identify where in your code) you performed a perspective transform and provide an example of a transformed image.
61 | 
62 | The code for my perspective transform includes a function called `warper()`, which appears in lines 4 through 11 in the file `example.py` (examples/example.py) (or, for example, in the 3rd code cell of the IPython notebook). The `warper()` function takes as inputs an image (`img`), as well as source (`src`) and destination (`dst`) points. I chose to hardcode the source and destination points in the following manner:
63 | 
64 | ```
65 | src = np.float32(
66 |     [[(img_size[0] / 2) - 55, img_size[1] / 2 + 100],
67 |     [((img_size[0] / 6) - 10), img_size[1]],
68 |     [(img_size[0] * 5 / 6) + 60, img_size[1]],
69 |     [(img_size[0] / 2 + 55), img_size[1] / 2 + 100]])
70 | dst = np.float32(
71 |     [[(img_size[0] / 4), 0],
72 |     [(img_size[0] / 4), img_size[1]],
73 |     [(img_size[0] * 3 / 4), img_size[1]],
74 |     [(img_size[0] * 3 / 4), 0]])
75 | 
76 | ```
77 | This resulted in the following source and destination points:
78 | 
79 | | Source        | Destination   |
80 | |:-------------:|:-------------:|
81 | | 585, 460      | 320, 0        |
82 | | 203, 720      | 320, 720      |
83 | | 1127, 720     | 960, 720      |
84 | | 695, 460      | 960, 0        |
85 | 
86 | I verified that my perspective transform was working as expected by drawing the `src` and `dst` points onto a test image and its warped counterpart to verify that the lines appear parallel in the warped image.
87 | 
88 | ![alt text][image4]
89 | 
90 | #### 4. Describe how (and identify where in your code) you identified lane-line pixels and fit their positions with a polynomial?
91 | 
92 | Then I did some other stuff and fit my lane lines with a 2nd order polynomial kinda like this:
93 | 
94 | ![alt text][image5]
95 | 
96 | #### 5. Describe how (and identify where in your code) you calculated the radius of curvature of the lane and the position of the vehicle with respect to center.
97 | 
98 | I did this in lines # through # in my code in `my_other_file.py`
99 | 
100 | #### 6. Provide an example image of your result plotted back down onto the road such that the lane area is identified clearly.
101 | 
102 | I implemented this step in lines # through # in my code in `yet_another_file.py` in the function `map_lane()`. Here is an example of my result on a test image:
103 | 
104 | ![alt text][image6]
105 | 
106 | ---
107 | 
108 | ### Pipeline (video)
109 | 
110 | #### 1. Provide a link to your final video output. Your pipeline should perform reasonably well on the entire project video (wobbly lines are ok but no catastrophic failures that would cause the car to drive off the road!).
111 | 
112 | Here's a [link to my video result](./project_video.mp4)
113 | 
114 | ---
115 | 
116 | ### Discussion
117 | 
118 | #### 1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?
119 | 
120 | Here I'll talk about the approach I took, what techniques I used, what worked and why, where the pipeline might fail and how I might improve it if I were going to pursue this project further.
121 | 
--------------------------------------------------------------------------------