Self-Driving Cars: Finding Lane Lines on the Road

One of the primary capabilities a self-driving car needs for successful movement on the road is the ability to detect lane lines. This task can be implemented using various computer vision techniques. In this project, a computer vision based pipeline is developed in Python for finding lane lines on the road. The pipeline can process images (or video) taken from the car’s front camera to find the lane lines.

Lane Detection Pipeline


As shown in the figure, the pipeline consists of 6 steps:

  1. Convert to Grayscale: This converts the image to a single channel for later use by the Canny Transform (Step 3), which finds the gradient of the pixel intensity.
  2. Apply Gaussian Smoothing (or Gaussian Blur): This step filters out noise in the image.
  3. Apply Canny Transform: This finds the edges in the image. An edge is formed by pixels where the gradient of the image intensity changes sharply.
  4. Find the Region of Interest: This step separates out the region of the image where road lanes can occur.
  5. Apply Hough Transform: This step extracts the lines in the image that probably form the lanes.
  6. Merge with the Original Image: This step merges the output of the Hough transform with the original image.

The pipeline is implemented in Python:

import cv2
import numpy as np
import matplotlib.pyplot as plt

def draw_lines(img, lines, color=(255, 0, 0), thickness=5):
    """Draws the detected Hough line segments on img (minimal helper)."""
    if lines is None:
        return
    for line in lines:
        for x1, y1, x2, y2 in line:
            cv2.line(img, (x1, y1), (x2, y2), color, thickness)

class image_processor(object):
    def __init__(self, img):
        self.img = img

    def grayscale(self):
        """Converts the RGB image (self.img) to a single channel image (self.gray)."""
        self.gray = cv2.cvtColor(self.img, cv2.COLOR_RGB2GRAY)

    def gaussian_blur(self):
        """Reduces the noise in the image (self.gray)."""
        self.blur = cv2.GaussianBlur(self.gray, (self.kernel_size, self.kernel_size), 0)

    def canny_transform(self):
        """Separates out the high gradient pixels of the image (self.blur)."""
        self.canny = cv2.Canny(self.blur, self.low_threshold, self.high_threshold)

    def ROI_mask(self):
        """Applies an image mask (to self.canny).

        Only keeps the region of the image defined by the polygon
        formed from `vertices`. The rest of the image is set to black.
        """
        ysize = self.canny.shape[0]
        xsize = self.canny.shape[1]
        vertices = [np.array([[0, ysize], [xsize/2, (ysize/2) + 10], [xsize, ysize]], np.int32)]
        # Defining the image mask
        mask = np.zeros_like(self.canny)
        # Defining the color to fill the mask depending on the number of channels in the image
        if len(self.canny.shape) > 2:
            channel_count = self.canny.shape[2]
            ignore_mask_color = (255,) * channel_count
        else:
            ignore_mask_color = 255
        # Filling the pixels inside the polygon (formed by "vertices") with the defined color
        cv2.fillPoly(mask, vertices, ignore_mask_color)
        self.masked = cv2.bitwise_and(self.canny, mask)

    def hough_transform(self):
        """Finds the probable lane lines in the image (self.masked)."""
        lines = cv2.HoughLinesP(self.masked, self.rho, self.theta, self.threshold, np.array([]),
                                minLineLength=self.min_line_len, maxLineGap=self.max_line_gap)
        self.hough = np.zeros((self.masked.shape[0], self.masked.shape[1], 3), dtype=np.uint8)
        draw_lines(self.hough, lines)

    def weighted_img(self):
        """Merges the initial_img (self.img) with hough_img (self.hough) using the formula:
        initial_img * α + hough_img * β + λ
        """
        self.final = cv2.addWeighted(self.img, self.α, self.hough, self.β, self.λ)

    def find_lanes(self, kernel_size=5, low_threshold=50, high_threshold=150, rho=1,
                   theta=np.pi/180, threshold=15, min_line_len=150, max_line_gap=75,
                   α=0.8, β=1.0, λ=0.0):
        """Runs the full pipeline.

        kernel_size: Gaussian kernel parameter
        low_threshold, high_threshold: Canny Transform parameters
        rho, theta, threshold, min_line_len, max_line_gap: Hough Transform parameters
        α, β, λ: merge function (cv2.addWeighted) parameters
        """
        self.kernel_size = kernel_size
        self.low_threshold = low_threshold
        self.high_threshold = high_threshold
        self.rho = rho  # distance resolution in pixels of the Hough grid
        self.theta = theta  # angular resolution in radians of the Hough grid
        self.threshold = threshold  # minimum number of votes (intersections in a Hough grid cell)
        self.min_line_len = min_line_len  # minimum number of pixels making up a line
        self.max_line_gap = max_line_gap  # maximum gap in pixels between connectable line segments
        self.α = α
        self.β = β
        self.λ = λ
        # Run the pipeline stages in order
        self.grayscale()
        self.gaussian_blur()
        self.canny_transform()
        self.ROI_mask()
        self.hough_transform()
        self.weighted_img()

    def save_images(self, path, name):
        plt.imsave(path + '/' + '1_original_' + name, self.img)
        plt.imsave(path + '/' + '2_gray_' + name, self.gray, cmap='gray')
        plt.imsave(path + '/' + '3_blur_' + name, self.blur, cmap='gray')
        plt.imsave(path + '/' + '4_canny_' + name, self.canny, cmap='gray')
        plt.imsave(path + '/' + '5_masked_' + name, self.masked, cmap='gray')
        plt.imsave(path + '/' + '6_hough_' + name, self.hough)
        plt.imsave(path + '/' + '7_final_' + name, self.final)

The Python implementation is tested on 6 test images and 2 test videos. The results can be seen below.

Test Images: Results

Each slideshow shows the effect of the successive pipeline stages on one of the test images.

[Slideshows: pipeline stage outputs for each of the six test images]

Test Videos: Results

Clearly, the pipeline is able to track the lane lines on the road.


Check my Github page for complete implementation.
