Lane Change Detection for Vehicles

Time: 2021-10-16

Author: Hitesh Valecha
Compiled by: VK
Source: Towards Data Science

In this tutorial, we will learn how to use computer vision and image processing to detect whether a car changes lanes on the road.

You have probably heard that OpenCV's Haar cascades can detect faces, eyes, cars, buses, and other objects. This time, let's use this simple detection method to build something cool.

1. Dataset

This tutorial uses a video file of cars on a road as the dataset. An image dataset could also be used to detect vehicles in still images, but since we want to raise a pop-up alarm at the moment a vehicle changes lanes, video input is the more practical choice for capturing this dynamic information.

2. Input

We use OpenCV's Haar cascade to detect the coordinates of the cars; the input is a video file of cars on the road:

import cv2

cascade_src = 'cascade/cars.xml'
video_src = 'dataset/cars.mp4'

cap = cv2.VideoCapture(video_src)
car_cascade = cv2.CascadeClassifier(cascade_src)

The cv2.VideoCapture() method captures the input video. A video typically runs at 25 frames per second (FPS). After capturing the input, we extract frames in a loop, use Haar cascade detection to draw a rectangle around each detected car for a consistent result across frames, and perform the remaining operations on each captured frame.

while True:
    # Get each frame; stop when the video ends
    ret, frame = cap.read()
    if not ret:
        break
    cars = car_cascade.detectMultiScale(frame, 1.1, 1)
    for (x, y, w, h) in cars:
        # Region of interest: a red rectangle around each detected car
        roi = cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

OpenCV uses BGR instead of RGB, so the color (0, 0, 255) draws a red rectangle around the car, not a blue one.

3. Image processing

We work frame by frame, but if the frame resolution is very high, processing slows down. The frames also contain noise, which can be reduced by blurring; here we use a Gaussian blur.

Now let's look at some of the image processing concepts involved.

3.1 HSV color space

Here we convert each frame captured by cv2.VideoCapture() to the HSV color space, highlight only the points where a vehicle is turning, and mask out the rest of the road and the vehicles driving straight. We set lower and upper limits to define a color range in HSV that picks out the points where a car changes lanes, and use that range as a mask on the frame. Here is a code snippet for this step:

import numpy as np

# Use a Gaussian blur to reduce noise in the video frame
frame = cv2.GaussianBlur(frame, (21, 21), 0)

# Convert BGR to HSV
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Define the color range in HSV that isolates the points where the vehicle changes angle
lower_limit = np.array([0, 150, 150])
upper_limit = np.array([10, 255, 255])

# Threshold the HSV image to the limits above
mask = cv2.inRange(hsv, lower_limit, upper_limit)

3.2 Erosion and dilation

Erosion and dilation are two basic morphological operations in image processing. The erosion operator acts as a local minimum over the region covered by the kernel, which is a template or mask; erosion is used to reduce speckle noise in the image. Dilation is the convolution of the image with a kernel acting as a local maximum operator; it adds pixels to the boundaries of objects in the image and is applied to recover areas lost during erosion.

The mask produced in the HSV step is now processed with these basic morphological operations (erosion and dilation). The resulting frame yields the ROI (region of interest) through a bitwise AND between the frame and the mask.

kernel = np.ones((3, 3), np.uint8)
kernel_lg = np.ones((15, 15), np.uint8)

# Erosion reduces noise
mask = cv2.erode(mask, kernel, iterations=1)

# Dilation recovers the lost parts of the region
mask = cv2.dilate(mask, kernel_lg, iterations=1)

# Everything outside the region of interest turns black
result = cv2.bitwise_and(frame, frame, mask=mask)

3.3 Lane detection

The Canny edge detection operator combined with the Hough line transform is used for lane detection:

# Lane detection
def canny(frame):
    # Frames from cv2.VideoCapture are BGR, so convert BGR to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    canny = cv2.Canny(blur, 50, 150)
    return canny
    
def region_of_interest(frame):
    height = frame.shape[0]
    polygons = np.array([
        [(0, height), (500, 0), (800, 0), (1300, 550), (1100, height)]
    ])
    mask = np.zeros_like(frame)
    cv2.fillPoly(mask, polygons, 255)
    masked_image = cv2.bitwise_and(frame, mask)
    return masked_image
    
def display_lines(frame, lines):
    line_image = np.zeros_like(frame)
    if lines is not None:
        for line in lines:
            x1, y1, x2, y2 = line.reshape(4)
            cv2.line(line_image, (x1, y1), (x2, y2), (0, 255, 0), 3)
    return line_image

lane_image = np.copy(frame)
# Use a distinct name so the canny() function is not shadowed on later frames
canny_image = canny(lane_image)
cropped_image = region_of_interest(canny_image)
lines = cv2.HoughLinesP(cropped_image, 2, np.pi / 180, 100, np.array([]), minLineLength=5, maxLineGap=300)
line_image = display_lines(lane_image, lines)
frame = cv2.addWeighted(lane_image, 0.8, line_image, 1, 1)
cv2.imshow('video', frame)

4. Contour

The Canny edge detector and similar algorithms find edge pixels in an image, but they do not tell us how to assemble those points and edges into objects or entities. For that we can use the concept of contours, implemented in OpenCV as cv2.findContours().

Definition: a contour is a list of points that represents a curve in an image.

Contours are represented as sequences in which each entry encodes information about the location of the next point on the curve. We run cv2.findContours() on the ROI to obtain the entities, then use cv2.drawContours() to draw the contour regions. Contours can be points, edges, polygons, and so on, so when drawing them we apply a polygon approximation to compute the lengths of edges and the areas of regions.

The function cv2.drawContours() works by drawing a tree (data structure) starting from the root node and then connecting subsequent points, bounding boxes, and Freeman chain codes.

thresh = mask
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Define the minimum contour area (ignore everything below it)
min_area = 1000
cont_filtered = []

# Filter out all contours below the minimum area
for cont in contours:
    if cv2.contourArea(cont) > min_area:
        cont_filtered.append(cont)

cnt = cont_filtered[0]

# Draw a rotated rectangle around the contour
rect = cv2.minAreaRect(cnt)
box = cv2.boxPoints(rect)
box = np.intp(box)  # np.int0 was removed in NumPy 2.0; np.intp is the equivalent
cv2.drawContours(frame, [box], 0, (0, 0, 255), 2)

# Fit a line through the contour and draw it across the frame
rows, cols = thresh.shape[:2]
[vx, vy, x, y] = cv2.fitLine(cnt, cv2.DIST_L2, 0, 0.01, 0.01)
lefty = int((-x * vy / vx) + y)
righty = int(((cols - x) * vy / vx) + y)
cv2.line(frame, (cols - 1, righty), (0, lefty), (0, 255, 0), 2)

Another important task after finding contours is matching them. Matching contours means comparing two independently computed contours with each other, or comparing one contour with an abstract template.

5. Moments

We can compare two contours by computing their moments. A moment is a gross characteristic of a contour, computed by summing over all of its pixels.

Types of moments

Spatial moments: m00, m10, m01, m20, m11, m02, m30, m21, m12, m03.

Central moments: mu20, mu11, mu02, mu30, mu21, mu12, mu03.

Hu moments: there are seven Hu moments, numbered either (h0-h6) or (h1-h7); both conventions are used.

We use cv2.fitEllipse() to compute the moments and fit an ellipse to the contour points. The angle is derived from the contour and its moments; a lane change requires roughly 45 degrees of rotation, which we use as the threshold for the vehicle's turning angle.

M = cv2.moments(cnt)
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
(x, y), (MA, ma), angle = cv2.fitEllipse(cnt)
print('x= ', cx, '  y= ', cy, ' angle = ', round(rect[2], 2))
# 45 degrees is the turning threshold described above
if round(rect[2], 2) > 45:
    print('Lane change detected!')

Instead of just printing the detection result, we can use Tkinter to create a simple pop-up window that warns about the lane change.

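A minimal sketch of such a pop-up using Tkinter's messagebox follows; the function names, message text, and the use of rect[2] as the angle are assumptions based on the snippets above, not the article's exact code:

```python
import tkinter as tk
from tkinter import messagebox

ANGLE_THRESHOLD = 45  # degrees, the turning threshold described above

def lane_change_message(angle):
    """Build the warning text shown in the pop-up."""
    return f"Lane change detected! Vehicle turned {round(angle, 2)} degrees."

def alert_if_lane_change(angle):
    """Show a Tkinter warning box when the rotation angle exceeds the threshold."""
    if angle > ANGLE_THRESHOLD:
        root = tk.Tk()
        root.withdraw()  # hide the empty main window; show only the dialog
        messagebox.showwarning("Lane change", lane_change_message(angle))
        root.destroy()

# Inside the detection loop this would be called as, e.g.:
# alert_if_lane_change(round(rect[2], 2))
```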

Draw a rectangle on the frame and measure the angle with a green line

6. Summary and future

In this tutorial, we explored a small demonstration of intelligent vehicle navigation built around lane change detection.

Computer vision is developing rapidly. Its applications extend beyond local car navigation to navigation and object detection on Mars, and even to medical uses such as the early detection of cancers and tumors in X-ray images.

The full source code is available on GitHub: https://github.com/Hitesh-Valecha/Car_Opencv


Original link: https://towardsdatascience.com/lane-change-detection-computer-vision-at-next-stage-914973f96f4b
