While CAMShift adds complexity to Meanshift, the implementation of the preceding program using CAMShift is surprisingly (or not?) similar to the Meanshift example. The main difference is that, after the call to CamShift, the rectangle is drawn with a rotation that follows the rotation of the object being tracked.
Here's the code reimplemented with CAMShift:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

# take the first frame of the video
ret, frame = cap.read()

# set up the initial location of the window
r, h, c, w = 300, 200, 400, 300  # simply hardcoded values
track_window = (c, r, w, h)

# convert the ROI (not the whole frame) to HSV and build its hue histogram
roi = frame[r:r+h, c:c+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((100., 30., 32.)),
                   np.array((180., 120., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ret, frame = cap.read()
    if ret == True:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        ret, track_window = cv2.CamShift(dst, track_window, term_crit)
        pts = cv2.boxPoints(ret)
        pts = np.intp(pts)
        img2 = cv2.polylines(frame, [pts], True, 255, 2)
        cv2.imshow('img2', img2)
        k = cv2.waitKey(60) & 0xff
        if k == 27:
            break
    else:
        break

cv2.destroyAllWindows()
cap.release()
The difference between the CAMShift code and the Meanshift one lies in these four lines:
ret, track_window = cv2.CamShift(dst, track_window, term_crit)
pts = cv2.boxPoints(ret)
pts = np.intp(pts)
img2 = cv2.polylines(frame, [pts], True, 255, 2)
The method signature of CamShift is identical to that of Meanshift, but the return value differs: the first element returned by CamShift is a rotated rectangle, expressed as ((center_x, center_y), (width, height), angle), which captures the orientation of the tracked object.
The boxPoints function finds the four vertices of a rotated rectangle, while the polylines function draws the rectangle's edges on the frame.
By now, you should be familiar with the three approaches we adopted for tracking objects: basic motion detection, Meanshift, and CAMShift.
Let's now explore another technique: the Kalman filter.