DetectionBasedTracker : OpenCV implementation.

I was browsing through the header files in OpenCV 2.4.2 and found a lot of interesting things in the “opencv2/contrib” directory. The eigenfaces implementation (a Principal Component Analysis method to recognize an object/face) was earlier carried out using the cvEigenDecomposite() and cvCalcEigenObjects() functions. Here is a link explaining how it was done earlier. But with the C++ implementation, “opencv2/contrib/contrib.hpp” defines a FaceRecognizer class that has methods called train() and predict().
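To give a feel for how much simpler the C++ API is, here is a minimal sketch of the train()/predict() workflow. The image file names and integer labels below are placeholders I made up for illustration, not something from this post:

```cpp
// Sketch of the OpenCV 2.4.x cv::FaceRecognizer API (contrib module).
// createEigenFaceRecognizer() replaces the old cvCalcEigenObjects()/
// cvEigenDecomposite() workflow.
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <vector>

using namespace cv;

int main()
{
    std::vector<Mat> images;  // training images: grayscale, all the same size
    std::vector<int> labels;  // one integer label per training image

    // Hypothetical file names, two subjects with labels 0 and 1.
    images.push_back(imread("person0.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
    images.push_back(imread("person1.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);

    Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
    model->train(images, labels);

    // Predict the label of an unseen face image.
    Mat query = imread("unknown.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    int predictedLabel = model->predict(query);

    return predictedLabel >= 0 ? 0 : 1;
}
```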

Another header file that I found was “opencv2/contrib/detection_based_tracker.hpp”. It defines a class called DetectionBasedTracker. The tracking mechanism it implements uses Haar cascades in the background to detect objects. The tracking is much faster than the plain OpenCV Haar implementation, but it is not very robust: the box disappears sometimes, mainly due to fast or irregular movements. Still a decent option for tracking, I suppose.

The first thing to do is to initialize the parameters you are going to use for tracking the object. Most of these parameters are the same as those required for Haar cascades, so they shouldn’t cause many problems.

 DetectionBasedTracker::Parameters param;
 param.maxObjectSize = 400;
 param.maxTrackLifetime = 20;
 param.minDetectionPeriod = 7;
 param.minNeighbors = 3;
 param.minObjectSize = 20;
 param.scaleFactor = 1.1;

Play around with the values and check the results. Don’t forget to include the “opencv2/contrib/detection_based_tracker.hpp” header.

Next, construct the tracker object with a Haar cascade file and the parameter structure declared above. Then call run() to initialize the tracking.

DetectionBasedTracker obj("haarcascade_frontalface_alt.xml", param);
obj.run();

Now for the infinite loop part. After grabbing a frame from the camera, convert it into grayscale and run the tracker on it.
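A minimal sketch of that loop, assuming the tracker object "obj" constructed above and a default camera at index 0:

```cpp
// Per-frame loop: grab, convert to grayscale, feed to the tracker.
VideoCapture cap(0);  // default camera
Mat frame, gray;
for (;;)
{
    cap >> frame;                        // grab a frame
    if (frame.empty())
        break;
    cvtColor(frame, gray, CV_BGR2GRAY);  // the tracker works on grayscale
    obj.process(gray);                   // run detection/tracking on this frame
    // ... fetch the tracked rectangles and draw them here (see below) ...
    if (waitKey(30) >= 0)
        break;
}
```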


The results of the tracker are stored in a vector of cv::Rect. Go ahead and define a vector and pass it as an argument to the getObjects() method.

vector< Rect_<int> > faces;
obj.getObjects(faces);


Now “faces” holds a detected rectangle for each face present in the frame. Go ahead, iterate over it, get the individual rectangles and draw them on the image.

for (size_t i = 0; i < faces.size(); i++)
{
    Rect face_i = faces[i];

    // Draw a rectangle around the detected object
    rectangle(img, face_i, CV_RGB(0, 255, 0), 3);
}


The code can be downloaded from . Just download the code, compile and run. A video of my results is here . As I said earlier, the tracker is not as accurate as plain Haar cascades, but it is certainly a bit faster.


About ranjanritesh

I am currently working as a Software Engineer for Cisco Systems India in Bangalore. I’ve been working for Cisco since August 2013. I am a BITS Goa alumnus; I earned my Bachelor’s in Electronics and Instrumentation there in August 2013. My interests include Image Processing, Systems Programming and hacking on the Linux Kernel. I also like to tinker a bit with Arduino and other microcontrollers.
This entry was posted in Open Source, OpenCV, Tutorial. Bookmark the permalink.

3 Responses to DetectionBasedTracker : OpenCV implementation.

  1. By the way, you can find the documentation for cv::FaceRecognizer at:

    • ranjanritesh says:

      Thanks, I know, and I guess I should also thank you for making it :). Great work. Made my life a lot easier. I didn’t mention it because I am doing a project on online training with cv::FaceRecognizer and thought I’d mention it when I blogged about that.

  2. Pingback: Tracking of detected faces using OpenCV on iOS | BlogoSfera
