Introduction to OpenNI framework

Over the past two months, I have been using the Asus Xtion Pro Live with OpenNI. The Asus Xtion Pro Live is essentially another form of the Kinect, but it is completely PC compatible, so you don't need the extra adapter cable that the Microsoft Kinect requires. Both the Kinect and the Xtion Pro follow what is called the PrimeSense Reference Design; the image below explains the reference design pretty well. The design was built by an Israeli company called PrimeSense. Microsoft adopted this design, added a few extra features such as a motorized tilt (and possibly a gyroscope, though I'm not sure about that one), and sold the package as the Kinect for Xbox.

The robotics community went crazy over a device that could build depth maps of the environment at a far lower price than anything available at the time. A lot of hacks, drivers, and middleware were developed to make the Kinect PC compatible and to use the raw data it sends. Eventually, PrimeSense collaborated with Asus to produce a PC-compatible version with USB 2.0 support, aimed specifically at robotics and computer-vision use. PrimeSense also made a programming framework called OpenNI, which works with any device following the PrimeSense Reference Design (including the Kinect and the Xtion Pro).

Assuming you have installed all packages as intended, here is a gist of how to start programming in the OpenNI framework.

The main header file for the C++ framework is XnCppWrapper.h; the C header is XnOpenNI.h. All other language bindings are just wrappers around the C framework, hence the name XnCppWrapper.


#include <XnCppWrapper.h>

using namespace xn;

The entry point to any OpenNI program is a Context object (you can read more about Context on the OpenNI website). The context object can be initialized in one of three ways:

  1. Context.Init() : Plain initialization with default settings.
  2. Context.InitFromXmlFile() : Initialize from an XML file such as the SampleConfig.xml provided with OpenNI. The XML sets the values of many parameters: for example, whether to take the IR or RGB data stream from the Kinect, the FPS, X resolution and Y resolution of any image stream, or whether the image should be mirrored. Initializing from this XML file therefore affects the kind of data OpenNI gets from the Kinect.
  3. Context.OpenFileRecording("files.oni") : Opens and reads an .oni file. An .oni file is a recording, just as a video is a recording of images; you can work from an .oni file when you don't have a Kinect, just as you can work on a video when you don't have a camera. Sample .oni files can be found on OpenNI's website, and a few that I recorded can also be downloaded from my GitHub page, the link to which I will share later.

xn::Context g_Context;
g_Context.Init();
g_Context.OpenFileRecording("skeleton_dep.oni");
/* An .oni file where I recorded a depth stream. Any data stream
   (RGB, IR, depth, audio) can be recorded to an .oni file using
   xn::Recorder and played back later. */

Our next aim is to get production nodes from the initialized context. Production nodes are elements that produce data of a specific type. Regardless of how the context was initialized, we always use the same methods to find production nodes. That is one of the powers of OpenNI: it removes the dependency between application, hardware and middleware. Whether you use an Asus device, a Kinect, an .oni file or anything else, the way you produce data (the production node) remains the same.

The internal algorithms OpenNI uses to produce user data (joint positions, centre of mass, etc.) depend on the depth stream; hence, to generate user data, you also need to generate depth. Similarly, an ImageGenerator produces RGB images and an IRGenerator produces IR images. You then ask the context to start generating data. In the main loop, a Wait...Update...() call blocks until new data has arrived from the Kinect before the loop body runs.


xn::DepthGenerator g_DepthGenerator; // Produces the depth stream
xn::UserGenerator  g_UserGenerator;  // Produces user data from the depth stream

g_Context.FindExistingNode(XN_NODE_TYPE_DEPTH, g_DepthGenerator);
g_Context.FindExistingNode(XN_NODE_TYPE_USER, g_UserGenerator);

xn::DepthMetaData g_depthMD;
g_DepthGenerator.GetMetaData(g_depthMD); // Gets the data out of the depth stream

g_Context.StartGeneratingAll();

while (!xnOSWasKeyboardHit())
{
    g_Context.WaitOneUpdateAll(g_UserGenerator); // Wait for new user data

    // Continue processing your data here.
}

Once you start data generation, there are any number of ways to work with the data. I decided to take the depth images, which are grayscale by default, and use OpenCV in post-processing to generate a coloured "depth histogram". I also used the joint locations to draw lines between joints and produce a stick figure. Tracking can be started and stopped by pressing the space bar.

The final output looks something like this (screenshot: the coloured depth histogram with the stick figure overlaid):

The complete Code::Blocks project, along with a few .oni files, can be downloaded from my GitHub repo at https://github.com/ritsz/OpenNI_C–B_Template

A gist of the main.cpp  file can be found here: git://gist.github.com/3848292.git


