Is there a way to show the Kinect depth image in full-screen mode? I'm using C# and WPF; the OpenNI C++ sample is able to show the depth image at full size without any stretching occurring.
We are trying to simulate simple Kinect output. I have rendered a triangle mesh in Matlab and now I want to get at the depth buffer of the figure/axis where the shape has been rendered.
I'm using the Emgu wrapper for OpenCV in C#, and I have the following nagging problem. I'm using the Code Laboratories Kinect API, and the code to get an image out of the Kinect looks like this:
What are some of the algorithms involved in detecting user gestures based on skeleton movements? The ones I'm aware of include:
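One family of techniques that often comes up for skeleton-based gestures is template matching with dynamic time warping (DTW) over joint trajectories. The sketch below is only an illustration under assumed inputs (a recorded template and a live trajectory as sequences of 3D hand positions); the types and the threshold are hypothetical, not taken from any particular library.

```cpp
// Minimal DTW distance between two joint trajectories (e.g. right-hand
// positions). A gesture is "detected" when the distance to a recorded
// template falls below a tuned threshold. Input types are assumed/simplified.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Joint3 { float x, y, z; };            // one joint sample (assumed)
using Trajectory = std::vector<Joint3>;      // one joint tracked over time

static float dist(const Joint3& a, const Joint3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Classic O(n*m) DTW with a full cost matrix.
float dtwDistance(const Trajectory& a, const Trajectory& b) {
    const size_t n = a.size(), m = b.size();
    const float INF = 1e30f;
    std::vector<std::vector<float>> D(n + 1, std::vector<float>(m + 1, INF));
    D[0][0] = 0.0f;
    for (size_t i = 1; i <= n; ++i)
        for (size_t j = 1; j <= m; ++j) {
            float cost = dist(a[i - 1], b[j - 1]);
            D[i][j] = cost + std::min({D[i - 1][j], D[i][j - 1], D[i - 1][j - 1]});
        }
    return D[n][m];
}

int main() {
    Trajectory templ = {{0, 0, 2}, {0.1f, 0, 2}, {0.3f, 0, 2}, {0.5f, 0, 2}};  // recorded swipe
    Trajectory live  = {{0, 0, 2}, {0.2f, 0, 2}, {0.4f, 0, 2}, {0.5f, 0, 2}};  // live capture
    float d = dtwDistance(templ, live);
    std::printf("DTW distance: %.3f -> %s\n", d, d < 0.5f ? "gesture" : "no gesture");
    return 0;
}
```

The appeal of DTW for this job is that it tolerates gestures performed at different speeds, since the warping path stretches or compresses the two sequences against each other.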
I've been using Kinect and OpenCV (I am using C++). I can get both the RGB and the depth image. With the RGB image I can "play" as usual, blurring it, using Canny (after converting it to grayscale), and so on.
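For the depth image specifically, a common first step is to scale the 16-bit depth map down to 8 bits so the same filters apply. Here is a minimal sketch, assuming the depth frame has already been captured into a CV_16UC1 cv::Mat (the capture call itself is omitted):

```cpp
// Smooth a Kinect depth frame and run Canny on it, assuming the frame is
// already available as a 16-bit single-channel cv::Mat (CV_16UC1).
#include <opencv2/opencv.hpp>

cv::Mat edgesFromDepth(const cv::Mat& depth16u) {
    // Scale the 16-bit depth values down to 8 bits so the usual
    // image-processing functions (blur, Canny, imshow) can handle them.
    cv::Mat depth8u;
    double minVal = 0.0, maxVal = 0.0;
    cv::minMaxLoc(depth16u, &minVal, &maxVal);
    depth16u.convertTo(depth8u, CV_8UC1, 255.0 / (maxVal > 0 ? maxVal : 1.0));

    // Same pipeline as on the RGB image: blur, then edge detection.
    cv::Mat blurred, edges;
    cv::GaussianBlur(depth8u, blurred, cv::Size(5, 5), 1.5);
    cv::Canny(blurred, edges, 50, 150);
    return edges;
}
```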
What gesture recognition libraries (if any) exist for the Kinect? Right now I'm using OpenNI to record skeleton movements but am not sure how to go from that to triggering discrete actions.
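Independent of any library, a small rule-based detector is often enough to turn recorded joint positions into discrete events. The following is a hedged sketch, assuming the right-hand and torso positions can be polled each frame in millimeters; the class name and thresholds are made up for illustration, and the OpenNI calls themselves are not shown.

```cpp
// Turn a stream of joint positions into a discrete "swipe right" event:
// fire once when the right hand moves far enough to the right of the torso
// within a short time window. Positions are assumed to be in millimeters.
struct Vec3 { float x, y, z; };   // assumed joint sample

class SwipeRightDetector {
public:
    // Call once per frame with the current joint positions and a timestamp.
    // Returns true exactly once per completed swipe.
    bool update(const Vec3& rightHand, const Vec3& torso, double timeSec) {
        float offset = rightHand.x - torso.x;        // hand relative to body
        if (!armed_ && offset < 100.0f) {            // hand near/left of torso: arm
            armed_ = true;
            armedAt_ = timeSec;
        }
        if (armed_ && offset > 400.0f) {             // hand ~40 cm to the right
            armed_ = false;
            if (timeSec - armedAt_ < 1.0) return true;   // fast enough: trigger
        }
        return false;
    }
private:
    bool armed_ = false;
    double armedAt_ = 0.0;
};
```

The same arm/fire pattern generalizes to other one-shot gestures (push, raise hand) by changing which axis and thresholds are checked.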
I am interested in using the Xbox Kinect device on a Windows (and eventually Mac) machine together with OpenNI/NITE.
I started playing with the Kinect and I would like to do skeleton tracking using OpenNI. Since my knowledge of C++ is limited, the easiest option is to use the ofxOpenNI addon for openFrameworks.
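For comparison, this is roughly what such an addon wraps: a polling loop against the raw OpenNI 1.x C++ API. This is only a sketch under the assumption of OpenNI 1.x; error handling and the new-user/calibration callbacks that are required before tracking actually starts are left out for brevity.

```cpp
// Skeleton polling with the raw OpenNI 1.x C++ wrapper.
// Sketch only: status checks and the user/calibration callbacks needed
// before IsTracking() ever returns true are omitted.
#include <XnCppWrapper.h>
#include <cstdio>

int main() {
    xn::Context context;
    context.Init();

    xn::DepthGenerator depth;
    depth.Create(context);

    xn::UserGenerator users;
    users.Create(context);
    users.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);

    context.StartGeneratingAll();
    for (int frame = 0; frame < 300; ++frame) {
        context.WaitAndUpdateAll();

        XnUserID ids[8];
        XnUInt16 count = 8;
        users.GetUsers(ids, count);
        for (XnUInt16 i = 0; i < count; ++i) {
            if (!users.GetSkeletonCap().IsTracking(ids[i])) continue;
            XnSkeletonJointPosition hand;
            users.GetSkeletonCap().GetSkeletonJointPosition(ids[i], XN_SKEL_RIGHT_HAND, hand);
            std::printf("user %u right hand: %.0f %.0f %.0f\n",
                        ids[i], hand.position.X, hand.position.Y, hand.position.Z);
        }
    }
    context.Shutdown();
    return 0;
}
```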
I just want to know whether the identification of objects like humans, and of body-part gestures, is done by the Kinect or by the Xbox 360. It's done on the Xbox.