Gesture recognition is a fundamental element when developing Kinect-based applications (or any other Natural User Interface). Gestures are used for navigation, interaction, or data input. The most common examples include waving, swiping, zooming, and joining hands. Unfortunately, the current Kinect for Windows SDK does not include a gesture-detection mechanism out of the box. So, you thought that recognizing gestures using Kinect is a pain in the ass? Not anymore. Today I’ll show you how you can implement your own gestures using some really easy techniques. There is no need to be a Math guru or an Artificial Intelligence Yoda to build a simple gesture-detection mechanism.
Prerequisites
- Kinect for Windows or Kinect for XBOX sensor
- Microsoft Kinect SDK (or OpenNI SDK with minor modifications)
What is a gesture?
Before implementing something, it is always good to define it. Kinect provides you with the position (X, Y, and Z) of the user’s joints 30 times (or frames) per second. If specific joints move to specific relative positions for a given amount of time, then you have a gesture. So, in terms of Kinect, a gesture is the relative position of some joints for a given number of frames. Let’s take the wave gesture as an example. People wave by raising their left or right hand and moving it from side to side. Throughout the gesture, the hand usually remains above the elbow and moves periodically from left to right. Here is a graphical representation of the movement:
Now that you’ve seen and understood what a gesture is, let’s try to specify its underlying algorithm.
Gesture segments
In the wave gesture, the hand remains above the elbow and moves periodically from left to right. Each position (left / right) is a discrete part of the gesture. Formally, these parts are called segments.
So, the first segment would contain the conditions “hand above elbow” and “hand right of elbow”:
- Hand.Position.Y > Elbow.Position.Y AND
- Hand.Position.X > Elbow.Position.X
Similarly, the second segment would contain the conditions “hand above elbow” and “hand left of elbow”:
- Hand.Position.Y > Elbow.Position.Y AND
- Hand.Position.X < Elbow.Position.X
That’s it. If the two segments above repeat consecutively at least three or four times, then the user is waving! In .NET, the source code is really simple: just two classes, one per segment. Each segment class implements an Update method, which determines whether the specified conditions are met for a given skeleton body. It returns Succeeded if every condition of the segment is met, or Failed otherwise.
// WaveGestureSegments.cs
using Microsoft.Kinect;

namespace KinectSimpleGesture
{
    public interface IGestureSegment
    {
        GesturePartResult Update(Skeleton skeleton);
    }

    public class WaveSegment1 : IGestureSegment
    {
        public GesturePartResult Update(Skeleton skeleton)
        {
            // Hand above elbow
            if (skeleton.Joints[JointType.HandRight].Position.Y >
                skeleton.Joints[JointType.ElbowRight].Position.Y)
            {
                // Hand right of elbow
                if (skeleton.Joints[JointType.HandRight].Position.X >
                    skeleton.Joints[JointType.ElbowRight].Position.X)
                {
                    return GesturePartResult.Succeeded;
                }
            }

            // Hand dropped
            return GesturePartResult.Failed;
        }
    }

    public class WaveSegment2 : IGestureSegment
    {
        public GesturePartResult Update(Skeleton skeleton)
        {
            // Hand above elbow
            if (skeleton.Joints[JointType.HandRight].Position.Y >
                skeleton.Joints[JointType.ElbowRight].Position.Y)
            {
                // Hand left of elbow
                if (skeleton.Joints[JointType.HandRight].Position.X <
                    skeleton.Joints[JointType.ElbowRight].Position.X)
                {
                    return GesturePartResult.Succeeded;
                }
            }

            // Hand dropped
            return GesturePartResult.Failed;
        }
    }
}
The GesturePartResult is an enum (we could even use boolean values):
// GesturePartResult.cs
using System;

namespace KinectSimpleGesture
{
    public enum GesturePartResult
    {
        Failed,
        Succeeded
    }
}
Note: For a more advanced example, we could use a third GesturePartResult value (let’s say “Undetermined”), which would indicate that we are not yet sure about the current gesture result.
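As an illustration, here is a sketch of what that variant could look like. This is an assumption on my part (the rest of this tutorial sticks to the two-valued enum): the segment returns Undetermined while the hand is still raised but not yet in the target position, so an in-progress gesture is not reset between segments. The WaveGesture class shown later already handles this case, since any result that is neither Succeeded nor Failed simply increases the frame count.

// Hypothetical three-valued enum (a sketch, not the code used below).
public enum GesturePartResult
{
    Failed,       // the gesture cannot continue (e.g. hand dropped)
    Undetermined, // not sure yet; keep waiting for more frames
    Succeeded     // all conditions of the segment are met
}

public class WaveSegment1Undetermined : IGestureSegment
{
    public GesturePartResult Update(Skeleton skeleton)
    {
        // Hand above elbow
        if (skeleton.Joints[JointType.HandRight].Position.Y >
            skeleton.Joints[JointType.ElbowRight].Position.Y)
        {
            // Hand right of elbow
            if (skeleton.Joints[JointType.HandRight].Position.X >
                skeleton.Joints[JointType.ElbowRight].Position.X)
            {
                return GesturePartResult.Succeeded;
            }

            // Hand still raised but not in position yet, so don't reset.
            return GesturePartResult.Undetermined;
        }

        // Hand dropped
        return GesturePartResult.Failed;
    }
}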
Updating the gesture
We now need a way to update and check the gesture every time the sensor provides us with new skeleton/body data. This kind of check will be done in a separate class and will be called 30 times per second, or at least as many times as our Kinect sensor allows. When updating a gesture, we check each segment and specify whether the movement is complete or whether we need to continue asking for data.
Window size
The number of frames during which we keep asking for data is called the window size, and you determine it by experimenting with your code. For simple gestures that last approximately one second, a window size of 30 to 50 frames will do the job just fine. For the wave gesture, I chose 50.
The gesture class
Having decided on the window size parameter, we can now build the WaveGesture class. Notice the process:
- In the constructor, we create the gesture segments and specify their order in the _segments array. You can use as many occurrences of each segment as you like!
- In the Update method, we keep track of the frame index and check every segment for success or failure.
- If every segment succeeds, we raise the GestureRecognized event and reset the gesture.
- If a segment fails or the window size has been reached, we reset the gesture and start over.
Here is the final class for our wave gesture:
// WaveGesture.cs
using Microsoft.Kinect;
using System;

namespace KinectSimpleGesture
{
    public class WaveGesture
    {
        readonly int WINDOW_SIZE = 50;

        IGestureSegment[] _segments;

        int _currentSegment = 0;
        int _frameCount = 0;

        public event EventHandler GestureRecognized;

        public WaveGesture()
        {
            WaveSegment1 waveSegment1 = new WaveSegment1();
            WaveSegment2 waveSegment2 = new WaveSegment2();

            _segments = new IGestureSegment[]
            {
                waveSegment1,
                waveSegment2,
                waveSegment1,
                waveSegment2,
                waveSegment1,
                waveSegment2
            };
        }

        public void Update(Skeleton skeleton)
        {
            GesturePartResult result = _segments[_currentSegment].Update(skeleton);

            if (result == GesturePartResult.Succeeded)
            {
                if (_currentSegment + 1 < _segments.Length)
                {
                    _currentSegment++;
                    _frameCount = 0;
                }
                else
                {
                    if (GestureRecognized != null)
                    {
                        GestureRecognized(this, new EventArgs());
                    }

                    // Reset even if nobody has subscribed to the event.
                    Reset();
                }
            }
            else if (result == GesturePartResult.Failed || _frameCount == WINDOW_SIZE)
            {
                Reset();
            }
            else
            {
                _frameCount++;
            }
        }

        public void Reset()
        {
            _currentSegment = 0;
            _frameCount = 0;
        }
    }
}
Using the code
Using the code we created is straightforward. Create an instance of the WaveGesture class inside your program and subscribe to the GestureRecognized event. Remember to call the Update method whenever you have a new Skeleton frame. Here is a complete Console app example:
// Program.cs
using Microsoft.Kinect;
using System;
using System.Linq;

namespace KinectSimpleGesture
{
    class Program
    {
        static WaveGesture _gesture = new WaveGesture();

        static void Main(string[] args)
        {
            var sensor = KinectSensor.KinectSensors.Where(
                s => s.Status == KinectStatus.Connected).FirstOrDefault();

            if (sensor != null)
            {
                sensor.SkeletonStream.Enable();
                sensor.SkeletonFrameReady += Sensor_SkeletonFrameReady;

                _gesture.GestureRecognized += Gesture_GestureRecognized;

                sensor.Start();
            }

            Console.ReadKey();
        }

        static void Sensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (var frame = e.OpenSkeletonFrame())
            {
                if (frame != null)
                {
                    Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                    frame.CopySkeletonDataTo(skeletons);

                    if (skeletons.Length > 0)
                    {
                        var user = skeletons.Where(
                            u => u.TrackingState == SkeletonTrackingState.Tracked).FirstOrDefault();

                        if (user != null)
                        {
                            _gesture.Update(user);
                        }
                    }
                }
            }
        }

        static void Gesture_GestureRecognized(object sender, EventArgs e)
        {
            Console.WriteLine("You just waved!");
        }
    }
}
That’s it! Now stand in front of your Kinect sensor and wave using your right hand!
Something to note
Obviously, you cannot expect your users to do everything right. One might wave but not complete the entire movement. Another might perform the movement too quickly or too slowly. When developing a business app targeting the Kinect platform, you have to be aware of all these issues and add the corresponding conditions to your code. In a common scenario, you’ll need to specify whether the user is “almost” performing a gesture. That is, you’ll need to let a number of frames pass before determining the final gesture result. This is why I mentioned the Undetermined state before.
Vitruvius
So, if you want more production-ready gestures right now, consider downloading Vitruvius. Vitruvius is a free and open-source library I built, which provides many utilities for your Kinect applications. It currently supports 9 gestures, with more to come. The code is more generic, and you can easily build your own extensions on top of it. Give it a try, enjoy, and even contribute yourself!
Thank you for this post. I used this approach of gesture segments with great success. Whenever I implement some unique gestures I’ll make sure to contribute them to your projects.
Hi Glenn. Many thanks for your comment. I would be glad to add more contributors to my Kinect projects. Whenever you have something ready, feel free to email me and I will add you as well.
Hi,
If I buy an Xbox One + Kinect 2 bundle, can the Kinect inside the box be used with SDK 2? Is there a difference between the developer version and that one?
Hi Desnos. Currently, the XBOX Kinect is not compatible with the SDK. I don’t think it can connect to a PC, either. However, the code I provided can be used with a Kinect for XBOX version 1. I’ll keep you up to date in case something changes in the future.
This is a great tutorial for beginners, and I have been able to learn the process of gesture recognition. You mention in your post an undefined state, and this interests me, because there are a number of times when the gesture is not recognized and it gets stuck until the system resets. If the hand is not in the correct position once the reset occurs, then you have to wait again, which makes for a poor user experience.
Could you shed some light on how to apply an undefined state, as this will turn my application from good to great!
Thanks
Hi Tom. Thank you for your comment! You can read the complete source code about the “undefined” state here: https://github.com/LightBuzz/Vitruvius/tree/master/Kinect%20v2/NET45/LightBuzz.Vitruvius/Gestures
Simply open any of the gesture classes (like this one: https://github.com/LightBuzz/Vitruvius/blob/master/Kinect%20v2/NET45/LightBuzz.Vitruvius/Gestures/SwipeLeft/SwipeLeftSegments.cs) and you’ll see the GesturePartResult.Undetermined enum value.
As you said, this is extremely important for the end-user experience. However, it would be too much for an introductory tutorial. Let me know if you need any further help!
Hey man, please guide me on how to detect a punch gesture through Kinect, as my final year project is an action-based game and I’m facing problems regarding gesture detection. Any help would be appreciated. Thanks.
Hi Muhammad. That should be fairly easy. You have to check the following things (sketched in code after the list):
* Hands near the chest (SpineShoulder joint)
* Left hand Z value < Right hand Z value (or vice-versa)
* Set a threshold of 15 frames
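As a rough illustration of those checks, here is a sketch I am adding (not code from the original thread). It uses Kinect v2 joint names, since SpineShoulder is a v2 joint, and the distance thresholds are assumptions:

// Hypothetical punch segment following the steps above (Kinect v2 API).
public class PunchSegment
{
    public GesturePartResult Update(Body body)
    {
        var hand = body.Joints[JointType.HandRight].Position;
        var chest = body.Joints[JointType.SpineShoulder].Position;

        // Hand roughly at chest height (threshold is an assumption).
        bool nearChestHeight = Math.Abs(hand.Y - chest.Y) < 0.3f;

        // Hand extended toward the sensor (noticeably smaller Z than the chest).
        bool extendedForward = hand.Z < chest.Z - 0.35f;

        return (nearChestHeight && extendedForward)
            ? GesturePartResult.Succeeded
            : GesturePartResult.Failed;
    }
}

// Combine this with a frame counter (e.g. the 15-frame threshold),
// as in the WaveGesture class of the article.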
Hi Vangos, how can I implement the grab gesture with Kinect? I am trying to write the code from scratch. Which joint parameters would I need?
Hi Sunil. If you want to detect a closed fist, the SDK has you covered. Simply use the following:
bool isHandClosed = body.HandLeftState == HandState.Closed;
Hi, I am currently building a PowerPoint slideshow changer using gestures. How do I implement a swipe-left / swipe-right gesture for the user to go on to another slide, with a pause in between slide changes? Much appreciated if you can guide me along. Thanks very much!
Hey there. You can check this GitHub repository I had created using Kinect SDK version 1.8. It will help you get started!
Hi, can you help me? How do I use the Kinect version 1 for finger tracking? Thanks, hope you can help us.
Hi Bryan. You can use Candescent NUI for finger tracking.
Hi sir,
I am a beginner with Kinect and just downloaded the source code, but I am not able to run it.
There are some errors, e.g. “The type or namespace name Skeleton could not be found”.
Can you help me with this?
Hi Jen. I guess you are using Kinect SDK version 2. This code was written for Kinect SDK version 1.8. To run it with Kinect v2, you need to import the Microsoft.Kinect reference and replace “Skeleton” with “Body”. You can also check the Vitruvius gestures for a more advanced sample.
hi Vangos,
I am trying to build an app which will capture the image of the user. I want to implement hand gestures and the color stream at the same time; how can I do this? Since I found your site, Kinect has become very interesting. I love the way you explain things; it’s awesome for freshers like me. So, can you show me a way to do this? I am using Kinect v2, and I am waiting for your book too. Please help me.
Hi vinayak. Thank you very much for your comment. I suggest you use Vitruvius to capture the image and recognize the gesture. There is a sample that implements exactly what you need.
Thanks, Vangos, I will try it. I am also trying to write an article on NUI, so can I reference your Vitruvius? It would be amazing.
Thanks a lot for your feedback.
I’m glad I could help you 🙂
hi vangos ,
I wanted to ask if this code would work with Kinect v2, and how I could apply it to other gestures for different body parts?
thanks
Hi Huda. This code applies to Kinect v2, too, with minor modifications (joint names). There is a complete implementation of more gestures in my project Vitruvius. Hope you’ll find it useful. It’s free and open-source.
Hello Vangos Pterneas,
For my project, my Prof asked me to assume he is a surgeon. He has his PC (with pictures or files on it) and a Kinect sensor in
his office, and he wants to control the pictures of patients through gestures (left, right, click, or zoom).
My problem is I don’t really know where to start or how to go about it, and I don’t know what I need. I can program with Java and C#.
Maybe I have to install Visual Studio first. If you could just tell me everything I need and how to start, I would be so
grateful. Thanks.
Hi Ekwalla. First of all, install Visual Studio 2013 Express. It’s free. Then, have a look at this sample by Microsoft to get started.
Cheers,
Vangos
I saw your articles on CodeProject and your blog; they are very useful to me. Thank you very much. I implemented a jump action
using System.Reactive.Linq, but I find it very hard to jump and feel physically tired. Can you suggest simpler code for detecting the jump action
easily? Tools: .NET, C# (WPF). Thank you very much.
I am doing a project in school using C# 2010 Express and SDK 1.8. I tried your code, but it is not working. I have to create code that will send an F3 keystroke when my left hand and right hand are on the right side. Is there any way I can send you my code so you can help me out? I tried everything 🙁 Thanks!
Hi Miguel. The code works using Visual Studio 2013. I have not tested it on 2010.
To achieve what you asked for, you simply need to compare the X position of the hand joints relative to the chest or spine joint. Something like the sketch below could work.
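The following is my illustration, not code from the reply. The 0.2 m margin is an assumption, and the keystroke line shows one common option (SendKeys from Windows Forms):

// Hypothetical check (Kinect SDK 1.8 joint names), given a tracked
// Skeleton 'skeleton': are both hands on the right side of the body?
float spineX = skeleton.Joints[JointType.Spine].Position.X;

bool bothHandsRight =
    skeleton.Joints[JointType.HandLeft].Position.X > spineX + 0.2f &&
    skeleton.Joints[JointType.HandRight].Position.X > spineX + 0.2f;

if (bothHandsRight)
{
    // Send the F3 key (requires a reference to System.Windows.Forms).
    System.Windows.Forms.SendKeys.SendWait("{F3}");
}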
Hello Vangos,
I am very new to Kinect, as well as to C#; I have basic concepts of OOP with C# and WPF. I downloaded all four of your projects and then tried to add a few new gestures: sitting, standing, and walking. I created classes like StandingSegment, SeatingSegment, etc. I want to display “Standing” on the UI when the user is just standing, or “Seated” when the user is just sitting. But my logic is not working. Would you be able to help me out? Appreciate your time.
Hi Suman. You should probably check the angles formed between the joints. For example, you can check the angle between the hip, knee, and ankle to understand whether the user is standing or sitting. You can use the code from my project Vitruvius for easy angle calculations. A rough sketch of the idea follows.
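For illustration, here is a sketch of that angle check that I am adding (not from the original reply). It uses Kinect SDK 1.8 joint names, and the degree thresholds are assumptions:

// Hypothetical knee angle: the angle at the knee between the knee-to-hip and
// knee-to-ankle vectors. Roughly 180° suggests standing; around 90°, sitting.
static double KneeAngle(Skeleton skeleton)
{
    var hip = skeleton.Joints[JointType.HipRight].Position;
    var knee = skeleton.Joints[JointType.KneeRight].Position;
    var ankle = skeleton.Joints[JointType.AnkleRight].Position;

    // Vectors from the knee to the hip and from the knee to the ankle.
    double ux = hip.X - knee.X, uy = hip.Y - knee.Y, uz = hip.Z - knee.Z;
    double vx = ankle.X - knee.X, vy = ankle.Y - knee.Y, vz = ankle.Z - knee.Z;

    double dot = ux * vx + uy * vy + uz * vz;
    double lenU = Math.Sqrt(ux * ux + uy * uy + uz * uz);
    double lenV = Math.Sqrt(vx * vx + vy * vy + vz * vz);

    return Math.Acos(dot / (lenU * lenV)) * (180.0 / Math.PI);
}

// Hypothetical thresholds:
// bool isStanding = KneeAngle(skeleton) > 160;
// bool isSeated   = KneeAngle(skeleton) < 120;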
Hi Vangos, how are you? Hope you are doing well. I wanted to ask: is there any sample experiment on displaying images on our face with Kinect v2? If you have one, or a similar one, that would be great. And when can I expect the book online from you? I will be waiting for that. Thanks a lot, and let me know how to display images on our face. Thanks,
vinayak
Hi Vinayak. You can check this blog post about Kinect Face Basics. I’m publishing another post about Kinect Face HD, too. It should be available in a few hours 🙂
Thanks very much, Vangos, for your quick reply. You directed me to a link which is about applications on Windows 8.1, but I have Windows 7. Is that going to be a problem? Also, will I have problems if I use Visual Studio 2010 instead of Visual Studio 2013 Express? Thanks once more for your reply.
Ekwalla
Hi Ekwalla. Kinect SDK is only compatible with Windows 8, 8.1, and 10. You can’t use it on Windows 7, since there are no Windows 7 drivers available.
Hello Vangos, thank you for your last message. I just wish to have a clarification: Microsoft says the SDK will work on Windows 7, but you disagree. Please clarify this for me, because I wish to try out the tutorial link you sent me, and I have Windows 7.
https://www.microsoft.com/en-us/download/details.aspx?id=40278
Hi Ekwalla.
You can use Vitruvius, which supports Kinect 1 and Kinect 2. This will solve your problems.
Hi!
If I have code for Kinect v1, how hard is it to port to Kinect v2? Does it involve a lot of changes to the code?
Thanks
Hi Cacho. My company has built Vitruvius. Vitruvius provides you with some Kinect utilities, including gestures. You can check gesture support for both v1 and v2 in the following links:
Gestures for Kinect 1
Gestures for Kinect 2.
If you find Vitruvius helpful, please give it a Star on GitHub 🙂
Hi Vangos. I came across your article and had a question for you; I hope you don’t mind. I need to have sign language details recognized for my research project: specifically finger speed and the amount of movement by the fingers. Is it possible to do this using Kinect? And if so, can I have this done without having to write a program? (I am not a programmer.)
Appreciate any advice you may be able to provide. Thanks.
Hi Ted,
Sign language recognition would require a lot of programming, especially if you are planning to offer a complete product. You can start by using existing frameworks like Candescent NUI.
Thanks Vangos.
Regards,
Ted
Hello Vangos, I have some code examples that were written with the Kinect for Windows SDK v1.6. Can I run this code in Visual Studio with the Kinect for Windows SDK v1.7 installed? Thanks a lot in advance for your kind response.
You should be able to do so. I do not remember the changelog between those two versions.
Thank you very much Vangos,
I am currently working on a gesture recognition project with the Kinect v2, and your post helped me very much.
Even though I use Java with the J4K library, your code and your additional explanations got me started in less than 10 minutes.
Thank you Simon! I’m glad I could help you 🙂
Hello Vangos,
I am totally new to Kinect programming. I have basic concepts of OOP with C# and WPF. I have been trying my luck for almost a month now to find some in-depth tutorials regarding SDK v2 which would help me get started, as I am unable to understand these blog posts. Can you please guide me through the steps I need to get started? How should I approach Kinect programming? I have all the theoretical knowledge regarding Kinect v2 which is required. It will be of great help. Thanks in advance.
Hi Faisal. I believe you should start by reviewing the original Microsoft step-by-step tutorials. Hope that will help you get started!
Hi Vangos,
I’m totally new to Kinect programming, and I was trying to use your solution to detect if someone is eating/drinking or, generally, moving his hand to his mouth. I use 3 segments.
1. Segment (right hand side): Y-HandPosition > Y-ElbowPosition and X-HandPosition < X-ElbowPosition
2. Segment: Y-HeadPosition – Y-HandPosition < 0.1
Segments 1 and 2 work very fine, but I can’t find a solution to determine whether the gesture is finished (Segment 3). Do you have a suggestion how to solve the problem?
Thanks!
Hi Daniel. You should check a pre-determined number of frames, e.g. 12 (usually called the “window size”).
Thanks!
Hi Vangos,
Any idea how we can create a swipe gesture using the index finger (say, the right-hand index finger)?
Hi Sanket. You could use the source code of the swipe gesture and use the hand and thumb instead of the hand and elbow. Here is a rough sketch of the idea.
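The sketch below is my illustration, not code from the reply. It tracks the hand tip (Kinect v2’s JointType.HandTipRight) relative to the shoulder, and all thresholds are assumptions:

// Hypothetical finger-swipe segments (Kinect v2 joint names).
public class FingerSwipeSegment1
{
    public GesturePartResult Update(Body body)
    {
        var tip = body.Joints[JointType.HandTipRight].Position;
        var shoulder = body.Joints[JointType.ShoulderRight].Position;

        // Hand tip raised and starting left of the shoulder.
        if (tip.Y > shoulder.Y - 0.1f && tip.X < shoulder.X)
        {
            return GesturePartResult.Succeeded;
        }

        return GesturePartResult.Failed;
    }
}

public class FingerSwipeSegment2
{
    public GesturePartResult Update(Body body)
    {
        var tip = body.Joints[JointType.HandTipRight].Position;
        var shoulder = body.Joints[JointType.ShoulderRight].Position;

        // Hand tip raised and now clearly right of the shoulder.
        if (tip.Y > shoulder.Y - 0.1f && tip.X > shoulder.X + 0.3f)
        {
            return GesturePartResult.Succeeded;
        }

        return GesturePartResult.Failed;
    }
}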
Nice tutorial, I have to recognise 20 gestures for my system and I have a couple of questions:
1) Will I have to develop code for each gesture separately?
2) VGB uses machine learning, shouldn’t gesture recognition be done through machine learning?
Hi Mughees. Here are my comments to your questions:
1) Yes, as long as the segments represent different positions.
2) You can use whatever is more suitable. Algorithmic gestures (described in this tutorial) rely on the relative positions of the joints. Using VGB requires training. I think VGB is overkill for simple mathematical gestures; it’s powerful for sequences of movements.
I plan on recognizing hand gestures using your finger tracking code. Correct me if I am wrong, but I believe I won’t be able to use VGB for training, because only the joints supported by the Kinect SDK can be given as input to VGB.
You are right. VGB supports only the Kinect SDK classes.
Could you guide me on how the process would go?
1) How would I store the points and use them for training?
2) What methods do people use for training through machine learning?
PS: I Googled, but I couldn’t find much content regarding this topic.
Have you checked the VGB demo from the SDK Browser? I think this video is what you need:
https://channel9.msdn.com/Blogs/k4wdev/Custom-Gestures-End-to-End-with-Kinect-and-Visual-Gesture-Builder
Hey Vangos,
Currently I am working on a small Kinect app. In this app, if I move my arm vertically from my hip to my shoulder, images move vertically in the UI. I have named this gesture Vertical Swipe Upward, and I am using VGB to implement it. I am facing the problem of false positives: if a user is in front of the sensor and just moves his arms vertically, Kinect detects it as a gesture. I tried to implement this gesture with the hand-closed state to avoid false positives, but it has no effect; the gesture gets detected whether my hands are open or closed. I have tried different techniques to avoid false positives, but to no avail. Can you please guide me on possible solutions? I only want a single-handed gesture, for example with the right arm. Thank you.
Hello. You could use algorithmic gestures (like the ones I describe in this article) instead of VGB. Sounds like you do not need that much complexity.
Yes, I can use the algorithmic gestures, but the problem I mentioned earlier will remain. How can I differentiate whether the user is lifting his/her arm upwards to perform the gesture or just to scratch their nose, etc.? If I just implement segments as you mentioned in your post, then it will be difficult to differentiate. For example:
Segment1 (HandRight.Position.Y > ElbowRight.Position.Y)
This segment will return true even if the user just points at the screen but doesn’t intend to perform any gesture.
Thank you for your time and help.
In previous comments, you mentioned that this can be achieved through a pre-determined window size. Can you please elaborate? Thank you.
Yes. The window size is the number of frames you need to make a decision about a gesture or anything else. For example, you may need 10 or 100 frames to understand whether someone is holding her hand above the shoulder. This depends on the complexity of what you are trying to achieve. Usually, for such small segments, 10-20 frames are more than enough. A sketch of the idea follows.
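For illustration (my sketch, not from the reply; names and the 15-frame value are assumptions), count consecutive frames in which the condition holds and only accept the pose once the window is filled. This filters out accidental poses, like pointing at the screen or scratching your nose:

// Hypothetical "hold the pose for N frames" filter (Kinect SDK 1.8 names).
int _consecutiveFrames = 0;
const int REQUIRED_FRAMES = 15;

bool CheckHandAboveShoulder(Skeleton skeleton)
{
    bool inPose =
        skeleton.Joints[JointType.HandRight].Position.Y >
        skeleton.Joints[JointType.ShoulderRight].Position.Y;

    // Reset the counter the moment the pose breaks.
    _consecutiveFrames = inPose ? _consecutiveFrames + 1 : 0;

    if (_consecutiveFrames >= REQUIRED_FRAMES)
    {
        _consecutiveFrames = 0;
        return true; // the pose was held long enough to look intentional
    }

    return false;
}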
Hi Vangos, excellent tutorial! I have been using the Kinect to map gestures by implementing the gestural logic manually. I have had good results so far, but I am having trouble coming up with a means of detecting circular motion (i.e., if the right arm is moving in a circular motion). I was wondering if you could point me toward the best strategy for achieving this? Many thanks.
Hi Syrne. You can check whether the hand is moving along the circumference of a circle (with a +/- threshold). So, the movement should satisfy the equation of a circle, within a given threshold (a few centimeters). Here is a rough sketch of the check:
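The code below is my sketch, not Vangos’ implementation. It assumes the circle’s center and radius have been estimated from earlier hand positions, and the 5 cm threshold is an assumption:

// Hypothetical circular-motion check (Kinect SDK 1.8 types): each hand sample
// must lie on the estimated circle, within a threshold of a few centimeters.
static bool IsOnCircle(SkeletonPoint hand, float centerX, float centerY, float radius)
{
    const float THRESHOLD = 0.05f; // 5 cm

    // Distance of the hand from the circle's center, in the X/Y plane.
    float dx = hand.X - centerX;
    float dy = hand.Y - centerY;
    float distance = (float)Math.Sqrt(dx * dx + dy * dy);

    // The point satisfies the circle equation if |distance - radius| < threshold.
    return Math.Abs(distance - radius) < THRESHOLD;
}

// To detect a full rotation, you could also track the covered angle over time,
// e.g. via Math.Atan2(dy, dx), and check that it sweeps through 360 degrees.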
Hi Vangos,
I am using Kinect v1 and SDK v1.8 for my final year project, and I want to track only one skeleton at a time. Please guide me on how I can do it with the simplest code, as I am not that good at Kinect coding. Kinect should track the person nearest to it and take gestures; other skeletons behind that nearest skeleton should not be detected. Please send me the code by email as soon as possible. Thanks.
Hi Saba. Install the Kinect Developer Toolkit and check the Skeleton Basics example. It picks the closest person.
Hi Vangos,
Thank you for all your sharing… it helps me a lot… I am developing an application using Gesture Builder… it works for 1 person, but I couldn’t manage to read 3 bodies simultaneously… so I constructed 3 different readers and 3 different sources and tried to feed in each body consecutively, but it always reads the last body… do you have any idea? I tried to change body.TrackingId; that does not work either… only when I destroy the reader and build it up again when a new body is tracked, but that makes it soo slow…
Hi Can. You should only use one reader. Use 3 gesture detectors, instead.
Hello Vangos,
I have just come across this solution 😉 Thanks, buddy…
You are great!!
Have a nice weekend!
Hi, I am currently doing research about physical therapy for stroke patients using Kinect. Can Kinect v2 accurately track the joints of users with varying body mass indices?
Thank you!
Hi Roq. Kinect is accurate enough, as long as the players wear tight clothes.
Hi again! Thanks for answering my previous question!
Since my research is about gesture recognition (verifying the range of motion of a specific joint),
what algorithm does Kinect v2 use? And how is it applied in code?
Thanks!
Hi Roq. Kinect uses depth/image segmentation algorithms to identify the human body joints.
To measure the range of motion, you have to process the raw body data over time.
Do you think it is possible to apply this to a gesture like “thumbs up”?
Hi Dag. Yes, you can definitely check the “thumbs-up” gesture. Here are the conditions you’ll need to check (sketched in code after the list):
1) Thumb Y greater than Hand Y.
2) Hand Z less than Spine Z.
3) HandState not equal to Open.
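As an illustration (my sketch, not from the reply), here is what those three conditions could look like with the Kinect v2 API, since HandState is a v2 concept:

// Hypothetical thumbs-up check following the three conditions above.
public static bool IsThumbsUp(Body body)
{
    var thumb = body.Joints[JointType.ThumbRight].Position;
    var hand = body.Joints[JointType.HandRight].Position;
    var spine = body.Joints[JointType.SpineMid].Position;

    bool thumbAboveHand = thumb.Y > hand.Y;                    // 1) thumb above hand
    bool handInFront = hand.Z < spine.Z;                       // 2) hand in front of the body
    bool handNotOpen = body.HandRightState != HandState.Open;  // 3) fist-like hand

    return thumbAboveHand && handInFront && handNotOpen;
}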
Thank you for this tutorial; it helps me in my project.
But there is no output when I wave my right hand! When I make the segment check a single position, the code works.
I don’t know what the problem is. I use Visual Studio 2015.
And can I use this code in Unity 5?
Hello. Each gesture works for a specific movement direction, so this is normal. You can modify the code to support additional directions.
You can definitely use the code in Unity 5. However, keep in mind that Microsoft’s SDK for Unity has the Left and Right joint names reversed (so, HandLeft would be HandRight, etc.).
Do the view and the coordinates vary from one Kinect to another?
My friend wrote code to implement elbow flexion and extension, but it did not work on my Kinect!
Hi sm. The coordinates do not vary. It’s probably an error in the algorithm used.
Hi sir,
I am a beginner with Kinect and just downloaded the source code, but I am not able to run it due to some errors:
1. Cannot apply indexing with [] to an expression of type ‘System.Collections.Generic.IReadOnlyDictionary’
2. ‘Microsoft.Kinect.KinectSensor’ does not contain a definition for ‘KinectSensors’
3. The name ‘KinectStatus’ does not exist in the current context
Hello Anki. Are you sure you are using SDK v2?
yeah.. I’m using KinectSDK-v2.0_1409-Setup..
Have you imported the Microsoft.Kinect.dll file to your project?
yes
Use KinectSensor.GetDefault(). For a complete Kinect v2 implementation, use this: https://github.com/LightBuzz/Vitruvius (it’s free and open-source).
I have made the changes, but the following error still comes up:
‘Microsoft.Kinect.KinectSensor’ does not contain a definition for ‘SkeletonStream’ and no extension method ‘SkeletonStream’ accepting a first argument of type ‘Microsoft.Kinect.KinectSensor’ could be found (are you missing a using directive or an assembly reference?)
Actually, I am new to this. Kindly help.
The code was published with Kinect SDK version 1.8. Check the link I sent you for a version-2 implementation. Vitruvius includes a complete gesture sample.
Hi, I’m using Kinect SDK version 1.8, and I wanted to try out your gesture sample code. However, I am unable to do so, as I’m not sure how to add the library into the project (Visual Studio 14), and when I wanted to add Microsoft Kinect as a reference, I could not find it in the list of references. Can you help me out?
These are the errors that surfaced:
CS0234 C# The type or namespace name “Kinect” does not exist in the namespace (are you missing an assembly reference?)
Plus 17 other similar errors.
Hello Zulhakim. Simply download Kinect SDK 1.8 and add the included assemblies.
Hello. I’ve managed to fix the problem. Now I want to implement a new gesture recognizing the hands being raised (Raise Left and Raise Right), and a counter to display the number of times the hands are raised. How is that done?
Hi!
Thank you for your very useful post and project. I’m using the astra-3D-fotonic camera and tried to use your code for gesture/hand recognition. But your code is not working (some objects are not recognized). I wanted to ask whether I am making a mistake or whether it lies in something else.
Thank you,
Hess
Hello Hess. Thank you for your message. This code is only applicable to the Kinect camera. It won’t work with other sensors.
Hi.
Really, thanks for the article and project. I used it in my project and it was so cool 🙂
For everyone that has a problem with the code: the source code for the article has a small issue, and when I ran it, it wouldn’t recognize the wave gesture. To fix it, just replace:

{
    return GesturePartResult.Succeeded;
}

with:

{
    return GesturePartResult.Succeeded;
}
return GesturePartResult.Undetermined;

in the two segments (you can Ctrl+F and search for this code in the article), and it works.
Thanks 🙂
This is 100% true. The complete Vitruvius Gesture Detection code includes what you proposed. Thank you!
Hello 🙂
I used the same code in Unity, but it is not working! The Microsoft.Kinect library is undefined.
Is there another library for Unity?
Hello. You can use the official Unity package. The namespace is called “Windows.Kinect” (instead of “Microsoft.Kinect”). Keep in mind that the Left and Right joints are reversed in Unity!
You could also use Vitruvius, which includes built-in Unity support, gestures, avateering, and more.
I changed it to “Windows.Kinect”, but it is still undefined.
You need to download the Unity package and include it in your project.
Yes, I imported the package from the Microsoft website into my project.
Should I use it in my script? If yes, how?
Thank you
If you have imported the package, then use “using Windows.Kinect;” in your scripts.
Hi, I would like to know why, when I try this project, there is no error but it doesn’t do anything?
Hello. You need to perform the wave gesture like this:
http://simple-ism.net/wp-content/uploads/2014/02/Wave.png
Hi,
I tried installing lightbuzz-vitruvius on Visual Studio 2017 for Kinect v1 using the following command:
Install-Package lightbuzz-vitruvius
And got the following error:
Could not install package. You are trying to install this package into a project that targets .NET Framework 4.
Even when I changed the .NET version to 4.5, I still got the same error.
Can you please tell me all the dependencies I need to run the code?
Thank you.
Hello. Vitruvius supports Kinect v2, so, it won’t be compatible with Kinect v1. You can find a legacy version of Vitruvius for Kinect v1 on GitHub.
Hi Vangos Pterneas,
Thanks for these amazing tutorials. Hope you don’t get bored teaching us ;)…
The questions:
How do we recognize the gestures of WaveRightSegment, WaveLeftSegment, SwipeLeftSegment, SwipeRightSegment, etc. automatically? How do we call these classes in the previous tutorial’s Program.cs? What should we put in Program.cs? Thank you…
Hello, Wan. Thank you for your comment. You can download Vitruvius and check the Gestures sample. The source code is available on GitHub.
void GestureRecognized(object sender, GestureEventArgs e)
{
    var gesture = e.GestureType;

    switch (gesture)
    {
        case GestureType.JoinedHands: break;
        case GestureType.Menu: break;
        case GestureType.SwipeDown: break;
        case GestureType.SwipeLeft: break;
        case GestureType.SwipeRight: break;
        case GestureType.SwipeUp: break;
        case GestureType.WaveLeft: break;
        case GestureType.WaveRight: break;
        case GestureType.ZoomIn: break;
        case GestureType.ZoomOut: break;
    }
}
Hi Vangos.
I saw your code on Github about saving joint data in CSV.
public void Update(Body body)
{
    if (!IsRecording) return;
    if (body == null || !body.IsTracked) return;

    string path = Path.Combine(Folder, _current.ToString() + ".line");

    using (StreamWriter writer = new StreamWriter(path))
    {
        StringBuilder line = new StringBuilder();

        if (!_hasEnumeratedJoints)
        {
            foreach (var joint in body.Joints.Values)
            {
                line.Append(string.Format("{0};;;", joint.JointType.ToString()));
            }
            line.AppendLine();

            foreach (var joint in body.Joints.Values)
            {
                line.Append("X;Y;Z;");
            }
            line.AppendLine();

            _hasEnumeratedJoints = true;
        }

        foreach (var joint in body.Joints.Values)
        {
            line.Append(string.Format("{0};{1};{2};", joint.Position.X, joint.Position.Y, joint.Position.Z));
        }

        writer.Write(line);

        _current++;
    }
}
How is the joint data saved in the CSV? Is it per frame? For example, if the sensor is set to capture 30 frames per second, does each frame contain multiple joint values?
I tried to record/save the joint data for the right shoulder for 18 seconds; 157 rows of joint data were recorded in the CSV file.
How can we save joint data in CSV files such that each frame’s corresponding values are stored?
Thanks!
The CSV data are recorded on a per-frame basis. Each row corresponds to 1 frame. The Update method should be called per frame. If the Body object is null, no rows will be inserted.
Hi again. We tried using the free version of Vitruvius. Does it really lag when body joints are recorded?
Hi Roq. No, there should be no lagging when recording. The Unity recorder is using multiple threads to record everything concurrently without lagging.
Hi Vangos Pterneas,
I am using Kinect 2.0 for gesture detection, and I came across your explanation. Thank you for an amazing insight. However, I am stuck at the step where I need an action to be generated when I perform a gesture. For example, if I perform the zoom gesture, the website should be zoomed in/out. How do I do this? And also, how do I link a webpage to this, so that I can perform gestures to produce results?
Hi Pranav. You can use the event handler to add the corresponding code there. The actual code depends on the Maps SDK you are using.
Hi Vangos,
I’m new to Kinect, and I’m trying to write an application with a list of gestures which the user has to perform. When he does one gesture, for example clapping his hands 5 times, the application will go on to the second gesture, etc. I wonder how to make that list, because my program detects all the gestures at the same time. I was looking for a command or something, because I have no idea how to make a list of gestures.
Thanks
Hello. You can keep a list of the expected gestures and raise your events only when the complete sequence has been detected. Kinect only gives you joint positions and orientations; the way we use this information is up to us. So, simply keep track of what has been detected and raise your event when the complete sequence is done. A rough sketch follows.
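For illustration (my sketch, not from the reply; the gesture names are placeholders), a small class could advance through the required sequence and raise an event only when the whole list has been completed in order:

// Hypothetical tracker for a fixed sequence of gestures.
public class GestureSequence
{
    readonly string[] _required = { "Clap", "SwipeRight", "WaveRight" };
    int _index = 0;

    public event EventHandler SequenceCompleted;

    // Call this whenever any single gesture is recognized.
    public void OnGestureRecognized(string name)
    {
        if (name != _required[_index]) return; // ignore out-of-order gestures

        _index++;

        if (_index == _required.Length)
        {
            if (SequenceCompleted != null)
            {
                SequenceCompleted(this, EventArgs.Empty);
            }

            _index = 0;
        }
    }
}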
Hello, I am working on my final year project, which tracks a person and at the same time sends info to a Dynamixel to move a light that follows the person. Have you worked on something similar? I am stuck at sending info from the Kinect to the Dynamixel to turn the light to follow the person.
Hello Vangos, thanks for the explanation, but it did not work for me. I am using Kinect with SDK 1.8.
You can check this open-source project.
Glad to read the blog! It was a great tutorial for beginners to learn gesture recognition. I really appreciate the blog. Keep sharing; I look forward to reading more.
Thank you!
Thank you for the nice tutorial. MS has a sample data set for Kinect here: https://www.microsoft.com/en-us/download/details.aspx?id=52283. I am looking for steps to import it into my new Kinect project. Should be simple steps… I am very new to the Kinect SDK.
Hello,
I am really grateful for your Unity Kinect library.
I used it in my senior project, and I graduated in 2017.
Now my research is published in a journal,
and I used this website as a reference.
My research:
https://www.researchgate.net/publication/332916813_JCAVE_A_3D_Interactive_Game_to_Assist_Home_Physiotherapy_Rehabilitation
Your website is on the reference page (the second-to-last entry, [29]).
First of all, congratulations on your publication! I am really excited to know that my work has helped you with your research and projects. Thank you very much for letting me know!
Hello Vangos,
Thank you so much for putting the effort into this code!
I am working on a project where I am using the XBOX 360 Kinect v1. I ran the code with no errors or issues. Kinect Studio identified the code, its files, and its location, and it’s connected. However, I am not seeing any data coming in, nor any screen showing a gesture. Is it supposed to show a screen with the gesture information, similar to a depth view or a stream of data?
Hello. The application is supposed to display a console message whenever a gesture is detected. You can create the user interface as you wish.
I ran the application on two separate occasions:
1. The console was displayed, but no messages appeared in it, even after I did several waves.
2. The second time, it gave me the following error: “Visual Studio cannot start debugging because the debug target ‘KinectSimpleGesture/bin/Debug\KinectSimpleGesture.exe’ is missing. Please build the project and retry, or set the OutputPath and AssemblyName properties appropriately to point at the correct location for the target assembly.”
What are the reasons for both cases? By the way, the error I had the second time also occurred with another example. I reset my computer, but it is still showing the same error.
Hello. It seems that your Visual Studio project is not set up correctly. Try to Clean and Rebuild the project.
Hi, may I ask, does the Kinect for Xbox 360 allow the user to create finger gesture recognition?
Hi Xiao. This is not supported in Kinect 360.
How about when I use OpenNI with the Kinect SDK v1.8?
Still, it won’t be able to detect the fingers.
Hi,
How can I use this example with Kinect v2?
You’ll need to update the class names to match the ones provided by the Kinect SDK v2. The sketch below shows roughly what the frame handler would look like.
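For illustration (my sketch, not from the reply), here is roughly how the skeleton-frame handler from the article maps to the Kinect v2 API, assuming a BodyFrameReader has been opened:

// Hypothetical Kinect v2 version of the frame handler. Broadly:
// Skeleton -> Body, SkeletonFrame -> BodyFrame,
// KinectSensor.KinectSensors -> KinectSensor.GetDefault().
// (FirstOrDefault requires using System.Linq.)
void Reader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            Body[] bodies = new Body[frame.BodyCount];
            frame.GetAndRefreshBodyData(bodies);

            var user = bodies.FirstOrDefault(b => b.IsTracked);

            if (user != null)
            {
                // _gesture.Update(user); // after porting the segments to Body
            }
        }
    }
}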
Hi Vangos, I’m currently working on an open-source gesture-based API. If I buy Vitruvius for SDK 2.0, can I use your implementation of gesture detection in the open-source project, extend it, and more? As I said, it is not for commercial purposes.
Hi, I’m new at getting around with Kinect. I have the Kinect from Xbox 360 with the 1.8 SDK. I try to run the demo code, but it is not showing me any output. There are no build errors or any errors after running it, but I’m getting no output on my console. Please, I need help with what I am doing wrong, and could someone direct me to good links in order to get a clear understanding of Kinect gestures? Thanks.
P.S.: I’m doing a gesture project using the Xbox 360 Kinect.
You can start by adding some logs in the different states and see which ones are reached by your code.