Gesture recognition is a fundamental element when developing Kinect-based applications (or any other Natural User Interface). Gestures are used for navigation, interaction, or data input. The most common gesture examples include waving, sweeping, zooming, joining hands, and many more. Unfortunately, the current Kinect for Windows SDK does not include a gesture-detection mechanism out of the box. So, you thought that recognizing gestures using Kinect is a pain in the ass? Not any more. Today I’ll show you how to implement your own gestures using some really easy techniques. There is no need to be a Math guru or an Artificial Intelligence Yoda to build a simple gesture-detection mechanism.

Prerequisites

  • Kinect for Windows sensor (or Kinect for XBOX, version 1)
  • Kinect for Windows SDK 1.8
  • Visual Studio 2013 or later

What is a gesture?

Before implementing something, it is always good to define it. Kinect provides the position (X, Y, and Z) of the user’s joints 30 times per second; each update is a frame. If specific joints move to specific relative positions for a given amount of time, then you have a gesture. So, in terms of Kinect, a gesture is the relative position of some joints for a given number of frames. Let’s take the wave gesture as an example. People wave by raising their left or right hand and moving it from side to side. Throughout the gesture, the hand usually remains above the elbow and moves periodically from left to right. Here is a graphical representation of the movement:

[Figure: Kinect wave gesture]

Now that you’ve seen and understood what a gesture is, let’s try to specify its underlying algorithm.

Gesture segments

In the wave gesture, the hand remains above the elbow and moves periodically from left to right. Each position (left / right) is a discrete part of the gesture. Formally, these parts are called segments.

So, the first segment would contain the conditions “hand above elbow” and “hand right of elbow”:

  • Hand.Position.Y > Elbow.Position.Y AND
  • Hand.Position.X > Elbow.Position.X

Similarly, the second segment would contain the conditions “hand above elbow” and “hand left of elbow”:

  • Hand.Position.Y > Elbow.Position.Y AND
  • Hand.Position.X < Elbow.Position.X

That’s it. If you detect the above segments alternating consecutively at least three or four times, then the user is waving! In .NET, the source code is really simple: just two classes, one per segment. Each segment class implements an Update method, which determines whether the specified conditions are met for a given skeleton. It returns Succeeded if every condition of the segment is met, Failed if the gesture has clearly been broken (the hand dropped below the elbow), and Undetermined for the in-between frames, while the hand is still traveling into position.


// WaveGestureSegments.cs
using Microsoft.Kinect;
namespace KinectSimpleGesture
{
    public interface IGestureSegment
    {
        GesturePartResult Update(Skeleton skeleton);
    }
    public class WaveSegment1 : IGestureSegment
    {
        public GesturePartResult Update(Skeleton skeleton)
        {
            // Hand above elbow
            if (skeleton.Joints[JointType.HandRight].Position.Y > 
                skeleton.Joints[JointType.ElbowRight].Position.Y)
            {
                // Hand right of elbow
                if (skeleton.Joints[JointType.HandRight].Position.X > 
                    skeleton.Joints[JointType.ElbowRight].Position.X)
                {
                    return GesturePartResult.Succeeded;
                }
                // Hand above the elbow, but not yet right of it
                return GesturePartResult.Undetermined;
            }
            // Hand dropped
            return GesturePartResult.Failed;
        }
    }
    public class WaveSegment2 : IGestureSegment
    {
        public GesturePartResult Update(Skeleton skeleton)
        {
            // Hand above elbow
            if (skeleton.Joints[JointType.HandRight].Position.Y > 
                skeleton.Joints[JointType.ElbowRight].Position.Y)
            {
                // Hand left of elbow
                if (skeleton.Joints[JointType.HandRight].Position.X < 
                    skeleton.Joints[JointType.ElbowRight].Position.X)
                {
                    return GesturePartResult.Succeeded;
                }
                // Hand above the elbow, but not yet left of it
                return GesturePartResult.Undetermined;
            }
            // Hand dropped
            return GesturePartResult.Failed;
        }
    }
}

GesturePartResult is a simple enum with three values:


// GesturePartResult.cs
namespace KinectSimpleGesture
{
    public enum GesturePartResult
    {
        // The gesture was clearly broken (e.g. the hand dropped).
        Failed,
        // Every condition of the segment was met.
        Succeeded,
        // Not enough information yet; keep checking on the next frames.
        Undetermined
    }
}

Note: The Undetermined result is important. While the hand travels from one side of the elbow to the other, the current segment is neither satisfied nor abandoned. If we returned Failed on those transition frames, the gesture would reset on almost every update. Undetermined simply tells the recognizer to wait for more data.

Updating the gesture

We now need a way to update and check the gesture every time the sensor provides us with new skeleton/body data. This check will be done in a separate class and will be called 30 times per second, or as many times as our Kinect sensor allows. When updating a gesture, we check each segment and decide whether the movement is complete or whether we need to keep asking for data.

Window size

The number of frames within which the gesture must complete is called the window size, and you find it by experimenting with your code. For simple gestures that last approximately a second, a window size of 30 to 50 frames will do the job just fine. For the wave gesture, I chose 50.
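
If you want a starting point before experimenting, you can estimate the window size from the expected gesture duration and the sensor frame rate. Here is a minimal sketch; the helper name and the 1.5-second duration are illustrative assumptions, not values from the SDK:

// Rough starting value for the window size; tune it by experimenting.
static int EstimateWindowSize(double gestureSeconds, int framesPerSecond = 30)
{
    return (int)(gestureSeconds * framesPerSecond);
}

// Example: EstimateWindowSize(1.5) == 45, in the same ballpark as the 50 chosen above.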

The gesture class

Having decided on the window size parameter, we can now build the WaveGesture class. Notice the process:

  • In the constructor, we create the gesture parts and specify their order in the _segments array. You can use as many occurrences of each segment as you like!
  • In the Update method, we keep track of the frame index and check every segment for success or failure.
  • If every segment has succeeded, we raise the GestureRecognized event and reset the gesture.
  • If a segment fails or the window size has been reached, we reset the gesture and start over.

Here is the final class for our wave gesture:


// WaveGesture.cs
using Microsoft.Kinect;
using System;
namespace KinectSimpleGesture
{
    public class WaveGesture
    {
        readonly int WINDOW_SIZE = 50;
        IGestureSegment[] _segments;
        int _currentSegment = 0;
        int _frameCount = 0;
        public event EventHandler GestureRecognized;
        public WaveGesture()
        {
            WaveSegment1 waveSegment1 = new WaveSegment1();
            WaveSegment2 waveSegment2 = new WaveSegment2();
            _segments = new IGestureSegment[]
            {
                waveSegment1,
                waveSegment2,
                waveSegment1,
                waveSegment2,
                waveSegment1,
                waveSegment2
            };
        }
        public void Update(Skeleton skeleton)
        {
            GesturePartResult result = _segments[_currentSegment].Update(skeleton);
            if (result == GesturePartResult.Succeeded)
            {
                if (_currentSegment + 1 < _segments.Length)
                {
                    _currentSegment++;
                    _frameCount = 0;
                }
                else
                {
                    if (GestureRecognized != null)
                    {
                        GestureRecognized(this, new EventArgs());
                    }
                    Reset();
                }
            }
            else if (result == GesturePartResult.Failed || _frameCount == WINDOW_SIZE)
            {
                Reset();
            }
            else
            {
                _frameCount++;
            }
        }
        public void Reset()
        {
            _currentSegment = 0;
            _frameCount = 0;
        }
    }
}

Using the code

Using the code we created is straightforward. Create an instance of the WaveGesture class inside your program and subscribe to the GestureRecognized event. Remember to call the Update method whenever you have a new Skeleton frame. Here is a complete Console app example:


using Microsoft.Kinect;
using System;
using System.Linq;
namespace KinectSimpleGesture
{
    class Program
    {
        static WaveGesture _gesture = new WaveGesture();
        static void Main(string[] args)
        {
            var sensor = KinectSensor.KinectSensors.Where(
                         s => s.Status == KinectStatus.Connected).FirstOrDefault();
            if (sensor != null)
            {
                sensor.SkeletonStream.Enable();
                sensor.SkeletonFrameReady += Sensor_SkeletonFrameReady;
                _gesture.GestureRecognized += Gesture_GestureRecognized;
                sensor.Start();
            }
            Console.ReadKey();
        }
        static void Sensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (var frame = e.OpenSkeletonFrame())
            {
                if (frame != null)
                {
                    Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                    frame.CopySkeletonDataTo(skeletons);
                    if (skeletons.Length > 0)
                    {
                        var user = skeletons.Where(
                                   u => u.TrackingState == SkeletonTrackingState.Tracked).FirstOrDefault();
                        if (user != null)
                        {
                            _gesture.Update(user);
                        }
                    }
                }
            }
        }
        static void Gesture_GestureRecognized(object sender, EventArgs e)
        {
            Console.WriteLine("You just waved!");
        }
    }
}

That’s it! Now stand in front of your Kinect sensor and wave using your right hand!

Something to note

Obviously, you cannot expect your users to do everything right. One might wave without completing the entire movement. Another might perform the movement too quickly or too slowly. When developing a business app targeting the Kinect platform, you have to be aware of all these issues and handle them in your code. In the common case, you’ll need to decide whether the user is “almost” performing a gesture; that is, you’ll need to let a number of frames pass before determining the final gesture result. This is exactly what the Undetermined result is for.
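
For example, one simple way to add such slack is to tolerate a few Failed frames before resetting, instead of resetting on the very first one. Here is a minimal sketch of an alternative Update method; MAX_FAILED_FRAMES and _failedFrameCount are hypothetical names, not part of the classes above:

// Variation of WaveGesture.Update that forgives a few noisy frames.
const int MAX_FAILED_FRAMES = 3;    // hypothetical tolerance
int _failedFrameCount = 0;

public void Update(Skeleton skeleton)
{
    GesturePartResult result = _segments[_currentSegment].Update(skeleton);

    if (result == GesturePartResult.Failed)
    {
        // Tolerate a few bad frames before giving up on the gesture.
        _failedFrameCount++;
        if (_failedFrameCount > MAX_FAILED_FRAMES)
        {
            Reset();
            _failedFrameCount = 0;
        }
        return;
    }

    _failedFrameCount = 0;
    // ... continue with the Succeeded / Undetermined handling shown earlier ...
}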

Vitruvius

If you want more production-ready gestures right now, consider downloading Vitruvius. Vitruvius is a free and open-source library I built, which provides many utilities for your Kinect applications. It currently supports 9 gestures, with more to come. The code is more generic, and you can easily build your own extensions on top of it. Give it a try, enjoy, and even contribute yourself!

Vangos Pterneas

Vangos Pterneas is a software engineer, book author, and award-winning Microsoft Most Valuable Professional (2014-2019). Since 2012, Vangos has been helping Fortune-500 companies and ambitious startups create demanding motion-tracking applications. He's obsessed with analyzing and modeling every aspect of human motion using AI and Maths. Vangos shares his passion by regularly publishing articles and open-source projects to help and inspire fellow developers.

140 Comments

  • Glenn Geenen says:

    Thank you for this post. I used this approach of gesture segments with great success. Whenever I implement some unique gestures I’ll make sure to contribute them to your projects.

    • Hi Glenn. Many thanks for your comment. I would be glad to add more contributors to my Kinect projects. Whenever you have something ready, feel free to email me and I will add you as well.

  • desnos says:

    hi,
    if i buy an xbox one + kinect 2, all inside the box,

    can the kinect inside the box be used with SDK 2?
    is there a difference between the developer version and that one?

    • Hi Desnos. Currently, the XBOX Kinect is not compatible with the SDK. I think it cannot connect to a PC, either. However, the code I provided can be used with a Kinect for XBOX version 1. I’ll keep you up to date in case something changes in the future.

  • Tom Friday says:

    This is a great tutorial for beginners, and I have been able to learn the process of gesture recognition. You mention an Undetermined state in your post, and this interests me because there are a number of times when the gesture is not recognised and it gets stuck until the system resets. If the hand is not in the correct position once the reset occurs, then you have to wait again, which makes for a poor user experience.

    Could you shed some light on how to apply the Undetermined result, as this will turn my application from good to great!

    Thanks

  • Muhammad Roshaan says:

    Hey man, please guide me on how to detect a punch gesture through kinect, as my final year project is an action-based game and i’m facing problems regarding gesture detection. any help would be appreciated. Thanks

    • Hi Muhammad. That should be fairly easy. You have to check the following things (a quick sketch follows the list):
      * Hands near the chest (SpineShoulder joint)
      * Left hand Z value < right hand Z value (or vice-versa)
      * Set a threshold of 15 frames
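
      To illustrate, here is a minimal sketch of those checks as segments, following the article’s pattern. It assumes Kinect v2 Body/JointType names (SpineShoulder is a v2 joint), and the 0.2 m thresholds are illustrative values you would tune:

      // Sketch: right-hand punch, Kinect v2 names assumed.
      public class PunchStartSegment
      {
          // Hand close to the chest (within ~20 cm of its depth).
          public GesturePartResult Update(Body body)
          {
              var hand = body.Joints[JointType.HandRight].Position;
              var chest = body.Joints[JointType.SpineShoulder].Position;

              if (Math.Abs(chest.Z - hand.Z) < 0.2f)
              {
                  return GesturePartResult.Succeeded;
              }
              return GesturePartResult.Undetermined;
          }
      }

      public class PunchEndSegment
      {
          // Hand thrust well in front of the chest, toward the sensor.
          public GesturePartResult Update(Body body)
          {
              var hand = body.Joints[JointType.HandRight].Position;
              var chest = body.Joints[JointType.SpineShoulder].Position;

              if (chest.Z - hand.Z > 0.2f)
              {
                  return GesturePartResult.Succeeded;
              }
              return GesturePartResult.Undetermined;
          }
      }

      Chain the two segments with a window of about 15 frames, exactly like the wave gesture in the article.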

      • Sunil says:

        Hi Vangos, How can I implement the grab gesture with Kinect. I am trying to write the code from scratch. Which joint parameters would i need for the same?

  • RS MSIC 2015 says:

    Hi, I am currently building a PowerPoint slideshow changer using gestures. How do I implement a swipe left / swipe right gesture for the user to go on to another slide, with a pause in between slide changes? Much appreciated if you can guide me along. Thanks very much!

  • Bryan Agdon says:

    sir can u help me ? how to use the kinect version1 for tracking finger .. thanks .. hope u can help us

  • Jen Liu says:

    Hi sir,
    I am a beginner on kinect and just downloaded the source code, but I am not able to run it.
    There are some errors, e.g. “The type or namespace name Skeleton could not be found”.
    Can you help me about this?

    • Hi Jen. I guess you are using Kinect SDK version 2. This code was written for Kinect SDK version 1.8. To run this using Kinect v2, you need to import the Microsoft.Kinect reference and replace “Skeleton” with “Body”. You can also check the Vitruvius gestures for a more advanced sample.

  • vinayak says:

    hi Vangos,
    I am trying to build an app that will capture an image of the user. I want to implement hand gestures and color at the same time. How can I do this? Since I found your site, Kinect has become very interesting. I love the way you explain things; it’s awesome for freshers like me. So, can you point me in the right direction? I am using Kinect v2, and I am waiting for your book too. Please help me.

  • huda says:

    hi vangos ,
    i wanted to ask if this code would work with Kinect v2 ? and how could i apply that to other gestures for different body parts ?
    thanks

    • Hi Huda. This code applies to Kinect v2, too, with minor modifications (joint names). There is a complete implementation of more gestures in my project Vitruvius. Hope you’ll find it useful. It’s free and open-source.

  • Ekwalla says:

    Hello Vangos Pterneas,
    For my project, my Prof asked me to assume he is a surgeon. He has his PC (with pictures or files on it) and a Kinect sensor in
    his office, and he wants to control the pictures of patients through gestures (left, right, click or zoom picture).
    My problem is I don’t really know where to start or how to go about it, and I don’t know what I need. I can program in Java and C#.
    Maybe I have to install Visual Studio first. If you could just tell me all that I need and how to start, I would be so
    grateful. Thanks.

  • Arun.s says:

    i saw your articles on CodeProject and your blog.. they are very useful to me…thank you very much..i did a jump action
    using System.Reactive.Linq, but i feel it is very hard to jump and i felt tired physically. please suggest a simpler way to detect the jump
    action…tools: .NET, C# (WPF)…thank you very much.

  • Miguel says:

    I am doing a project in school using c# 2010 express and sdk 1.8….i try your code but it is not working. I have to create a code that will keystroke F3 key when my left hand and right hand are on the right side. Is there any way i can send you my code so you can help me out? I try everything 🙁 Thanks!

    • Hi Miguel. The code works using Visual Studio 2013. I have not tested it on 2010.

      To achieve what you asked for, you simply need to compare the X position of the hand joints relative to the chest or spine joint.

  • Suman says:

    Hello Vangos,

    I am very new to kinect as well as to C#. I have downloaded all four of your projects and then tried to add a few new gestures – sitting, standing and walking. I have created classes like StandingSegment, SeatingSegment, etc. I want to display “Standing” on the UI when the user is just standing, or “Seated” when the user is sitting. But my logic is not working. Would you be able to help me out? Appreciate your time.

    • Hi Suman. You should probably check the angles formed between the joints. For example, you can check the angle between the hip, knee, and ankle to understand whether the user is standing or sitting. You can use the code from my project Vitruvius for easy angle calculations.
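
      For instance, here is a minimal sketch of such an angle check, using Kinect v1 Skeleton joints; the 140-degree threshold is an illustrative assumption you would tune:

      // Angle (in degrees) at the knee, between the knee-hip and knee-ankle vectors.
      static double KneeAngle(Skeleton skeleton)
      {
          var hip = skeleton.Joints[JointType.HipRight].Position;
          var knee = skeleton.Joints[JointType.KneeRight].Position;
          var ankle = skeleton.Joints[JointType.AnkleRight].Position;

          double ux = hip.X - knee.X, uy = hip.Y - knee.Y, uz = hip.Z - knee.Z;
          double vx = ankle.X - knee.X, vy = ankle.Y - knee.Y, vz = ankle.Z - knee.Z;

          double dot = ux * vx + uy * vy + uz * vz;
          double lengths = Math.Sqrt(ux * ux + uy * uy + uz * uz) *
                           Math.Sqrt(vx * vx + vy * vy + vz * vz);

          return Math.Acos(dot / lengths) * 180.0 / Math.PI;
      }

      // A nearly straight knee (say, above 140 degrees) suggests standing;
      // a sharply bent knee suggests sitting.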

  • vinayak says:

    hi vangos, how are you? hope you are doing good. i wanted to ask: is there any sample experiment on displaying images on our face in kinect v2? if you have one, or anything similar, that would be great. and when can i expect the book online from you? i will be waiting for that. thanks a lot, and let me know how to display images on our face. thanks
    vinayak

  • Ekwalla says:

    Thanks very Much Vangos for your quick Reply. You also directed to a link which is about applications on Windows 8.1 but i have windows 7. Is that going to be a problem? Also will i have problems if i use Visual Studio 2010 instead of Visual studio 2013 Express. Thanks once more for your reply.
    Ekwalla

  • Ekwalla says:

    Hello Vangos, Thank you for your last Message. I just wish to have a clarification. Microsoft says sdk for windows 7 will work but you disagree. Please clarify me because i wish to try out the tutorial link u sent to me and i got windows 7.
    https://www.microsoft.com/en-us/download/details.aspx?id=40278

  • Cacho says:

    Hi!

    If i have code for Kinect v1, how hard is it to port to Kinect v2? Does it involve a lot of changes to the code?

    Thanks

  • Ted Sullivan says:

    Hi – Vangos. I came across your article and had a question for you; i hope you don’t mind. I need to have sign language details recognized – specifically finger speed and the amount of movement by the fingers – for my research project. Is it possible to do this using Kinect? And if so, can I have this done without having to write a program? (I am not a programmer.)
    Appreciate any advice you may be able to provide. Thanks.

  • Ekwalla says:

    Hello Vangos, i have some code examples that were written with Kinect for Windows SDK v1.6. Can i run this code in Visual Studio with Kinect for Windows SDK v1.7 installed? Thanks a lot in advance for your kind response

  • Simon says:

    Thank you very much Vangos,
    i am currently working on a gesture recognition project with the Kinect v2, and your post helped me very much.
    Even though i use Java with the J4K library, your code and your additional explanations got me started in less than 10 min.

  • Faisal Ikhlaq says:

    Hello Vangos,
    I am totally new to Kinect programming. I have basic concepts of OOP with C# and WPF. I have been trying my luck for almost a month now to find some in-depth tutorials regarding SDK v2 which would help me get started, as i am unable to understand these blog posts. Can you please guide me through the steps i need to get started? How should i approach Kinect programming? I have all the theoretical knowledge regarding Kinect v2 which is required. It will be of great help. Thanks in advance.

  • Daniel says:

    Hi Vangos,

    I’m totally new to kinect programming and I was trying to use your solution to detect if someone is eating/drinking or generally moving his hand to his mouth. I use 3 segments.
    1. Segment (right hand side): Y-HandPosition > Y-ElbowPosition and X-HandPosition < X-ElbowPosition
    2. Segment: Y-HeadPosition – Y-HandPosition 0.1

    Segment 1 and 2 work very fine, but I can’t get a solution to determine if the gesture is finished (Segment 3). Do you have a suggestion how to solve the problem?

    Thanks!

  • Sanket says:

    Hi Vangos,
    Any idea how can we create a swipe gesture using Index Finger? (say right hand index finger)

  • Mughees Ismail says:

    Nice tutorial. I have to recognise 20 gestures for my system, and I have a couple of questions:
    1) Will I have to develop code for each gesture separately?
    2) VGB uses machine learning, shouldn’t gesture recognition be done through machine learning?

  • FI says:

    Hey Vangos,

    Currently i am working on a small Kinect app. In this app, if i move my arms vertically from my hip to my shoulder, images move vertically in the UI. I have named this gesture Vertical Swipe Upward, and i am using VGB to implement it. I am facing the problem of false positives: if a user is in front of the sensor and just moves his arms vertically, kinect detects it as the gesture. I have tried to implement this gesture with the hand-closed state to avoid false positives, but it has no effect. The gesture gets detected whether my hands are open or closed. I have tried different techniques to avoid false positives but with no luck. Can you please guide me on what a possible solution might be? I only want a single-handed gesture, for example with the right arm. Thank you.

    • Hello. You could use algorithmic gestures (like the ones I describe in this article) instead of VGB. Sounds like you do not need that much complexity.

      • FI says:

        Yes, i can use the algorithmic gestures, but the problem i mentioned earlier will remain. How can i possibly differentiate whether the user is lifting his/her arm upwards to perform the gesture or just to scratch their nose, etc.? If i just implement segments as you mentioned in your post, then it will be difficult to differentiate. For example:

        Segment1( HandRight.Position.Y > ElbowRight.Position.Y)

        These segments will return true even if the user just points at the screen but doesn’t intend to perform any gesture.
        Thank you for your time and help.

        • FI says:

          In previous comments you mentioned that this can be achieved through a pre-determined window size. Can you please elaborate more? Thank you.

          • Yes. The window size is the number of frames you need to make a decision about a gesture (or anything else). For example, you may need 10 or 100 frames to understand whether someone is holding her hand above the shoulder. This depends on the complexity of what you are trying to achieve. Usually, for such small segments, 10-20 frames are more than enough.

  • Syrne says:

    Hi Vangos, excellent tutorial! I have been using the Kinect to map gestures by implementing the gestural logic manually. I have had good results so far, but I am having trouble coming up with a means of detecting circular motion (i.e. if the right arm is moving in a circular motion…). I was wondering if you could point me toward the best strategy for achieving this? Many thanks.

    • Hi Syrne. You can check whether the hand keeps a roughly constant distance from a center point while it moves. In other words, the movement should follow the equation of a circle, within a given threshold (a few centimeters).
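
      A minimal sketch of that test; the center and radius would come from earlier hand positions (e.g. their average), and the 5 cm tolerance is an illustrative assumption:

      // True while the hand stays on a circle of the given center and radius,
      // within a tolerance of a few centimeters (values in meters).
      static bool IsOnCircle(SkeletonPoint hand, SkeletonPoint center,
                             double radius, double tolerance = 0.05)
      {
          double dx = hand.X - center.X;
          double dy = hand.Y - center.Y;
          double distance = Math.Sqrt(dx * dx + dy * dy);

          return Math.Abs(distance - radius) < tolerance;
      }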

  • saba says:

    HI Vangos,
    I am using kinect v1 and sdk v1.8 for my final year project, and i want to track only one skeleton at a time. Please guide me on how i can do it with the simplest code, as i am not that good at kinect coding. Kinect should track the person nearest to it and take gestures; other skeletons behind that nearest skeleton should not be detected. Please provide me the code as soon as possible on my email address. Thanks.

  • Can says:

    Hi Vangos,
    thank you for all your sharing… it helps me a lot… I am developing an application using gesture builder… it works for 1 person, however I couldn’t manage to read 3 bodies simultaneously… so I constructed 3 different readers and 3 different sources and tried to send each body consecutively, but it always reads the last body… do you have any idea? I tried to change body.trackingId and it does not work either… only when I destroy the reader and build it up again when the new body is tracked, but that makes me soo slow…

  • Can says:

    Hello Vangos,
    I have just come across the solution 😉 thanks buddy…
    You are great!!
    have a nice weekend!

  • roq says:

    Hi, i am currently doing research about physical therapy for stroke patients using kinect. Can kinect v2 accurately track the joints of users with varying body mass indices?
    Thank you!

    • Hi Roq. Kinect is accurate enough, as long as the players wear tight clothes.

      • roq says:

        Hi again! thanks for answering my previous question!

        since my research is about gesture recognition (verifying the range of motion of a specific joint),
        what algorithm does kinect v2 use? and how is it applied in code?

        Thanks!

        • Hi Roq. Kinect uses depth/image segmentation algorithms to identify the human body joints.

          To measure the range of motion, you have to process the raw body data over time.

  • Thank you ever so much for your article. Thanks again. Really great. – Slappey

  • Dag says:

    Do you think it is possible to apply this to a gesture like “thumbs up”?

    • Hi Dag. Yes, you can definitely check the “thumb-up” gesture. Here are the conditions you’ll need to check (a quick sketch follows the list):

      1) Thumb Y greater than Hand Y.
      2) Hand Z less than Spine Z.
      3) HandState not equal to Open.
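
      A minimal sketch of those checks, assuming Kinect v2 Body data (HandState and the thumb joints are Kinect v2 features):

      // Sketch: thumbs-up with the right hand, Kinect v2 names assumed.
      public GesturePartResult Update(Body body)
      {
          var thumb = body.Joints[JointType.ThumbRight].Position;
          var hand = body.Joints[JointType.HandRight].Position;
          var spine = body.Joints[JointType.SpineMid].Position;

          // 1) Thumb above the hand, 2) hand in front of the spine,
          // 3) the rest of the hand closed.
          if (thumb.Y > hand.Y && hand.Z < spine.Z &&
              body.HandRightState != HandState.Open)
          {
              return GesturePartResult.Succeeded;
          }
          return GesturePartResult.Undetermined;
      }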

  • sm says:

    Thank you for this tutorial. It helps me in my project.
    But there is no output when I wave my right hand! When I make the segment check a single position, the code works.
    I don’t know what the problem is. I use Visual Studio 2015.

    And can I use this code in Unity5 ??

    • Hello. Each gesture works for a specific movement direction, so this is normal. You can modify the code to support additional directions.

      You can definitely use the code in Unity 5. However, keep in mind that Microsoft’s SDK for Unity has the Left and Right joint names reversed (so, HandLeft would be HandRight, etc.).

  • sm says:

    do the view and coordinates vary from one Kinect to another?
    My friend wrote the code to implement elbow flexion and extension, but it did not work on my Kinect!

  • Anki says:

    Hi sir,
    I am a beginner on kinect and just downloaded the source code, but I am not able to run it due to some errors:
    1. Cannot apply indexing with [] to an expression of type ‘System.Collections.Generic.IReadOnlyDictionary’
    2. ‘Microsoft.Kinect.KinectSensor’ does not contain a definition for ‘KinectSensors’
    3. The name ‘KinectStatus’ does not exist in the current context

  • Anki says:

    yeah.. I’m using KinectSDK-v2.0_1409-Setup..

  • Anki says:

    I have made the changes, but the following error still occurs:
    ‘Microsoft.Kinect.KinectSensor’ does not contain a definition for ‘SkeletonStream’ and no extension method ‘SkeletonStream’ accepting a first argument of type ‘Microsoft.Kinect.KinectSensor’ could be found (are you missing a using directive or an assembly reference?)
    Actually I am new to this.. Kindly help.

  • Zulhakim says:

    Hi, I’m using Kinect SDK version 1.8, and I wanted to try out your gesture sample code. However, I am unable to do so, as I’m not sure how to add the library into the project (Visual Studio 14), and when I wanted to add Microsoft Kinect as a reference, I could not find it in the list of references. Can you help me out?

    These are the errors that surfaced:

    CS0234 C# The type or namespace name “Kinect” does not exist in the namespace (are you missing an assembly reference?)
    17 other similar errors.

  • Hess says:

    Hi!

    Thank you for your very useful post and project. I’m using the astra-3D-fotonic camera and am trying to use your code for gesture/hand recognition. However, your code is not working (some objects are not recognized). I wanted to ask whether I am making a mistake or whether it is something else.

    Thank you,
    Hess

  • yahya says:

    hi.
    really thanx for the article and project. i used it in my project and it was so cool 🙂
    for everyone who has a problem with the code:
    actually the source code for the article has a small issue; when i ran it, it wouldn’t recognize the wave gesture. the fix is to stop each segment from falling through to Failed while the hand is still above the elbow. in both segment classes, right after the block that returns GesturePartResult.Succeeded, add:

    return GesturePartResult.Undetermined;

    (you can ctrl+F and search for this code in the article), and it works.
    thanx 🙂

  • sm says:

    Hello 🙂

    I used the same code in unity but it’s not working! the Microsoft.Kinect library is undefined.

    Is there another library for unity?

  • wan says:

    hi, i would like to know why, when i try this project, there is no error but it doesn’t do anything?

  • Manasa says:

    Hi,
    I tried installing lightbuzz-vitruvius on Visual Studio 2017 for kinect V1 using the following command :
    Install-Package lightbuzz-vitruvius

    And got the following errors :
    Could not install package. You are trying to install this package into a project that targets .NET framework 4.

    Even when I changed the .NET version to 4.5, I still got the same error.
    Can you please tell me all the dependencies I need to run the code?
    Thank you.

  • Wan Mohd Rizhan says:

    Hi Vangos Pterneas,

    Thanks for these amazing tutorials. Hope you don’t get bored of teaching us ;)…
    The questions:
    How do we recognize the gestures of WaveRightSegment, WaveLeftSegment, SwipeLeftSegment, SwipeRightSegment, etc. automatically? How do we call these classes in the previous tutorial’s “Program.cs”? What should we put in “Program.cs”? Thank you…

    • Hello, Wan. Thank you for your comment. You can download Vitruvius and check the Gestures sample. The source code is available on GitHub.


      void GestureRecognized(object sender, GestureEventArgs e)
      {
          var gesture = e.GestureType;

          switch (gesture)
          {
              case GestureType.JoinedHands: break;
              case GestureType.Menu: break;
              case GestureType.SwipeDown: break;
              case GestureType.SwipeLeft: break;
              case GestureType.SwipeRight: break;
              case GestureType.SwipeUp: break;
              case GestureType.WaveLeft: break;
              case GestureType.WaveRight: break;
              case GestureType.ZoomIn: break;
              case GestureType.ZoomOut: break;
          }
      }

  • Roq says:

    Hi Vangos.

    I saw your code on Github about saving joint data in CSV.

    public void Update(Body body)
    {
        if (!IsRecording) return;
        if (body == null || !body.IsTracked) return;

        string path = Path.Combine(Folder, _current.ToString() + ".line");

        using (StreamWriter writer = new StreamWriter(path))
        {
            StringBuilder line = new StringBuilder();

            if (!_hasEnumeratedJoints)
            {
                foreach (var joint in body.Joints.Values)
                {
                    line.Append(string.Format("{0};;;", joint.JointType.ToString()));
                }
                line.AppendLine();

                foreach (var joint in body.Joints.Values)
                {
                    line.Append("X;Y;Z;");
                }
                line.AppendLine();

                _hasEnumeratedJoints = true;
            }

            foreach (var joint in body.Joints.Values)
            {
                line.Append(string.Format("{0};{1};{2};", joint.Position.X, joint.Position.Y, joint.Position.Z));
            }

            writer.Write(line);

            _current++;
        }
    }

    how is joint data saved in CSV? is it per frame? for example, if the sensor is set to capture 30 frames per second, does each frame contain multiple joint data?

    I tried to record/save the joint data for the right shoulder for 18 seconds. 157 rows of joint data were recorded in the CSV file.

    How can we save joint data in CSV files such that each frame’s corresponding values are stored?

    Thanks!

    • The CSV data are recorded on a per-frame basis. Each row corresponds to 1 frame. The Update method should be called per frame. If the Body object is null, no rows will be inserted.

  • Roq says:

    Hi again. We tried using the free version of Vitruvius. Does it really lag when body joints are recorded?

  • Pranav says:

    Hi Vangos Pterneas,

    I am using Kinect 2.0 for gesture detection, and I came across your explanation. Thank you for an amazing insight. However, I am stuck at the step where I need an action to be generated when I perform a gesture. For example, if i perform the zoom gesture, the website should be zoomed in/out. How do I do this? And also, how do I link a webpage to this so that i can perform gestures to produce results?

  • Agnieszka says:

    Hi Vangos,

    I’m new to Kinect and I’m trying to write an application with a list of gestures which the user has to perform. When he does one gesture, for example claps his hands 5 times, the application will move on to the second gesture, etc. I wonder how to make that list, because my program detects all gestures at the same time. I was looking for a command or something, because I have no idea how to make a list of gestures.

    Thanks

    • Hello. You can keep a list of the detected gestures and raise your events only when the complete sequence has been detected. Kinect only gives joint positions and orientations; the way we use this information is up to us. So, simply keep track of what has been detected and raise your event when the complete sequence is done.
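
      As an illustration, here is a minimal sketch of that idea; GestureSequence and the string identifiers are hypothetical names, not from the article:

      public class GestureSequence
      {
          readonly string[] _expected;   // e.g. five "Clap" entries in a row
          int _index = 0;

          public event EventHandler SequenceCompleted;

          public GestureSequence(string[] expected)
          {
              _expected = expected;
          }

          // Call this whenever any single gesture is detected.
          public void OnGestureDetected(string gesture)
          {
              if (gesture != _expected[_index])
              {
                  _index = 0;   // wrong gesture: start the sequence over
                  return;
              }

              if (++_index == _expected.Length)
              {
                  if (SequenceCompleted != null)
                  {
                      SequenceCompleted(this, EventArgs.Empty);
                  }
                  _index = 0;
              }
          }
      }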

  • Chuks says:

    Hello, I am working on my final year project, which tracks a person and at the same time sends info to a Dynamixel to move a light to follow the person. Have you worked on something similar? I am stuck at sending info from the kinect to the Dynamixel to turn the light to follow the person.

  • Tega Chuks says:

    Hello vangos, thanks for the explanation, but it did not work for me. I am using Kinect SDK v1.8.

  • JacobJones says:

    Glad to read the blog! It was a great tutorial for beginners learning gesture recognition. I really appreciate it. Keep sharing; I look forward to reading more.

  • andrew michael says:

    Thank you for the nice tutorial. MS has a sample data set for kinect here: https://www.microsoft.com/en-us/download/details.aspx?id=52283. I am looking for steps to import it into my new kinect project. Should be simple steps.. I am very new to the kinect SDK.

  • sm says:

    Hello,
    I am really grateful for your Unity Kinect library.
    I used it in my senior project, and I graduated in 2017.
    Now my research is published in a journal,
    and I used this website as a reference.

    my research
    https://www.researchgate.net/publication/332916813_JCAVE_A_3D_Interactive_Game_to_Assist_Home_Physiotherapy_Rehabilitation

    and your website is on the reference page (second to last, [29])

    • First of all, congratulations on your publication! I am really excited to know that my work has helped you with your research and projects. Thank you very much for letting me know!

  • Batool says:

    Hello Vangos,

    Thank you so much for putting the effort into this code!
    I am working on a project where I am using the XBOX 360 Kinect (v1). I ran the code with no errors or issues. My Kinect Studio identified the code, its files, and its location, and it’s connected. However, I am not seeing any data coming in, nor any screen showing a gesture. Is it supposed to show a screen that has the gesture information, similar to a depth view or a stream of data?

    • Hello. The application is supposed to display a console message whenever a gesture is detected. You can create the user interface as you wish.

      • Batool says:

        I ran the application two separate times:
        1- The console was displayed, but there were no messages in it, even after I did several waves.
        2- The second time, it gave me the following error: “Visual Studio cannot start debugging because the debug target ‘KinectSimpleGesture/bin/Debug\KinectSimpleGesture.exe’ is missing. Please build the project and retry, or set the OutputPath and AssemblyName properties appropriately to point at the correct location for the target assembly”.
        What are the reasons for both cases? By the way, the error I had the second time also occurred with another example. I reset my computer, but it is still showing the same error.

  • Xiao Yunn says:

    Hi, may I ask, does the Kinect for Xbox 360 allow the user to create finger gesture recognition?

  • Ian says:

    Hi,
    how can i use this example for Kinect V2?

  • Hi Vangos, I’m currently working on an open-source gesture-based API. If I buy Vitruvius for SDK 2.0, can I use your implementation of gesture detection in the open-source project, extend it, and more? As I said, it is not for commercial purposes.

  • Usmaneeyy says:

    Hi, i’m new to getting around with Kinect. I have the kinect from Xbox 360 with the 1.8 SDK. I try to run the demo code, but it’s not showing me any output. There are no build errors or any errors after running it, but I’m getting no output on my console. Please help me figure out what I am doing wrong, and could someone direct me to good links in order to get a clear understanding of kinect gestures? Thanks
    P.S: I’m doing a gesture project using the Xbox 360 kinect.
