
NOTE: This is preliminary software and/or hardware and APIs are preliminary and subject to change.

In my previous blog post, I showed you how to display the color, depth, and infrared streams of Kinect version 2 by transforming the raw binary data into Windows bitmaps.

This time, we’ll dive into the most essential part of Kinect: Body tracking.

The initial version of Kinect allowed us to track up to 20 body joints. The second version allows up to 25 joints. The new joints include the fists and thumbs! Moreover, due to the enhanced depth sensor, the tracking accuracy has been significantly improved. Experienced users will notice less jittering and much better stability. Once again, I would like to remind you of my video, which demonstrates the new body tracking capabilities:

[youtube code="GZ3eYyBPv44"]

Next, we are going to implement body tracking and display all of the new joints on-screen. We’ll extend the project we created previously. You can download the source code here.


Requirements

  • A Kinect for Windows v2 sensor
  • The Kinect for Windows SDK v2
  • Visual Studio

Extending the project

In the previous blog post, we created a project with an <Image> element for displaying the streams. We now need to add a <Canvas> control for drawing the body. Here is the updated XAML code:


<Grid>
    <Image Name="camera" />
    <Canvas Name="canvas" />
</Grid>

We also referenced the Microsoft.Kinect assembly, imported its namespace, and initialized the sensor:


// Kinect namespace
using Microsoft.Kinect;
// ...
// Kinect sensor and Kinect stream reader objects
KinectSensor _sensor;
MultiSourceFrameReader _reader;
IList<Body> _bodies;
// Kinect sensor initialization
_sensor = KinectSensor.GetDefault();
if (_sensor != null)
{
    _sensor.Open();
}

We also added a list of bodies, where all of the body/skeleton-related data will be saved. If you have developed for Kinect version 1, you'll notice that the Skeleton class has been replaced by the Body class.

Remember the MultiSourceFrameReader? This class gives us access to every stream, including the body stream! We simply need to let the sensor know that we need body tracking functionality by adding an additional parameter when initializing the reader:


_reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color |
                                             FrameSourceTypes.Depth |
                                             FrameSourceTypes.Infrared |
                                             FrameSourceTypes.Body);
_reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;

The Reader_MultiSourceFrameArrived method will be called whenever a new frame is available. Let’s specify what will happen in terms of the body data:

  1. Get a reference to the body frame
  2. Check whether the body frame is null – this is crucial
  3. Initialize the _bodies list
  4. Call the GetAndRefreshBodyData method, so as to copy the body data into the list
  5. Loop through the list of bodies and do awesome stuff!

Always remember to check for null values. Kinect provides you with approximately 30 frames per second – anything could be null or missing! Here is the code so far:


void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
    var reference = e.FrameReference.AcquireFrame();
    // Color
    // ...
    // Depth
    // ...
    // Infrared
    // ...
    // Body
    using (var frame = reference.BodyFrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            _bodies = new Body[frame.BodyFrameSource.BodyCount];
            frame.GetAndRefreshBodyData(_bodies);
            foreach (var body in _bodies)
            {
                if (body != null)
                {
                    // Do something with the body...
                }
            }
        }
    }
}

This is it! We now have access to the bodies Kinect identifies. The next step is to display the skeleton information on-screen. Each body consists of 25 joints. The sensor provides us with the position (X, Y, Z) and the rotation information for each one of them. Moreover, Kinect lets us know whether each joint is tracked, inferred, or not tracked. It's a good practice to check whether a body is tracked before performing any critical functions. The following code illustrates how we can access the different body joints:


if (body != null)
{
    if (body.IsTracked)
    {
        Joint head = body.Joints[JointType.Head];
        
        float x = head.Position.X;
        float y = head.Position.Y;
        float z = head.Position.Z;
        // Draw the joints...
    }
}
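
Each joint also carries a TrackingState value. Here is a quick sketch of how you could branch on the three states the SDK exposes (Tracked, Inferred, NotTracked):


Joint head = body.Joints[JointType.Head];

switch (head.TrackingState)
{
    case TrackingState.Tracked:
        // Fully tracked: safe to use for critical functions.
        break;
    case TrackingState.Inferred:
        // The position is estimated by the SDK: use with caution.
        break;
    case TrackingState.NotTracked:
        // No reliable data for this joint in the current frame.
        break;
}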

The joints supported by Kinect 2 are the following:

  • SpineBase
  • SpineMid
  • Neck
  • Head
  • ShoulderLeft
  • ElbowLeft
  • WristLeft
  • HandLeft
  • ShoulderRight
  • ElbowRight
  • WristRight
  • HandRight
  • HipLeft
  • KneeLeft
  • AnkleLeft
  • FootLeft
  • HipRight
  • KneeRight
  • AnkleRight
  • FootRight
  • SpineShoulder
  • HandTipLeft
  • ThumbLeft
  • HandTipRight
  • ThumbRight

Thumbs and hand tips are new joints added in the second version of Kinect. The following joints have been renamed:

  • ShoulderCenter → Neck
  • Spine → SpineMid
  • HipCenter → SpineBase

Knowing the coordinates of every joint, we can now draw some objects using XAML and C#. However, Kinect provides these coordinates in metres of camera space, so we need to map them to screen pixels. In the attached project, I have done this mapping for you, so the only methods you need to call are DrawPoint and DrawLine. Here is DrawPoint:


public static void DrawPoint(this Canvas canvas, Joint joint)
{
    // 1) Check whether the joint is tracked.
    if (joint.TrackingState == TrackingState.NotTracked) return;
    // 2) Map the real-world coordinates to screen pixels.
    joint = joint.ScaleTo(canvas.ActualWidth, canvas.ActualHeight);
    // 3) Create a WPF ellipse.
    Ellipse ellipse = new Ellipse
    {
        Width = 20,
        Height = 20,
        Fill = new SolidColorBrush(Colors.LightBlue)
    };
    // 4) Position the ellipse according to the joint's coordinates.
    Canvas.SetLeft(ellipse, joint.Position.X - ellipse.Width / 2);
    Canvas.SetTop(ellipse, joint.Position.Y - ellipse.Height / 2);
    // 5) Add the ellipse to the canvas.
    canvas.Children.Add(ellipse);
}

Similarly, you can draw lines using the Line object. Download the sample project and see for yourself.
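
If you'd rather write the line drawing yourself, here is a minimal DrawLine sketch that mirrors the DrawPoint extension above (it assumes the same ScaleTo mapping extension):


public static void DrawLine(this Canvas canvas, Joint first, Joint second)
{
    // 1) Skip the line if either joint is not tracked.
    if (first.TrackingState == TrackingState.NotTracked ||
        second.TrackingState == TrackingState.NotTracked) return;

    // 2) Map the real-world coordinates to screen pixels.
    first = first.ScaleTo(canvas.ActualWidth, canvas.ActualHeight);
    second = second.ScaleTo(canvas.ActualWidth, canvas.ActualHeight);

    // 3) Create a WPF line connecting the two joints.
    Line line = new Line
    {
        X1 = first.Position.X,
        Y1 = first.Position.Y,
        X2 = second.Position.X,
        Y2 = second.Position.Y,
        StrokeThickness = 5,
        Stroke = new SolidColorBrush(Colors.LightBlue)
    };

    // 4) Add the line to the canvas.
    canvas.Children.Add(line);
}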

Here is the end result you saw in the video:

Kinect 2 body stream

Notice that the body joints are not perfectly aligned to the background image. Why? Because the color, infrared, and depth sensors are not located at exactly the same spot, so each has a slightly different point of view. You can use the SDK's coordinate mapper to align them if necessary.
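
For reference, mapping a joint onto the color frame with CoordinateMapper looks roughly like this (a sketch; the variable names follow the code above):


// Map the 3D camera-space position of a joint to 2D color-space coordinates (1920x1080).
CameraSpacePoint position = body.Joints[JointType.Head].Position;
ColorSpacePoint point = _sensor.CoordinateMapper.MapCameraPointToColorSpace(position);

// Use point.X and point.Y to place visual elements over the color image.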

Download the source code

Body tracking works much like it did with the previous sensor. In the next blog post, we are going to see something totally new: facial expressions and hand states.

PS: Vitruvius

If you enjoyed this article, then you’ll love Vitruvius. Vitruvius is a set of powerful Kinect extensions that will help you build stunning Kinect apps in minutes. Vitruvius includes avateering, HD Face, background removal, angle calculations, and more. Check it now.

Want to hire my team for your next cutting-edge app? Drop me an email.

Vangos Pterneas

Vangos Pterneas is a software engineer, book author, and award-winning Microsoft Most Valuable Professional (2014-2019). Since 2012, Vangos has been helping Fortune-500 companies and ambitious startups create demanding motion-tracking applications. He's obsessed with analyzing and modeling every aspect of human motion using AI and Maths. Vangos shares his passion by regularly publishing articles and open-source projects to help and inspire fellow developers.

164 Comments

  • Julian says:

    Thank you for writing these easy to follow and informative posts.
    I am much looking forward to reading your book.
    Do you have any timeframe on when it will be published?

    • Hi Julian. Thank you very much for your comment. The book will be published during spring. This book is not just a list of my thoughts and tutorials: I ask my subscribers for feedback before I complete the whole book. As a result, between January and March, I have been given great ideas and added two additional chapters that most of my readers wanted! I'll keep you informed about it!

    • Ozzy says:

      Hi Vangos.

      First of all, thanks for your helpful tutorial 🙂

      For my project, I have to capture the legs of the tracked body while a person is sitting – especially the knees and feet. I've made it so far as to draw them.
      BUT if I put the Kinect under a table and the person is sitting in front of the table, the chosen joints start flickering. I found out that the Kinect always tries to capture the whole body of the person, but the table prevents the Kinect from capturing the torso and the rest of the body, so my chosen joints for the legs cannot be tracked correctly.

      My question is: how can I modify the BodyFrameReader to read or capture only the legs, instead of the whole body? I am not talking about drawing my chosen joints; the BodyTracker/BodyFrameReader shall only capture my joints.

      Can you help and give me a code example? Thanks a lot

      • Hi Ozzy. You cannot modify such settings. Kinect is configured to measure the joint coordinates based on the SpineBase joint. As a result, you always need to track that joint.

  • Armando says:

    Hi, I would like to see comparisons between the new sensor and the old one. Could you record joint positions for hands and legs at the same time with both the old and the new sensor, with no smoothing applied, and post that data to see if there are noticeable differences?

  • Inthiyaz says:

    Hi,

    Your KinectStream is excellent, but I am unable to get a reference to MultiSourceFrameReader; it is not recognized with a reference to Microsoft.Kinect.dll v1.8.
    My environment:
    Windows 7
    Kinect SDK 1.8
    VS2012

    Please help me understand why the MultiSourceFrameReader class is not recognized in the source code.

    • Hi Inthiyaz. This code is for the upcoming Kinect version 2. Version 1.8 does not use MultiSourceFrameReader.

      You can check my project Vitruvius for a similar example for version 1.8: https://github.com/LightBuzz/Vitruvius/blob/master/Kinect%20v1/VitruviusTest/MainWindow.xaml.cs

      • Inthiyaz says:

        Thanks for the reply, but is there any beta release from Microsoft for Kinect 2? I cannot get Kinect 2; please let me know.

      • Jony Yang says:

        Found a bug in your Vitruvius GitHub code – in your MainWindow you render RenderTargetBitmap objects to a Canvas object without releasing the taken memory, so within seconds the application consumes all the physical memory. The detailed location of this bug is in MainWindow.xaml.cs, line 127.

        I use the following quick fix to get rid of the issue.

        if (body.IsTracked)
        {
            // Update body gestures.
            _gestureController.Update(body);

            // Draw body.
            // JONY – body.ToBitmap has serious memory leak issue and will cause the demo collapsed in seconds
            // when test stand in front of Kinect sensor.
            //canvas.Source = body.ToBitmap(VisualizationMode.Color);

            RenderTargetBitmap source = (RenderTargetBitmap)body.ToBitmap(VisualizationMode.Color);
            canvas.Source = source;
            source.Dispose();
        }

        • Jony Yang says:

          Realized I had pasted the wrong version :) The right one is below:

          RenderTargetBitmap source = (RenderTargetBitmap)body.ToBitmap(VisualizationMode.Color);
          canvas.Source = source;
          if (source != null) source.Clear();
          source = null;
          GC.Collect();
          GC.WaitForPendingFinalizers();

          • Hi Jony. Thanks for noting this – we’ll fix it as soon as possible. Vitruvius is still under development and will be finalized by the official release of Kinect SDK v2. We are planning to add much more features, such as 3D avatar animation, face tracking, etc. Your feedback is really valuable.

            Thank you!

  • keeth says:

    I wonder if any filtering is possible prior to skeleton inference. The previous Kinect skeletonizer did not work properly when the person was positioned against a background (as in "making contact with the background"). I suppose the skeletonization is done directly on the internal sensor processing board; I am afraid this precludes filtering out of the point cloud the stuff that messes up skeleton tracking. Do you have any experience with/ideas on this?

    • Hi Keeth. It is hard to recognize a body if it is positioned right against a surface or above a floor, for example. With the v1 sensor, the results were poor. Trying the same thing with a v2 sensor, I only noticed some increased jittering; body tracking performed much more accurately than with v1.

  • michele says:

    Hi.
    I'm trying to project the skeleton tracked by a Kinect v2 onto my body. I made a simple "settings" procedure that lets me add an offsetX, offsetY, and offsetZ to the tracked values, so that I can map the points onto the exact parts of my body. This works; but when I move toward the sensor, or move away from it (thus affecting the Z value), the mapping doesn't work anymore.
    Is the SDK calibration procedure a simple "translation" function, similar to the one I visually created to calculate the offset, or is there a ratio each value should be multiplied by to transform values from one system to the other?
    Do you think this is the problem, or is there something else I'm not considering (projector-Kinect calibration)?
    Is there a way to calibrate a projector with a Kinect?
    Regards,
    Michele

  • Inass says:

    Hello, I am new to Kinect. I am trying to develop an app where I can track body motion and recognize gestures, and, for each gesture, retrieve a value from a database. I don't know where to start.
    With Kinect SDK 2 there's something called "Visual Gesture Builder"; how can I use it? Thank you

  • AD says:

    First of all, thanks for your tutorials and code sharing.
    I am new(-ish) to C++ and have no experience with Kinect (I normally code in Matlab – but there is no viable link with Kinect at the moment). I am trying to use snippets of your code adapted to C++. So far, I managed to get it working up to:
    if (body.IsTracked)
    {

    Then I simply output the TrackingID to the console for testing. This works fine.
    UINT64 trackingID;
    hr = m_bodies[bodyId]->get_TrackingId(&trackingID);
    std::cout << "BodyId: " << bodyId << std::endl;
    I haven't been able to find the equivalent (data type Joint) to your next bit:
    Joint head = body.Joints[JointType.Head];

    float x = head.Position.X;
    float y = head.Position.Y;
    float z = head.Position.Z;

    I also don't know how to implement the drawPoint in C++.
    Any help or suggestions would be greatly appreciated.
    Cheers,

  • mohamad reza kharazi says:

    Error 1 ‘Microsoft.Kinect.KinectSensor’ does not contain a definition for ‘Default’ C:\Users\mikey\Desktop\KinectStreams\KinectStreams\MainWindow.xaml.cs 51 36 KinectStreams
    Error 2 ‘Microsoft.Kinect.Body’ does not contain a definition for ‘Dispose’ and no extension method ‘Dispose’ accepting a first argument of type ‘Microsoft.Kinect.Body’ could be found (are you missing a using directive or an assembly reference?) C:\Users\mikey\Desktop\KinectStreams\KinectStreams\MainWindow.xaml.cs 75 30 KinectStreams

    Hi, what should I do with these two errors? I ran your source code.

    • Hi Mohamad. This blog post was written for a previous version of the SDK. Simply replace KinectSensor.Default with KinectSensor.GetDefault() and you’ll be fine. Also, remove body.Dispose(). Thanks for noting this.

  • mohamad reza kharazi says:

    Thank you. Now it gives me this error; how do I fix this one?

    an unhandled exception of type ‘system.componentmodel.win32exception’ occurred in system.dll

  • mohamad reza kharazi says:

    It's a general error; it doesn't even appear in the error list. No specific line is shown.

  • mohamad reza kharazi says:

    Sorry, I found it – silly mistake.

    Process.Start(@"C:\Windows\System32\KinectService.exe");
    There is no such file in this directory.

  • mohamad reza kharazi says:

    It runs, but how can I export all the joint coordinates (X, Y, Z) into an Excel file or Notepad?

  • Shawn says:

    I have a problem with this line of code: frame.GetAndRefreshBodyData(_bodies);

    The error states:
    Error 1 The best overloaded method match for ‘Microsoft.Kinect.BodyFrame.GetAndRefreshBodyData(System.Collections.Generic.IList)’ has some invalid arguments C:\Users\rainshawn\Desktop\Kinect program\Prototype_A\Prototype_A\Form1.cs 62 21 Prototype_A
    Error 2 Argument 1: cannot convert from ‘System.Collections.IList’ to ‘System.Collections.Generic.IList’ C:\Users\rainshawn\Desktop\Kinect program\Prototype_A\Prototype_A\Form1.cs 62 49 Prototype_A

    • Hey Shawn. This code was written using the first beta version of Kinect v2. I am currently updating all of the repositories, so you'll have the new code soon. It will be compatible with the existing public SDK 🙂

      Regarding the error you encounter, simply type “IList _bodies” or “Body[] _bodies”.

  • Athiban says:

    Hi Vangos Pterneas, I get a blank output window with four buttons. There is no change when a button is clicked.

  • Dear Vangos,
    I have a question regarding the limits and methods of building baby skeleton for Kinect v2.
    I would be interested in using Kinect v2 for analysing motions of young infants (newborn to 3 months). This is part of our attempts to improve neurological assessment. In this setting, the baby is lying supine (back down) on a mattress and the camera can be positioned on top of the baby so that it sees all limbs and fingers at all times.
    In this setting, it would be desirable that Kinect is able to track limbs and fingers (ie grasping movements).

    I have understood that the current skeleton for adults is not able to do that, but I should plan to build a modified skeleton with the suitable dimensions and ratios.

    QUESTION:
    Do you know if this is doable? If yes, could you possibly give some pointers on where to find ways to modify the skeleton in v2?

    Best regards,
    Sampsa (Helsinki, Finland)

    • Hi Sampsa. Here are my thoughts regarding your concept:

      • Using Kinect, you can track the 25 joints described in the article.
      • You can track infants.
      • If the infant is lying on a bed, the accuracy may not be as high as in ordinary standing body tracking.
      • You can detect the thumbs and the hand tips, but you won't be able to accurately track all of the fingers.
      • You could create your own skeleton wrapper and process the depth stream directly. Check Candescent NUI, which does something similar.

      Cheers,
      Vangos

      • Hi Vangos,
        thanks!
        The link looks very interesting.
        Just double-checking (re: what I read in the link):
        Is it possible to have Kinect v2 and use OpenNI?
        Our engineers in the research network were informed that this would not be possible, so they have only been using Kinect v1 in their OpenNI-based work.
        Best,
        Sampsa

        • Hi Sampsa. Candescent NUI targets Kinect version 1. It supports both OpenNI and Microsoft SDK 1.8. You'll need to port it to SDK 2 for it to work appropriately. It shouldn't be a tough task. OpenNI cannot be used with Kinect v2 (yet).

          If you are a fan of OpenNI, keep an eye on this page.

          Cheers,
          Vangos

  • Achal Prajapati says:

    Hi..Vangos Pterneas

    How can I create and save a video to a particular path with Kinect v2 in a WPF application?
    Thanks,
    Achal

  • Hung says:

    Dear Vangos,

    I have a question about SDK code: my first test uses SDK 2.0 for Kinect v2, but my example code (downloaded from the internet) targets SDK 1.8, and the old version causes Visual Studio to report errors.

    May I know how to change this code for SDK 2.0 (Skeleton, SkeletonFrame, SkeletonStream, JointTrackingState, DepthStream, DepthImageFormat)?

    Best regards,
    Hung

    • Hello Hung. Kinect SDK v1 and v2 are incompatible. There are differences regarding the joint names, frame names, constants and enumerations. As a result, code written in v1 is not directly compatible with v2. You can check my project Vitruvius to see examples written in both v1 and v2.

      Hope this will help you.

      Regards,
      Vangos

  • Mazen says:

    Dear Pterneas

    Thank you very much for your efforts with Kinect….
    I successfully built a classifier for gestures using machine intelligence. In the next step, I need my software to select one player among others, take orders from him as soon as he raises his hand, and ignore the other players. I tried to use the body ID, but I have still failed at this. Any idea how to solve this problem?

    Thank you very much

  • Zeinab says:

    hi
    thanks for the great post.
    Is there any way I can make this code get its data from a recorded file (.xef) instead of the Kinect camera?

  • Alok Agnihotri says:

    pls provide me power point using with kinect sdk v2 version
    thanks

  • Sahba says:

    Hi Vangos,

    I am using Kinect for Windows v2. For my project, I need to get the joint angles (absolute or relative) of different joints and make an avatar in VR mimic my motion. I have been searching the net for a while but could not find any good resources. Could you please help me with this? Thanks, man!

  • April Rose says:

    Hi Vangos! Thanks for your great posts and responses; you have really helped many people :))
    I am currently working on my project, which is finger tracking with Kinect SDK 1.7, and I would like to ask if it is possible to make a virtual keyboard… We would like to use Kinect together with a projector. If the projector projects the keyboard on a flat surface, is it possible to click or press the keys using a finger? If it is possible, let me know, and kindly give me tips on how to accomplish it. THANK YOU!!!

    -April Rose

  • Wassim says:

    Hi Vangos,

    Do you know which algo is used in kinect v2 for joint localization (pose estimation)? Is it that of Shotton et al. :

    Shotton, J., Sharp, T., Kipman, A., Fitzgibbon, A., Finocchio, M., Blake, A., … & Moore, R. (2013). Real-time human pose recognition in parts from single depth images. Communications of the ACM, 56(1), 116-124.

    • Hi Wassim. Not everything is public, so I am not aware of the SDK internals. Image segmentation and depth processing have been covered deeply in multiple academic papers over the past 10 years (maybe more). Kinect algorithms were originally developed by Microsoft Research. The sensor hardware was built by PrimeSense. For version 2.0, everything was done by Microsoft. That's all I know.

  • Andrew says:

    Hey,

    Is there a built-in method to extract data on joint angles and positions from the Kinect v2? I'm not adept at coding, and I'm having trouble following along to set up that type of application. Do you have any advice? My project is mainly concerned with analyzing the data.

    Thanks!

  • Diana says:

    Hi Vangos.

    I am working on a project in which I have to detect and track an entire body in a rectangular frame (just like mobile cameras that track a face in a rectangular frame moving with the person's motion). How can I create that rectangular frame and track the full body? Please help.

    • Hi Diana. Using XAML, you can create a Rectangle object and place it within a Canvas. The rectangle’s height will be the distance between the feet and the head. The rectangle’s width will be the distance between the shoulders or hands.

      Remember to use CoordinateMapper to map between world and screen coordinates.
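
      For illustration, a rough sketch (it assumes a Rectangle named "rectangle" declared inside the Canvas in XAML):

      // Map the relevant joints to color-space coordinates.
      CoordinateMapper mapper = _sensor.CoordinateMapper;

      ColorSpacePoint head = mapper.MapCameraPointToColorSpace(body.Joints[JointType.Head].Position);
      ColorSpacePoint foot = mapper.MapCameraPointToColorSpace(body.Joints[JointType.FootLeft].Position);
      ColorSpacePoint left = mapper.MapCameraPointToColorSpace(body.Joints[JointType.ShoulderLeft].Position);
      ColorSpacePoint right = mapper.MapCameraPointToColorSpace(body.Joints[JointType.ShoulderRight].Position);

      // Height: head to foot. Width: shoulder to shoulder.
      rectangle.Height = Math.Abs(foot.Y - head.Y);
      rectangle.Width = Math.Abs(right.X - left.X);

      // Move the rectangle so it encloses the body.
      Canvas.SetLeft(rectangle, left.X);
      Canvas.SetTop(rectangle, head.Y);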

      Let me know if this information helped you.

  • Himaanshu says:

    Hi

    Is it possible to track 3D finger coordinates with Kinect v2? Are there SDKs available for this operation?
    Please post the links in a reply.

    Thanks,
    Best

  • ten_six says:

    Hi,
    I was wondering if you knew of a way to find the 'true' distance between the left and right eye? I'm assuming there has to be some sort of head rotation included in the calculation. Basically, I want to get as detailed as possible with the distances between facial points in 3D space, and not just their XY positions on the screen. Thanks in advance for helping!

  • sush says:

    Hey,
    Could you tell me how to rotate an image in WPF with respect to a point? I don't know exactly how to use the RotateTransform function.

    Thanks

    • Hi Sush. You can use a RotateTransform.

      In XAML:


      <Image Name="image" RenderTransformOrigin="0.5,0.5">
      <Image.RenderTransform>
      <RotateTransform Angle="25.0" />
      </Image.RenderTransform>
      </Image>

      In C#:

      (image.RenderTransform as RotateTransform).Angle = 25.0;

  • Sush says:

    Thank you so much Vangos.

  • Ashar Ali says:

    Hey Vangos,
    You have been publishing nice blog posts. Do you have an idea of the frame rate at which the Kinect sensor acquires body frames?
    I am not able to find it in the Microsoft documentation for BodyFrameReader.
    Thanks.

  • Fan Chenxiao says:

    Hi Mr. Vangos,
    I'm new to Kinect. How can I export the body coordinate data (or a particular joint) at regular intervals to Excel or txt files? Do you have code that includes this function? I would appreciate it very much if you could give me an answer, thanks.

  • hanan says:

    Hello,

    I have a question about object tracking: is there a function in SDK 2 that I can use for object detection and tracking?

  • hanan says:

    What is the difference between

    Body.Joints[JointType.HandRight]

    and

    Body.JointOrientations[JointType.HandRight]?

  • Hanan says:

    Mr.Pterneas

    I am so thankful for your cooperation; I appreciate it.

    I have a question about joint orientation

    How can I detect or track, for example, the orientation and rotation of the wrist (hand) or the arm (elbow)?

    • Hi Hanan. You need to convert the orientation (Vector4) to the corresponding rotation in the 3 axis (Vector3).

      You can use this Euler transform. It returns a Vector4 with W set to 0. So, X represents the rotation in the X-axis, Y represents the rotation in the Y-axis, and Z represents the rotation in the Z-axis.
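
      For reference, a rough sketch of one common quaternion-to-Euler conversion (the axis conventions may differ from the linked code; angles are in degrees):

      public static Vector4 ToEulerAngles(Vector4 q)
      {
          return new Vector4
          {
              // Rotation around the X axis.
              X = (float)(Math.Atan2(2.0 * (q.W * q.X + q.Y * q.Z), 1.0 - 2.0 * (q.X * q.X + q.Y * q.Y)) * 180.0 / Math.PI),
              // Rotation around the Y axis.
              Y = (float)(Math.Asin(2.0 * (q.W * q.Y - q.Z * q.X)) * 180.0 / Math.PI),
              // Rotation around the Z axis.
              Z = (float)(Math.Atan2(2.0 * (q.W * q.Z + q.X * q.Y), 1.0 - 2.0 * (q.Y * q.Y + q.Z * q.Z)) * 180.0 / Math.PI),
              // W is unused in the result, as mentioned above.
              W = 0f
          };
      }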

  • Hanan says:

    I am sorry for bothering you, but I have tried to understand the code of the Euler transform and I could not.
    Can you provide any tutorials or websites that can help me understand the general theory and idea?

    Thank you so much Mr.Pterneas

  • Hanan says:

    Mr.Pterneas,

    My project needs object segmentation and similar things, so can I use MATLAB with Kinect v2?

  • Hanan says:

    but this is not in C#?

  • RP says:

    Can we use the recorded files (.xef) to work with Vitruvius? Do we need the sensor to be connected to the PC for development? Are any emulators available for development?

    Thanks in advance

    • Hello. .xef is a Microsoft proprietary file format, so we cannot use those files outside Kinect Studio.

      However, Vitruvius has its own recorder. You can record a session using your Kinect sensor and play that session without the sensor! It is more flexible than Kinect Studio and Vitruvius playback works without a sensor.

  • Carlos says:

    Hi Vangos, thanks for Vitruvius and for this blog.

    I'm trying to draw only a limited number of limbs. So far, I have managed this by commenting out the tuples of some of the joints in the member:
    "public readonly List<Tuple> CONNECTIONS = new List<Tuple>"
    in the KinectViewer.xaml.cs file from Vitruvius.

    That way, the bones that I wish to ignore are not drawn. However, the points (dots) of the joints still remain.
    How could I achieve this? For example, to draw only the right-side bones and points.

    • Hi Carlos. You can remove the joint dots by modifying the Create method. Change the following code:

      foreach (var joint in joints)
      {
          bodyVisual.AddJoint(joint, jointRadius, jointBrush);
      }

      … and replace it with this code:

      foreach (var joint in joints)
      {
          if (joint == JointType.Head || joint == JointType.Neck || joint == JointType.SpineBase)
          {
              bodyVisual.AddJoint(joint, jointRadius, jointBrush);
          }
      }

  • Hello Vaggos!

    I would like to ask: what is the maximum number of people that Kinect SDK v2 can track?

    Thank you

  • Eleftheria says:

    Hi Vangos,

    Thank you very much for your blog. I was wondering about the accuracy of the skeletons in Kinect (v2) when the person is lying down. It certainly is not as accurate as the standing position, but would you say that acceptable accuracy can be achieved – especially for the limbs? Do you think there is a way to help with that, like placing the Kinect at a certain position with respect to the body? Do you know any other ways this can be improved?

    Thanks ๐Ÿ™‚

    • Hi Eleftheria. Kinect is, indeed, inaccurate when tracking people lying down. There is not much you can do to prevent this. You could place the sensor on the ceiling; however, the problem is that the body parts are "merged" with the background (floor).

  • Bahadir says:

    Hi Vangos,
    What do skeletonMaxX and skeletonMaxY represent in the ScaleTo method? Before this method, these numbers were used as constants (1.0f) in the other ScaleTo overload. Aren't these numbers supposed to be variable? There are some problems when trying to fix the aspect ratio. Can you explain what skeletonMaxX and skeletonMaxY symbolise?
    Thanks 🙂

  • Ibrahim Seleem says:

    Hello, Vangos Pterneas. I used Kinect v2 to measure the ankle and hip positions of a human, then applied trigonometric laws in order to get the joint angles, but all the angles are wrong. The Kinect is 1.80 m from the human in order to capture the ankle trajectory. Is this distance suitable? If an error appears due to this distance, how can I compensate for it?

    Thanks in advance

  • Roq says:

    Hi Vangos!
    How can I store specific joint data in a database? And which database should I use? Can I use SQL?

    • Hi Roq. This is totally up to you. You can use SQL, .csv, or even plain text. The joint data are just numeric values. So, you can create tables with the corresponding entities. E.g. a table “Body” and a table “Joint” would store almost all of the required information.
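
      For example, a minimal sketch that appends one CSV row per joint (the file path and format are just illustrative):

      // Requires System.IO. Appends one row per joint for the current body.
      using (var writer = new StreamWriter("joints.csv", true))
      {
          foreach (Joint joint in body.Joints.Values)
          {
              writer.WriteLine(string.Format("{0},{1},{2},{3},{4}",
                  body.TrackingId, joint.JointType,
                  joint.Position.X, joint.Position.Y, joint.Position.Z));
          }
      }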

  • Golden Lady says:

    Cannot wait to read your book :))

    but I have a question:
    how can I show the time of the captured body frame in a readable form (using RelativeTime)?

    Also, when recognizing a gesture using the skeleton frame, can scaling the skeleton solve the problem of the person sometimes standing near the Kinect and other times far from it?

  • Meghal says:

    Hello,
    I am currently tracking the head joint and drawing an ellipse when it is tracked.

    if (body.IsTracked)
    {
        Debug.WriteLine("Body is tracked");

        Joint headJoint = body.Joints[JointType.Head];

        if (headJoint.TrackingState == TrackingState.Tracked)
        {
            Debug.WriteLine("Head is tracked");

            DepthSpacePoint dsp = sensor.CoordinateMapper.MapCameraPointToDepthSpace(headJoint.Position);

            Ellipse headCircle = new Ellipse()
            {
                Width = 50,
                Height = 50,
                Fill = new SolidColorBrush(Color.FromArgb(255, 255, 255, 0))
            };

            //sensor.Close();
            bodyCanvas.Children.Add(headCircle);
            //countIN++;
            Debug.WriteLine(countIN + "");

            Canvas.SetLeft(headCircle, dsp.X - 25);
            Canvas.SetTop(headCircle, dsp.Y - 25);
        }
    }

    I want to count heads only when a new head is tracked, and not count the old ones. How do I do that? Kindly guide!

  • Mechatronika says:

    Dear Vangos,

    I am trying to run two Kinect v2 sensors on one PC. In the SDK, there seems to be no way to select a specific Kinect or choose which one to receive the stream from.

    Do you have any software/links that show how to run and manage two Kinect v2 sensors?

    • Hello. You need to check the property:

      Microsoft.Kinect.KinectSensor.Sensors;
      or
      WindowsPreview.Kinect.KinectSensor.Sensors;

      This list includes all of the connected sensors. You can then create separate frame readers for each sensor. The core process would be the same.

      • Mechatronika says:

        Thank you Vangos. I tried hard to find a solution, including yours, and I can conclude the following from different forums:
        - The problem of not linking two Kinect v2 sensors mainly results from limited hardware capabilities.
        - SDK v2 doesn't support linking and/or managing two Kinect v2 sensors running on the same PC, even if the PC has two independent USB 3 ports.
        - The WindowsPreview namespace can't be called in WindowsForms (I am using C#): when I tried to find it in the Reference Manager, I found that it only exists in Windows 8.1 ==> Extensions. A forum proposed using Microsoft.Kinect.Input as an alternative, but it still didn't help.
        - Microsoft.Kinect.KinectSensor.Sensors is compatible with older versions of Kinect (1.8), but not with v2.

        I expect that I would need to build software to activate/deactivate USB ports and call the connected sensor again. This requires too much programming effort to control the USB port. I am wondering whether you can provide me with solutions and your opinion regarding my recent work!

  • Jin says:

    Hi Vangos,
    I am currently using Kinect v2. My project is about gesture recognition, and my professor wants me to get the angles in each frame by computing the joint positions, in order to check whether the exercises performed are really correct from start to finish. Is there a way for me to collect the frames together with the skeleton data? And can I record while someone is doing the exercise, together with the skeleton data?

    Thanks!

  • ALEJANDRO says:

    EXCUSE ME, DO YOU KNOW HOW TO TRACK ONE PEOPLE WITH KINECT 2.0???

    ONLY I KNOW THAT IS IN “_body = new Body[frame.BodyFrameSource.BodyCount];”

    WHERE

    BodyCount

    IS THE NUMBER OF PEOPLE FROM TRACK

    • The “_body” variable in my code is not one body; it’s an array of bodies.

      _bodies = new Body[frame.BodyFrameSource.BodyCount];
      frame.GetAndRefreshBodyData(_bodies); // This method fills the array!

      var body = _bodies.Where(b => b != null && b.IsTracked).FirstOrDefault(); // This is how you can get one body of the array.

      • ALEJANDRO says:

        Thank you, but what is the difference between "var body = _bodies.Where(b => b != null && b.IsTracked).FirstOrDefault()" and

        "var body = _bodies.Where(b => b.IsTracked).FirstOrDefault()"? It happens that with the first instruction it continues to detect more than one body, and with the second, only the detection time is delayed!!

        I keep trying; thank you very much….!!!!

  • Argie says:

    Hello Vangos,

    first of all, great blog and great job!! I would like to ask you something. I understand that the spine region needs to be tracked every time in order to find the other body parts, but is it feasible to eliminate some body parts, such as the hands, and not project them in our stream? For example, not showing the wrist and below.

    Thanks a lot !

    • Argie says:

      Found it ! ๐Ÿ™‚

      • Awesome ๐Ÿ™‚ Actually, you can display only the joints you want to.

        • Argie says:

          May I ask you something else too? Because I'm not that familiar with C# and can't find a way to do it:
          how easy is it to store the angle to a text or XML file while I'm moving on the screen?

          For example, I have this angle1.Update(body.Joints[_start1], body.Joints[_center1], body.Joints[_end1], 50); that updates while the user is on the screen, and then I'm trying to save this tblAngle1.Text = ((int)angle1.Angle).ToString(); to a file.

          But I can't do it. Do you have any idea why?

  • Argie says:

    Worked like a charm !! Thanks !!

  • Azfar Khoja says:

    Hey, thanks Vangos for such a brilliant introduction to body tracking.
    I was using your tutorials and the Vitruvius libraries for measuring the height of a person using Kinect v2.
    The Kinect constantly calculates the height during a period of 5 seconds and keeps printing it to a text file.

    But later, when I open the text file, I see that I get only 70-75 readings each time,
    whereas, with the Kinect being rated at 30 fps, I was expecting around 150 readings for a period of 5 seconds.
    What's the reason for the drop in frames and how can I rectify it?

    PS:
    My laptop specs are i7 5th gen with 4 GB RAM.

    • Hello. I assume that you are storing the data in a single file that is left open during the recording process, thus causing lag.

      I suggest you store one text file per frame, instead of having one single file open all the time. You can merge the files into a single file after you stop recording.
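
      As a rough sketch (the folder and the _frameIndex/height variables are hypothetical):

      // Write one small file per frame; merge the files after recording stops.
      // Assumes the "capture" folder already exists.
      string path = System.IO.Path.Combine("capture", "frame_" + _frameIndex + ".txt");
      System.IO.File.WriteAllText(path, height.ToString());
      _frameIndex++;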

      Hope that helped you!

  • david aragorn telmo says:

    foreach (var skeleton in skeletons)
    {
        // skip the skeleton if it is not being tracked
        if (skeleton.IsTracked != SkeletonTrackingState.Tracked)
            continue;

        // update the gesture controller
        gestureController.UpdateAllGestures(skeleton);
    }

    For the Kinect v2 sensor, "skeleton" in this code is "body". Please help me; my problem is with:
    if (skeleton.IsTracked != SkeletonTrackingState.Tracked) continue;

  • Yahya says:

    Hi, I need your help. Suddenly my canvas can't be resized according to my rules.


    <Image Name="camera" HorizontalAlignment="Left" Height="700" Margin="10,10,0,0" VerticalAlignment="Top" Width="940"/>
    <Canvas Name="canvas" HorizontalAlignment="Left" Height="700" Margin="10,10,0,0" VerticalAlignment="Top" Width="940"/>

    I set the Image like that, but the Canvas still shows the skeleton output with Height 1080 and Width 1920.

    What should I do? Thank you

    • This is normal since you are specifying a different width and height. Place the Canvas and Image within a Viewbox:


      <Viewbox Width="940" Height="700" Margin="10,10,0,0" HorizontalAlignment="Left" VerticalAlignment="Top">
      <Grid>
      <Image Name="camera" Width="1920" Height="1080" />
      <Canvas Name="canvas" Width="1920" Height="1080" />
      </Grid>
      </Viewbox>

  • Ryan says:

    Hi,
    I am tracking and drawing joint points on the actual joint positions of the body, and I am facing a problem tracking accurate joint positions for the knee, ankle, and foot when the foot is on the ground. When I lift the foot off the ground, I can see that the joints are in the correct positions, but I face this issue while tracking at ground level.
    Please provide any suggestions for this problem.

    • Hi Ryan. You can try to tilt the sensor manually and place it between 1.0 and 1.5 meters. Knee, ankle, and foot tracking is not as accurate as shoulder and spine tracking.

  • Raaj says:

    Please tell me how to call the DrawPoint and DrawLine methods. I'm new to using Kinect.

  • Muhammad Hammad says:

    Hi,

    As the Kinect (Xbox One) can identify up to 25 joints, I want to figure out which joints can be measured quite accurately by the Kinect for Xbox One.
    Please let me know about it.

    Thanks

    • Hello Muhammad. The following joints are tracked more accurately:

      Head
      Neck
      SpineShoulder
      SpineMid
      SpineBase
      ShoulderLeft
      ShoulderRight
      ElbowLeft
      ElbowRight
      WristLeft
      WristRight
      HipLeft
      HipRight
      KneeLeft
      KneeRight
      AnkleLeft
      AnkleRight

      • Muhammad Hammad says:

        Thanks Pterneas for informing me about this important information.

        Yesterday, I have read your other blogs as well and I came to know different information in one of your blogs [https://pterneas.com/2017/05/28/kinect-joint-rotation/].

        It is mentioned that "Head" cannot be accurately measured. Similarly, you have included "ShoulderLeft/Right" in that list. "AnkleLeft/Right" and "SpineMid" are not in that list. Please confirm the following:

        1. Is the recent list that you provided in this blog accurate?
        2. Is that list based on some posture, like standing, sitting, or lying?

        Please let me know about it, I will be thankful to you.

        • Muhammad Hammad says:

          Sorry, please make a correction: "ShoulderLeft/Right" has no issue.

        • Hello Muhammad. In this blog post, I am talking about the tracking accuracy. The other blog post you are mentioning is talking about the rotation accuracy. Tracking refers to position.

          So, e.g. the Head joint has good tracking accuracy, but low rotation accuracy.

  • Rakshitha says:

    I have a question about Kinect sensor v2: how do I make the screen focus on the center and detect the body present in the center of the screen, using the coordinate mapping system, in Visual Studio?

    • Hello. This is quite simple. All you need to do is check the X value of the SpineBase position and see whether it is close to the origin (0, 0) of the Cartesian system.


      CameraSpacePoint spineBase = body.Joints[JointType.SpineBase].Position;

      float x = spineBase.X;

      if (x > -0.1f && x < 0.1f)
      {
          // The person is close to the center.
      }

  • Chien says:

    Hi Vangos,
    I would like to ask you a question. The multi-source frame reader will be called whenever a new frame arrives, which can be depth, color, body, or infrared.
    So, when a frame arrives, maybe I still cannot get the body values, right? The body values may only arrive every 3 or some other number of frames, is that right?
    Thank you,
    Chien

    • Hi Chien. A multi-frame may include all of the frame types, two of the frame types, one frame type, or no frame type. This is why I am always checking for null values.

  • Antonio says:

    Hi Vangos

    One quick question: why did you not use _displayBody as an enum?

  • nazsh says:

    How can I track the inner body of a person who is dressed up in front of the Kinect sensor?

  • Ambreen says:

    Dear Sir,

    Thanks for the support. How do I save color, depth, and body images and video using this example?

    • Hello Ambreen. To save the color, depth, and body streams, you need to encode them in the proper format. It is feasible, but not trivial. You need to save the Color stream as JPG images, the Depth stream as Binary files, and the Body stream as text/JSON files.

      You can do that in the foreground thread (which will cause lagging) or in a background thread (which will be smooth).
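
      For example, saving a color frame that you have already converted to a WPF BitmapSource could look like this sketch ("bitmap" is the bitmap you currently display):

      // Encode the current color bitmap as a JPEG file.
      var encoder = new JpegBitmapEncoder();
      encoder.Frames.Add(BitmapFrame.Create(bitmap));

      using (var stream = System.IO.File.Create("color-frame.jpg"))
      {
          encoder.Save(stream);
      }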

      In case you are interested, I have developed Vitruvius. Vitruvius Academic and Premium allow you to record all of the Kinect streams using just 2 lines of C# code. Also, Vitruvius includes a video player, so you can visualize the captured results.

  • Markus says:

    Hello,
    thanks a lot for this great program and explanation.
    I was wondering if it's possible to display only the two shoulder joints and draw a line between them (instead of having the whole skeleton).
    I managed to receive the values for the X, Y & Z coordinates of the joints for further calculations, but I would like to include a visual representation.

    Best regards,
    Markus

    • Hi Markus. Sure, you can draw only the joints you want. No need to loop through the dictionary. Simply acquire the 2D coords of the desired joints and plot them on a Canvas using Ellipse or other visual elements.
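
      For example, a minimal sketch using the DrawPoint extension from this post (and a DrawLine extension like the sketch shown earlier):

      Joint shoulderLeft = body.Joints[JointType.ShoulderLeft];
      Joint shoulderRight = body.Joints[JointType.ShoulderRight];

      // Clear the previous frame's drawings, then draw the two joints and the connecting line.
      canvas.Children.Clear();
      canvas.DrawPoint(shoulderLeft);
      canvas.DrawPoint(shoulderRight);
      canvas.DrawLine(shoulderLeft, shoulderRight);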

      • Shawn says:

        Thanks for your fast response, but I've two doubts.
        1. I have integrated the code to measure the distance, but the distance is not displayed.

        using (var frame = reference.BodyFrameReference.AcquireFrame())
        {
            if (frame != null)
            {
                var bodies = frame.Bodies();

                _playersController.Update(bodies);

                Body body = bodies.Closest();

                if (body != null)
                {
                    var joint1 = body.Joints[JointType.KneeLeft];
                    var joint2 = body.Joints[JointType.KneeRight];

                    var distance = MathExtensions.Length(joint1.Position, joint2.Position);

                    viewer.DrawBody(body);

                    angle1.Update(body.Joints[_start1], body.Joints[_center1], body.Joints[_end1], 50);
                    angle2.Update(body.Joints[_start2], body.Joints[_center2], body.Joints[_end2], 50);

                    tblAngle1.Text = ((int)angle1.Angle).ToString();
                    tblAngle2.Text = ((int)angle2.Angle).ToString();
                }
            }
        }

        void UserReporter_BodyEntered(object sender, PlayersControllerEventArgs e)
        {
        }

        void UserReporter_BodyLeft(object sender, PlayersControllerEventArgs e)
        {
            viewer.Clear();
            angle1.Clear();
            angle2.Clear();

            tblAngle1.Text = "-";
            tblAngle2.Text = "-";
        }

        2. In the XAML, I get a fixed line segment. It does not follow the two joints' movement.

        Please help me, thank you.

  • Seleem says:

    Hello, thanks for your great website.

    I need to save the body joint positions and the time at which each joint movement is executed.

    I tried to use “StopWatch” as follows:

    Stopwatch stopWatch = new Stopwatch();

    Joint myjoint1 = body.Joints[JointType.WristRight];

    stopWatch.Start();

    float x1 = myjoint1.Position.X;
    float y1 = myjoint1.Position.Y;
    float z1 = myjoint1.Position.Z;

    stopWatch.Stop();
    // Get the elapsed time as a TimeSpan value.
    TimeSpan ts = stopWatch.Elapsed;

    But the time that appears is not the real time. Please, I need your help getting the real time at each joint position.
