Body Tracking for desktop & mobile?

Kinect is an exceptional depth sensor to work with. However, Microsoft has repeatedly discontinued its devices. If you want a super-accurate Body Tracking SDK that works with all cameras, desktop computers, and mobile devices, check out LightBuzz AI. LightBuzz is the result of five years of R&D, and I’m sure you’ll enjoy it very much.

Today, I’m going to share a little trick that will save you a ton of time when working with Kinect: how to properly measure the distance between the sensor and a body or object. Surprisingly, I was asked the exact same question by three of my blog readers during the past 10 days!

First, some good-to-know background.

How Kinect measures distances

As you know, Kinect integrates an infrared sensor along with a depth processor. The visible area is called the “field of view”. The depth processor produces depth frames; each depth frame is a grid of 512×424 points. The Kinect SDK then feeds the depth frames to a powerful body-detection algorithm, which identifies 25 human body joints and calculates their coordinates in 3D space.
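
If you need the surrounding plumbing, here is a minimal sketch of how that body data typically reaches your code with the Kinect SDK v2 (the sensor, reader, and bodies names are just placeholders):

// At the head of your file: using Microsoft.Kinect;
KinectSensor sensor = KinectSensor.GetDefault();
sensor.Open();

Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];
BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();

reader.FrameArrived += (s, e) =>
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;

        // Refresh the body data for the current frame.
        frame.GetAndRefreshBodyData(bodies);

        foreach (Body body in bodies)
        {
            if (body == null || !body.IsTracked) continue;

            // Each tracked body exposes the 25 joints mentioned above.
            Joint spineBase = body.Joints[JointType.SpineBase];
        }
    }
};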

Every single joint has three values: X, Y, and Z, expressed in a Cartesian coordinate system. The (0, 0, 0) point is the position of the sensor, and every other point is measured relative to it! Check the overhead graphic below, which shows the sensor and the scene from above.

  • X is the position in the horizontal axis.
  • Y is the position in the vertical axis.
  • Z is the position in the depth axis.

Kinect Distance - Depth

Making sense of the Z value

X and Y are fairly easy to understand. That’s not true for Z. If you take a close look at the graphic above, you’ll notice that the Z value is not the straight-line distance between the point and the sensor. Instead, it’s the distance between the point and the plane of the sensor! It’s like having a virtual wall right in front of the Kinect: the Z value is the perpendicular distance from that wall (drawn in green).

To access the Z value, use the code below:

var distance = body.Joints[JointType.SpineBase].Position.Z;

Simple, huh? The “body” variable represents the tracked player. The “Position” property is a set of X, Y, and Z coordinates in the 3D space. The “Z” value is the distance between the player and the plane of the sensor. I’ve used the SpineBase joint as a reference, since it’s the most reliably tracked joint and the one closest to the body’s center of mass. You can read more about processing body data in my body-tracking tutorial.

Measuring the distance

How about the physical distance, though? The distance between the player and the device is represented by a mathematical vector (drawn in blue). Thankfully, we do not need to be gurus in Algebra to measure the length of a vector in the 3D space.

The length of a vector is given by the formula below:

length = √(X² + Y² + Z²)

The SDK provides us with all of the required values. All we have to do is convert the formula into working C# code:

public double Length(CameraSpacePoint point)
{
	// Straight-line distance between the sensor and the point, in meters.
	return Math.Sqrt(
		point.X * point.X +
		point.Y * point.Y +
		point.Z * point.Z
	);
}

Bringing it all together:

var point = body.Joints[JointType.SpineBase].Position;
var distance = Length(point);

This is it! You now know exactly how far or how close someone is to the Kinect sensor! Of course, if someone is standing right in front of the sensor, the Z value and the distance would converge.

Hint: in case you need to measure distances of points that do not belong to the human body, you can use Coordinate Mapping.
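
For example, here is a rough sketch of that idea. It assumes you already have an open KinectSensor named sensor and a DepthFrame named depthFrame (inside a DepthFrameReader handler), and the pixel coordinates are placeholders:

ushort[] depthData = new ushort[depthFrame.FrameDescription.LengthInPixels];
depthFrame.CopyFrameDataToArray(depthData);

// Placeholder pixel: the center of the 512x424 depth frame.
int pixelX = 256;
int pixelY = 212;
ushort depth = depthData[pixelY * depthFrame.FrameDescription.Width + pixelX];

// Map the depth pixel to a 3D point and measure its distance from the sensor.
DepthSpacePoint depthPoint = new DepthSpacePoint { X = pixelX, Y = pixelY };
CameraSpacePoint point = sensor.CoordinateMapper.MapDepthPointToCameraSpace(depthPoint, depth);

double distance = Length(point); // straight-line distance, in meters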

Summary

Kinect provides us with the X, Y, and Z coordinates of the human body joints.

  • To measure the distance between the player and the plane of the sensor, we just use the Z value as-is.
  • To measure the distance between the player and the sensor, we calculate the length of the corresponding vector.
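
Putting the two together, reusing the Length method from above (and assuming body is a tracked Body, as in the earlier snippets):

var position = body.Joints[JointType.SpineBase].Position;

var distanceToPlane = position.Z;          // distance from the plane of the sensor
var distanceToSensor = Length(position);   // straight-line distance from the sensor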

PS: Vitruvius

If you liked this post, then you’ll love Vitruvius. Vitruvius is the result of my Kinect research during the past 4 years. Vitruvius will help you minimize development time and create robust applications with just a few lines of code! It includes advanced Mathematics, Avateering, Video Recording, Face Tracking, and more.

Download Vitruvius

‘Til the next time, keep Kinecting!

Vangos Pterneas

Vangos Pterneas is a software engineer, book author, and award-winning Microsoft Most Valuable Professional (2014-2019). Since 2012, Vangos has been helping Fortune-500 companies and ambitious startups create demanding motion-tracking applications. He's obsessed with analyzing and modeling every aspect of human motion using AI and Maths. Vangos shares his passion by regularly publishing articles and open-source projects to help and inspire fellow developers.

60 Comments

  • Unless Math.Pow gets inlined by the compiler (guess it doesn’t), better use x*x + y*y + z*z

    Also, if one can work with distance² (distance squared) instead of the distance itself, say to just compare distances, they can avoid the cost of the square root calculation.
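
    For instance, a squared-length helper along these lines (a sketch, not code from the article) is enough when you only need to compare distances:

    public double LengthSquared(CameraSpacePoint point)
    {
        // No Math.Sqrt: fine for comparisons, not for the actual distance.
        return point.X * point.X + point.Y * point.Y + point.Z * point.Z;
    }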

  • Arunraj says:

    Dear Author, will you please provide me a tutorial for calculating the distance between two joints (out of 25) when a person is standing still?

    • Using Vitruvius, you can call the Length() extension method:

      At the head of your file:


      using LightBuzz.Vitruvius;

      Inside the FrameArrived event:


      var joint1 = body.Joints[JointType.KneeLeft];
      var joint2 = body.Joints[JointType.KneeRight];
      var distance = joint1.Length(joint2);

      • Just as an aside, it seems that for the Kinect V2 library, this would be:

        At the head of your file:


        using LightBuzz.Vitruvius;

        Inside the FrameArrived event:


        var joint1 = body.Joints[JointType.KneeLeft];
        var joint2 = body.Joints[JointType.KneeRight];
        var distance = MathExtensions.Length(joint1.Position,joint2.Position);

  • Eric says:

    Thank you so much for your tutorial.
    How about finding the distance between an object and the Kinect? Here you used the human body (player).

    • Hello Eric. If you’ve already detected an object in the 3D space using the depth frame array, you already know its distance from the sensor: it’s the ushort value of the depth array (measured in millimeters).
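
      Roughly something like this (a sketch, assuming a DepthFrame named depthFrame and the pixel coordinates x and y of the object you detected):

      ushort[] depthData = new ushort[depthFrame.FrameDescription.LengthInPixels];
      depthFrame.CopyFrameDataToArray(depthData);

      // Raw depth of the detected pixel, in millimeters.
      ushort distanceInMillimeters = depthData[y * depthFrame.FrameDescription.Width + x];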

  • manideep says:

    Hi sir, I am using a Kinect for Xbox 360 sensor with a projector side by side, and I want to calculate the projected screen dimensions. Can you please help me figure this out?

  • Ali says:

    Hi everybody,
    I am working with Kinect to track the face. I need to find the distance between the face and the Kinect sensor, but I don’t know how to do that.
    I have used lots of different commands, but I always get errors about names that do not exist in my code.
    I also tried the method mentioned above, but it doesn’t work either.
    Could you please help me?

    • Ali says:

      Also I am using C# and I am a beginner in C#.

    • Ali says:

      Honestly if I find the Depth, it will be enough for me.

    • This should work for you:


      var head = body.Joints[JointType.Head].Position;
      var distance = Length(head);

      • Ali says:

        Thank you for your answer,
        but I tried it before and it didn’t work,
        because it doesn’t recognize ‘body’; it is just the face tracker.
        I always get this error:
        Severity Code Description Project File Line Suppression State
        Error CS0103 The name ‘body’ does not exist in the current context FaceTrackingBasics-WPF

  • Ali says:

    Yes, I know.
    But the problem is, how can I get the Z value (with which command)?
    My code doesn’t recognize Z either.

    • Every face point is a set of X, Y, and Z coordinates. It should be yourPoint.Z or yourPoint.Position.Z.

      • Ali says:

        Unfortunately, my code doesn’t recognize Z as a coordinate.

        • Ali says:

          finally I found the answer:

          using (var skeletonFrame = e.OpenSkeletonFrame())
          using (var depthFrame = e.OpenDepthImageFrame())
          {
              var skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
              skeletonFrame.CopySkeletonDataTo(skeletons);

              foreach (var skeleton in skeletons)
              {
                  if (skeleton.TrackingState != SkeletonTrackingState.Tracked) continue;

                  var head = skeleton.Joints[JointType.Head];
                  if (head.TrackingState == JointTrackingState.NotTracked) continue;

                  var depthImagePoint = depthFrame.MapFromSkeletonPoint(head.Position);

                  int depthInMillimeters = depthImagePoint.Depth; // TADA!
              }
          }

  • kaushik h says:

    Hello Vangos,
    Great tutorial. Is the distance returned in meters? Is there a way I can also detect the height of my hand? Does this code also work for SDK 1.8 (Xbox 360)?
    Looking forward to your reply.
    Regards,
    Kh

  • Ali says:

    Hi,
    I’m using MATLAB for Kinect (and have some experience with C++).
    I wonder if I can connect two Kinects and use them at the same time (using different buses to avoid bandwidth problems).
    Can you help me with that?
    Thanks in advance!

  • SriCK says:

    Hello Vangos, Is there a way to measure the distance between the shoulder and arm of the human?

  • Hamza Abbasi says:

    My final-year project is 3D SLAM, and I am using RTAB MAP for generating a 3D map of an unknown environment. Can you please guide me on how to calculate the distance between objects on a 3D map generated through RTAB MAP?

  • dammy says:

    Hi, I am trying to calculate the velocity of a person in front of the Kinect using the right ankle. I’ve been doing this for a while, but it’s just showing a wrong velocity value. This is my code. Please help me check where I’m going wrong. Do I need to involve a frame number in my calculation, or is the distance not in meters? Please help me out.

    using Microsoft.Kinect;
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Navigation;
    using System.Windows.Shapes;
    namespace Kinect_app
    {
        /// <summary>
        /// Interaction logic for MainWindow.xaml
        /// </summary>
        public struct JointData
        {
            public double x;
            public double y;
            public double z;
            public double time;
        }

        public partial class MainWindow : Window
        {
            private KinectSensor sensor;
            MultiSourceFrameReader _reader;
            IList<Body> bodies;
            JointData jd;

            public MainWindow()
            {
                InitializeComponent();
            }

            private void Window_Loaded(object sender, RoutedEventArgs e)
            {
                sensor = KinectSensor.GetDefault();
                if (sensor != null)
                {
                    sensor.Open();
                    bodies = new Body[sensor.BodyFrameSource.BodyCount];
                    _reader = sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color |
                        FrameSourceTypes.Body);
                    _reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;
                }
            }

            private void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
            {
                var reference = e.FrameReference.AcquireFrame();

                using (var frame = reference.ColorFrameReference.AcquireFrame())
                {
                    if (frame != null)
                    {
                        camera.Source = frame.ToBitmap();
                    }
                }

                using (var frame = reference.BodyFrameReference.AcquireFrame())
                {
                    if (frame != null)
                    {
                        canvas.Children.Clear();
                        var time = frame.RelativeTime;
                        bodies = new Body[frame.BodyFrameSource.BodyCount];
                        frame.GetAndRefreshBodyData(bodies);

                        foreach (var body in bodies)
                        {
                            if (body != null)
                            {
                                canvas.DrawSkeleton(body);

                                if (body.IsTracked)
                                {
                                    Joint rAnkle = body.Joints[JointType.AnkleRight];
                                    double x = rAnkle.Position.X;
                                    double y = rAnkle.Position.Y; // Final position of x coordinate
                                    double z = rAnkle.Position.Z;
                                    double timeInSec = time.TotalSeconds; // This means the total time in sec

                                    if (jd.x == 0 && jd.y == 0 && jd.z == 0) // this means the time at initial position
                                    {
                                        jd.x = x; // this means initial x coordinate position
                                        jd.y = y;
                                        jd.z = z;
                                        jd.time = timeInSec; // assign new value to jd.time which equals the initial time
                                    }
                                    else
                                    {
                                        double elapsedTime = timeInSec - jd.time; // we calculate the elapsed time since the ankle already shifted
                                        double x_2 = Math.Pow((x - jd.x), 2);
                                        jd.x = x;
                                        double y_2 = Math.Pow((y - jd.y), 2); // this means final - initial
                                        jd.y = y;
                                        double z_2 = Math.Pow((z - jd.z), 2);
                                        jd.z = z;
                                        double distance = Math.Sqrt(x_2 + y_2 + z_2);
                                        double velocity = distance * 30 / elapsedTime;
                                        Velocity.Content = velocity.ToString("#.###");
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }

    • Hello. You would be better off using the frame count to determine the velocity, instead of the actual time in seconds. The frame rate is not fixed; it varies between 15 and 30 frames per second.

      • dammy says:

        okay thanks. I used the frame rate but I’m still not getting it right.
        Thanks for your reply

        • Could you please provide an example of the output values compared to the expected values?

          • dammy says:

            I’m getting 0.007 as the velocity value, which is not logically correct.
            The output value of the velocity keeps increasing even when the detected person is not moving.
            Could you please help me with sample code on how to use the frame count to determine the joint velocity?

          • You’ll need to reset the timer and measurements. Velocity is increasing because the Dx value is increasing. You need to determine when to stop the measurements.

    • David Castillo says:

      Hi Dammy, I find the exercise you did interesting. Could you help me by sharing the code so I can review it? I am interested in something similar.
      davecastle73@hotmail.com
      Regards

  • dammy says:

    okay thanks

  • Ahmed says:

    Hello Vangos Pterneas,

    I am using the Kinect sensor for Xbox One. I am new to this and I am trying to work on hand gestures, but I could not find any help. Do you have any code for that, so I can test different hand gestures?

    Please help me; I am new to programming.

    Thank you,

    best regards,

    Ahmed

  • Ahmed says:

    Your code***

  • Andressa Leite says:

    Hello. What is the unit of measure for the length of points in SDK v1?

  • Ahmed says:

    Hi,
    Is it possible to feed previously recorded depth images to the body-detection algorithm for research purposes?

  • jansett says:

    Good evening. I am a computer science student, and I’m currently doing research on how to measure human body parts using Kinect, for example the waist. May I ask if you know how to code it, and what code is needed to do it? Thank you for your help.

  • Giorgi Japiashvili says:

    Hello Vangos,

    Is there any way to process the data coming from 2 different Kinect v2 sensors connected to one computer? I want to increase the field of view by using 2 different sensors to track the same body.

    • Hi Giorgi. The drivers do not allow you to have 2 Kinect sensors connected on the same computer. The drivers will only recognize one of them. To use many sensors, you’ll need many computers that communicate using sockets.

  • Akhil Nalliboina says:

    I want to operate or control a button at a specific distance. How can we do that with the help of your code?

  • Hi, I am writing a research paper and I have encountered a problem. I want to show the coordinates and the Z parameter on screen during real-time acquisition. I’ve applied many techniques but have failed to do so. Please help me out. Looking forward to your response.

    Regards,

  • Neza says:

    Hello,
    How can I implement this code in hand-tracking code?

  • Sravan Kumar Challa says:

    Thanks for the great tutorial.

    I have collected human treadmill-walking data using Kinect v2 (the Kinect was placed in front of the treadmill). I have 3D coordinates for the hip, knee, ankle, and foot. Now I want to calculate the angles between them separately while walking. Could you please help me with how to calculate the hip, knee, and ankle trajectories?

    Thank you…
