
Any weight-lifters reading this blog? If you are trying to combine technology with your daily weight-lifting routine, look no further. Today, I'll show you how to recognize a weight-lifting bar and display its distance from the floor using Kinect. This way, you'll be able to track your progress and see how well you are performing, day after day. Here is a video of what we are going to accomplish by the end of this tutorial (please keep in mind that I'm an engineer, not a body-builder!).

This is how the app can track the bar and the plates:

Cool, huh? Let's see how to implement it in code…


Video & Source Code

As usual, I’m providing you with the complete source code, as well as a demo video.

The Bar Detection Algorithm

The heart of our algorithm is the BarDetectionEngine class. This class encapsulates all of the required details I’m going to cover below. I am not going to present the complete source code, since it’s more than 500 lines of C# code. I am rather going to explain the thought process and the logic behind the code.

Kinect can recognize human body joints out-of-the-box; body tracking is built into the SDK. However, there is no built-in way to recognize physical objects, such as weight-lifting bars. So, how are we going to tackle this?

Kinect is, primarily, a depth processor: it can accurately measure the distance between any physical point in the 3D space and the device. To detect the bar, we simply need to find the depth points that constitute it. As I described in my introductory article, the Depth frame is just a collection of raw distance values, expressed as a 512×424 array of ushort values. To increase the performance of the application, I'll be using pointers, so remember to enable the "Allow unsafe code" option in the project's Build properties:

(Screenshot: the "Allow unsafe code" option in the Visual Studio project Build settings.)

In short, the algorithm consists of the following steps:

  1. Get the array of the raw depth data
  2. Get the array of the body-index data
  3. Detect the position of the Hand joints
  4. Find the depth points that have the same distance to the hand joints
  5. Exclude the depth points that belong to the body
  6. Smooth the values and set thresholds
  7. Process the resulting points to calculate the distance/height/angle of the bar

Step 1 – Get the array of the raw depth data

Traditional object tracking is accomplished by processing RGB color images. Kinect, however, offers a far more powerful means of understanding the environment: its depth sensor. Using its infrared time-of-flight sensor, Kinect can provide us with a point cloud of 512×424 points, along with the distance between each point and the sensor plane.

Each point of the depth array is nothing but a raw distance value (measured in millimeters).

To track the bar, we’ll need to detect the points that have a similar distance to the hands.

Here’s how we can access the depth array:

using (var depthFrame = frame.DepthFrameReference.AcquireFrame())
    if (depthFrame != null)
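Inside the null check, the raw values can be copied into a reusable array. A minimal sketch (in practice, you would allocate `depthData` once, when the sensor opens):

```csharp
// 512 x 424 = 217,088 raw distance values, one ushort per pixel.
ushort[] depthData = new ushort[depthFrame.FrameDescription.LengthInPixels];

// Copy the distances (in millimeters) into the array.
depthFrame.CopyFrameDataToArray(depthData);
```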

Step 2 – Get the array of the body-index data

The Kinect SDK provides us with another handy frame source: BodyIndex. BodyIndexFrames specify which of the aforementioned depth points belong to a human body. By combining the Depth and BodyIndex frames, we can tell which distance values belong to a person and which do not.

Accessing the body-index data through C# code is pretty simple:

using (var bodyIndexFrame = frame.BodyIndexFrameReference.AcquireFrame())
    if (bodyIndexFrame != null)
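As with the depth frame, the body-index values can be copied into a byte array (a sketch; `bodyIndexData` would normally be allocated once):

```csharp
// One byte per depth pixel: values 0-5 identify a tracked body,
// while 0xff means the pixel belongs to no player.
byte[] bodyIndexData = new byte[bodyIndexFrame.FrameDescription.LengthInPixels];
bodyIndexFrame.CopyFrameDataToArray(bodyIndexData);
```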

Step 3 – Detect the position of the Hand joints

Detecting the position of the player’s hands is straightforward, too. If you want to learn more about joint tracking using Kinect, follow my previous tutorial.

var handLeft = body.Joints[JointType.HandLeft].Position;
var handRight = body.Joints[JointType.HandRight].Position;

Step 4 – Find the depth points that have the same distance to the hand joints

So, we now have the position (X, Y, Z) of the hands in 3D space. We also know the Z-distance of every point within our field of view. Since the weight-lifting bar is grabbed by the hands, its points will be close to the hand points. So, we'll loop through the depth array and find the points that have a similar Z-distance:

ushort depth = depthData[depthIndex];

Step 5 – Exclude the depth points that belong to the body

Even though the above code should work, there is one major issue: when the bar is very close to the body, the algorithm picks depth points that belong to the chest, hips, or shoulders! Obviously, these points do not belong to the bar. That’s exactly why we need the body-index frame! The body-index frame determines whether a specific point belongs to the body or not. As a result, we can exclude the points that belong to the body and keep the rest of them. That’s the code that does the trick:

// A body-index value of 0xff means "no player"; anything else belongs to a body.
if (bodyIndexData[depthIndex] != 0xff) continue;
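Putting steps 4 and 5 together, the filtering loop might look like the sketch below. The tolerance value is illustrative; the actual thresholds live in BarDetectionEngine.cs:

```csharp
// Hand positions are in meters, while depth values are in millimeters.
ushort handDepth = (ushort)((handLeft.Z + handRight.Z) / 2f * 1000f);
const ushort tolerance = 100; // +/- 10 cm around the hands (illustrative)

for (int depthIndex = 0; depthIndex < depthData.Length; depthIndex++)
{
    // Step 5: skip the points that belong to a tracked body (0xff = no player).
    if (bodyIndexData[depthIndex] != 0xff) continue;

    // Step 4: keep the points that are roughly as far away as the hands.
    ushort depth = depthData[depthIndex];
    if (Math.Abs(depth - handDepth) < tolerance)
    {
        // Candidate bar point; remember it for step 7.
    }
}
```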

Step 6 – Smooth the values and set thresholds

Optionally, we can smooth the values and specify minimum/maximum distances or bar rotations. In my code, I have set a threshold of 25 degrees bar rotation.
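The engine uses its own smoothing; as a generic illustration (not part of the original source), a short moving average over the last few frames is one simple way to stabilize the reported height:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative smoothing helper: averages the bar height
// over the last few frames to reduce jitter.
public class HeightSmoother
{
    private readonly Queue<float> _recent = new Queue<float>();
    private const int WindowSize = 5; // number of frames to average

    public float Smooth(float height)
    {
        _recent.Enqueue(height);
        if (_recent.Count > WindowSize) _recent.Dequeue();
        return _recent.Average();
    }
}
```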

Check the complete BarDetectionEngine.cs file.

Step 7 – Process the resulting points to calculate the distance/height/angle of the bar

Since we have the depth points that belong to the bar, we can now calculate its length by taking the leftmost and rightmost points. We can also use the CoordinateMapper to find the coordinates of those points in the 2D color space.
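As a sketch, mapping a candidate point to the color frame and deriving the length and angle from the two endpoints could look like this (the variable names are illustrative):

```csharp
// Map a candidate depth pixel to 3D camera space, then to the 2D color frame.
var depthPoint = new DepthSpacePoint { X = x, Y = y };
CameraSpacePoint cameraPoint = coordinateMapper.MapDepthPointToCameraSpace(depthPoint, depth);
ColorSpacePoint colorPoint = coordinateMapper.MapCameraPointToColorSpace(cameraPoint);

// Given the leftmost and rightmost camera-space points, length and tilt
// follow from simple 2D geometry (ignoring any difference in Z).
float dx = rightmost.X - leftmost.X;
float dy = rightmost.Y - leftmost.Y;
float barLength = (float)Math.Sqrt(dx * dx + dy * dy);
double barAngle = Math.Atan2(dy, dx) * 180.0 / Math.PI; // degrees
```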

And this is it, folks. You can explore the source code to see how I'm visualizing the bar and how I measure various other properties. The purpose of this tutorial is to showcase the power of processing raw depth frames using Kinect SDK v2.

Using the BarDetectionEngine

All of the above functionality has been encapsulated into a single class – BarDetectionEngine. You can use the BarDetectionEngine as follows:

// 1) Initialization
var barDetectionEngine = new BarDetectionEngine(sensor.CoordinateMapper, colorWidth, colorHeight, depthWidth, depthHeight);
barDetectionEngine.BarDetected += BarDetectionEngine_BarDetected;

// 2) Update
barDetectionEngine.Update(depthData, bodyIndexData, body);

// 3) Event handling
private void BarDetectionEngine_BarDetected(object sender, BarDetectionResult e)
{
    if (e != null)
    {
        var center = e.Trail;
        var height = e.BarHeight;
        var length = e.BarLength;
        var left = e.Minimum;
        var right = e.Maximum;
    }
}

“Wait! Do I need a weight lifting bar to try this?”

Well, not really. You do not need to have a weight lifting bar in your house or office. An ordinary broomstick would do the job just fine!

‘Til the next time, keep Kinecting!

PS: Vitruvius

If you enjoyed this article, consider checking Vitruvius. Vitruvius is a set of powerful Kinect extensions that will help you build stunning Kinect apps in minutes. Vitruvius includes avateering, HD Face, background removal, angle calculations, and more. Check it now.

Want to hire me for your next cutting-edge fitness app? Drop me an email.

Vangos Pterneas

Vangos Pterneas is a software engineer, book author, and award-winning Microsoft Most Valuable Professional (2014-2019). Since 2012, Vangos has been helping Fortune-500 companies and ambitious startups create demanding motion-tracking applications. He's obsessed with analyzing and modeling every aspect of human motion using AI and Maths. Vangos shares his passion by regularly publishing articles and open-source projects to help and inspire fellow developers.


    • Aaron says:

Thanks for all of your articles. I might try/revise this later, though reading through the .cs might be a chore.
      I’ve been banging my head against the wall this week trying to track a colored stick.
      I’m trying to track hand rotation(which as you know kinect body does poorly) so I altered my project and am using red sticks in hand – which so far I’m failing to map properly – I’m missing something.
      Anyway, would it be easier to track a specific color on a stick, mapping the camera space to the color space – or should i use your depth driven solution.

      • Hello Aaron. Thank you for your message. If you are planning to rely on the color stream, use EmguCV, which has pretty good algorithms for line/shape detection. If the stick is too narrow, the depth sensor may not easily detect it. You could also combine the results of the color and depth shape detection.

        • Aaron says:

I just came back to this after a year of playing with EmguCV (too slow converting images), and now this blog post is much clearer. I'm using short sticks to get hand rotation, and this blog post is very helpful. I'm especially using the depth-table trick (similar depths to find the contours).
Do you think Kinect will survive? Is MS sticking with it?

  • Satyajit says:

    After downloading the zip file and uncompressing, when I try to build the file in Visual Studio the code doesn’t work.
I've ensured that Microsoft.Kinect is properly referenced. I've also enabled unsafe code. Some of the errors shown include:
    ‘CameraSpacePoint’ couldn’t be found.
    ‘DepthSpacePoint’ couldn’t be found.
    ‘MultiSourceFrameReader’ couldn’t be found.

Visual Studio seems to suggest I'm missing a directive or assembly reference, and all of these types seem to be under Microsoft.Kinect.dll, yet I'm sure I've added it correctly. Any suggestions?

  • Satyajit says:

    I didn’t see that. If you know about any similar code for Kinect V1 can you please refer me to it? Thank you so much.

  • Klara says:


I would like to use the Kinect 2.0 to calculate forearm pronation/supination, but for this calculation I need 3 points in the room (as with a VICON motion-capture system). For example, thumb_right and 2 x wrist_right, but I only have one point for wrist_right. My idea is a staff/bar/stick that is longer than the wrist. I saw your video on YouTube and downloaded the code, but I can't use breakpoints (no symbols loaded). I want to rotate my forearm with the staff: if I can track the staff, I have 2 points, and the third point is wrist_right, so I can calculate the forearm pronation/supination.

My problem is that I can't track the object. I would need your code, but without breakpoints it is difficult to work through it.

Can you follow my idea? (Sorry, I'm German; my English is bad.)

  • Klara says:


I need code for object tracking (C# and WPF). I would like to track a staff and get its coordinates, because I want to calculate the forearm rotation. My first idea was to use the thumb, but its tracking is inaccurate.

I would like to use your code, but I can't adapt it to track a small staff (length 30 cm, width 5 cm). Do you have any idea how I can do this (I'm a novice programmer)?

    • Hi Klara. You would better use OpenCV or EmguCV for such a purpose.

      • samit says:

Hi, I got this error when I debug your program:

"A project with an Output Type of Class Library cannot be started directly. In order to debug this project, add an executable project to this solution which references the library project. Set the executable project as the startup project."

        • Hello. Navigate to the Solution Explorer window and right-click the project named “WeightLifting”. Select “Set as startup project” and it should be OK.

          • samit says:

It is not detecting properly. Could you please help me improve its sensitivity?

          • Hi Samit. What is the problem you are facing?

          • samit says:

Kinect is not detecting the bar properly. The bar is not in a horizontal position; it is inclined. I am not sure what to do.

          • Hi Samit. The algorithm is optimized to track a horizontal bar with a small 10-degree tilting. To track bars placed vertically, you’ll need a different algorithm.

You could probably identify the depth points close to the body that do not belong to the player and are above the head or below the SpineBase.

          • samit says:

No, no. I am using your code without any changes, and I want it in a horizontal position, not vertical.

I don't know why it is showing it in an inclined position.

The picture you uploaded on this page is totally horizontal; mine is not.

          • Try to clear your background, just in case the sensor detects additional objects.

          • samit says:

            I cannot find where you did this,
            Step 6 – Smooth the values and set thresholds
            Optionally, we can smooth the values and specify minimum/maximum distances or bar rotations. In my code, I have set a threshold of 25 degrees bar rotation.
            Check the complete BarDetectionEngine.cs file.

Where is the 25, sir?

          • You can modify the following line:

            if (barLength > handLength && Math.Abs(angle) < 25.0)

  • alejandro mises says:

Other programs work; however, this one gives me an error in the "BarDetectionEngine" file, specifically on the last line: "BarDetected?.Invoke(this, result);". I hope you can help…

    • Hi Alejandro. It seems that you are using an earlier version of C#. To fix the error, replace the following lines:

      BarDetected?.Invoke(this, result);

      with this:

if (BarDetected != null)
    BarDetected(this, result);

  • Eve Deeza says:

    Hi Vangos,

    I came across this blog recently and successfully built the solution in VS 2015. Do I need to make any changes to the code before building? I’m using a shorter bar just to test the tracking and all I see on the computer is myself. The bar tracking doesn’t show up. Is there a bar size/distance that is optimal for the default settings for the code? I was running it as is with no adjustments.

  • Malik says:

    This article totally inspires me! Thank you!
