
Over the past few days, I have received many requests about Kinect color-to-depth pixel mapping. As you probably already know, the Kinect streams are not properly aligned: the RGB and depth cameras have different resolutions, and their points of view are slightly shifted. As a result, more and more people have been asking me (either in the blog comments or by email) how to properly align the color and depth streams. The most common application they want to build is a cool green-screen effect, just like the following video:



As you can see, the pretty girl is tracked by the Kinect sensor and the background is totally removed. I can replace the background with a solid color, a gradient fill, a random image, or even a video! This example demonstrates how to use a video background and turn your Kinect apps into a full-featured home entertainment system!

Nice, huh? So, I created a simple project that maps a player’s depth values to the corresponding color pixels. This way, I could remove the background and replace it with something else. The source code is hosted on GitHub as a separate project. It is also part of Vitruvius.

Read the tutorial to understand how Kinect coordinate mapping works and create the application by yourself.


How background removal works

When we refer to “background removal”, we mean keeping the pixels that form the user and removing everything else that does not belong to the user. The depth camera of the Kinect sensor comes in handy for determining a user’s body. However, we need to display the RGB color values, not the depth distances, so we have to specify which RGB values correspond to the user’s depth values. Confused? Don’t be.

Using Kinect, each point in space has the following information:

  • Color value: Red + Green + Blue
  • Depth value: The distance from the sensor

The depth camera gives us the depth value and the RGB camera provides us with the color value. We map those values using CoordinateMapper. CoordinateMapper is a property of the Kinect sensor that determines which color values correspond to each depth distance (and vice-versa).
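To make this concrete, here is a minimal sketch of mapping a single depth pixel to color space. This is not code from the project; `_sensor`, the coordinates, and the depth value are illustrative assumptions:

```csharp
// Illustrative sketch (not part of the project): map one depth pixel to color space.
// Assumes _sensor is an open KinectSensor.
CoordinateMapper mapper = _sensor.CoordinateMapper;

// A point near the center of the 512x424 depth frame, 2 meters away from the sensor.
DepthSpacePoint depthPoint = new DepthSpacePoint { X = 256, Y = 212 };
ushort depthInMillimeters = 2000;

ColorSpacePoint colorPoint = mapper.MapDepthPointToColorSpace(depthPoint, depthInMillimeters);

// colorPoint.X and colorPoint.Y are coordinates within the 1920x1080 color frame.
// They may be negative infinity when no valid mapping exists for that pixel.
```

The frame-level overloads used later in this tutorial do exactly this, but for every depth pixel at once.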

Please note that the RGB frames (1920×1080) are larger than the depth frames (512×424). As a result, not every color pixel has a corresponding depth mapping. However, body tracking is performed primarily using the depth sensor, so there is no need to worry about missing values.

The code

In the GitHub project I shared, you can use the following code to remove the background and get the green-screen effect:

void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
    var reference = e.FrameReference.AcquireFrame();

    var colorFrame = reference.ColorFrameReference.AcquireFrame();
    var depthFrame = reference.DepthFrameReference.AcquireFrame();
    var bodyIndexFrame = reference.BodyIndexFrameReference.AcquireFrame();

    if (colorFrame != null && depthFrame != null && bodyIndexFrame != null)
    {
        // Just one line of code :-)
        camera.Source = _backgroundRemovalTool.GreenScreen(colorFrame, depthFrame, bodyIndexFrame);
    }
}

As you can see, all the magic relies on a single class: BackgroundRemovalTool. To remove the background, we need the color frame data, the depth frame data and, of course, the body index data.

The BackgroundRemovalTool class has the following arrays of data:

  • WriteableBitmap _bitmap: The final image with the background removed
  • ushort[] _depthData: The depth values of a depth frame
  • byte[] _bodyData: The information about the bodies standing in front of the sensor
  • byte[] _colorData: The RGB values of a color frame
  • byte[] _displayPixels: The RGB values of the mapped frame
  • ColorSpacePoint[] _colorPoints: The color points we need to map

It also uses an image source (WriteableBitmap) to create the final bitmap image. The CoordinateMapper is passed as a parameter from the connected Kinect sensor.

Let’s head to the GreenScreen method. Firstly, we need to get the dimensions of each frame (remember, frames have different widths and heights):

// Color frame (1920x1080)
int colorWidth = colorFrame.FrameDescription.Width;
int colorHeight = colorFrame.FrameDescription.Height;
// Depth frame (512x424)
int depthWidth = depthFrame.FrameDescription.Width;
int depthHeight = depthFrame.FrameDescription.Height;
// Body index frame (512x424)
int bodyIndexWidth = bodyIndexFrame.FrameDescription.Width;
int bodyIndexHeight = bodyIndexFrame.FrameDescription.Height;

Then, we need to initialize the arrays. Initialization happens only once, so that we avoid allocating memory every time a new frame arrives.

if (_bitmap == null)
{
    _depthData = new ushort[depthWidth * depthHeight];
    _bodyData = new byte[depthWidth * depthHeight];
    _colorData = new byte[colorWidth * colorHeight * BYTES_PER_PIXEL];
    _displayPixels = new byte[depthWidth * depthHeight * BYTES_PER_PIXEL];
    _colorPoints = new ColorSpacePoint[depthWidth * depthHeight];
    _bitmap = new WriteableBitmap(depthWidth, depthHeight, DPI, DPI, FORMAT, null);
}

We now need to populate the arrays with new frame data. Before doing so, we check that the array lengths correspond to the dimensions we found earlier:

if (((depthWidth * depthHeight) == _depthData.Length) &&
    ((colorWidth * colorHeight * BYTES_PER_PIXEL) == _colorData.Length) &&
    ((bodyIndexWidth * bodyIndexHeight) == _bodyData.Length))
{
    // Update the depth data.
    depthFrame.CopyFrameDataToArray(_depthData);

    // Update the color data.
    if (colorFrame.RawColorImageFormat == ColorImageFormat.Bgra)
    {
        colorFrame.CopyConvertedFrameDataToArray(_colorData, ColorImageFormat.Bgra);
    }

    // Update the body index data.
    bodyIndexFrame.CopyFrameDataToArray(_bodyData);

    // Do the coordinate mapping here...
}

It’s time to use the coordinate mapper now. The coordinate mapper will map the depth values to the _colorPoints array:

_coordinateMapper.MapDepthFrameToColorSpace(_depthData, _colorPoints);

That’s it! The mapping is done. What remains is to specify which pixels belong to human bodies and add them to the _displayPixels array. So, we loop through the depth values and update the _displayPixels array accordingly.

for (int y = 0; y < depthHeight; ++y)
{
    for (int x = 0; x < depthWidth; ++x)
    {
        int depthIndex = (y * depthWidth) + x;
        byte player = _bodyData[depthIndex];

        // Check whether this pixel belongs to a human (0xff means "no player").
        if (player != 0xff)
        {
            ColorSpacePoint colorPoint = _colorPoints[depthIndex];

            int colorX = (int)Math.Floor(colorPoint.X + 0.5);
            int colorY = (int)Math.Floor(colorPoint.Y + 0.5);

            if ((colorX >= 0) && (colorX < colorWidth) && (colorY >= 0) && (colorY < colorHeight))
            {
                int colorIndex = ((colorY * colorWidth) + colorX) * BYTES_PER_PIXEL;
                int displayIndex = depthIndex * BYTES_PER_PIXEL;

                _displayPixels[displayIndex + 0] = _colorData[colorIndex];     // Blue
                _displayPixels[displayIndex + 1] = _colorData[colorIndex + 1]; // Green
                _displayPixels[displayIndex + 2] = _colorData[colorIndex + 2]; // Red
                _displayPixels[displayIndex + 3] = 0xff;                       // Alpha
            }
        }
    }
}

This would result in a bitmap with transparent pixels for a background and colored pixels for the human bodies. Finally, here is how the WriteableBitmap is updated:

// Just some Windows bitmap handling...
_bitmap.Lock();

Marshal.Copy(_displayPixels, 0, _bitmap.BackBuffer, _displayPixels.Length);
_bitmap.AddDirtyRect(new Int32Rect(0, 0, depthWidth, depthHeight));

_bitmap.Unlock();

Back in the XAML code, you can change the background of the Grid (or whatever element is behind the Image element) to get the background of your choice. For example, this markup results in the following image:

<Grid>
    <Grid.Background>
        <SolidColorBrush Color="Green" />
    </Grid.Background>

    <Image Name="camera" />
</Grid>

Kinect 2 background removal (solid color background)

While this markup results in a football stadium background:

<Grid>
    <Grid.Background>
        <ImageBrush ImageSource="/Soccer.jpg" />
    </Grid.Background>

    <Image Name="camera" />
</Grid>

Kinect 2 background removal (image background)

Enjoy and share if you like it!


View the complete source code.

PS: Vitruvius

The BackgroundRemovalTool is part of Vitruvius, an open-source library that will speed up the development of your Kinect projects. Vitruvius supports both version 1 and version 2 sensors, so you can use it for any kind of Kinect project. Download it and give it a try.

Vangos Pterneas

Vangos Pterneas is a software engineer, book author, and award-winning Microsoft Most Valuable Professional (2014-2019). Since 2012, Vangos has been helping Fortune-500 companies and ambitious startups create demanding motion-tracking applications. He's obsessed with analyzing and modeling every aspect of human motion using AI and Maths. Vangos shares his passion by regularly publishing articles and open-source projects to help and inspire fellow developers.


  • Serge says:

    Great job!
    But… :)

    Do you think, is it possible to display a high resolution image in the “depth mask”?
    Best regards

    • Hi Serge. The depth image has a resolution of 512×424. The color HD image is 1920×1080. As a result, you can’t map every color point to a depth one.


      • Francesco says:

        Hello, thanks for this useful tutorial. Actually it is possible to map the depth space to color points, and this is how the CoordinateMapping-Basics example from the SDK works. In this case for each color point you get the corresponding depth coordinate, and then you check the bodyIndexData frame at this coordinate for a value 0x0 or 0xff.
        With this approach you have a higher resolution image but you have to scan a 1920×1080 frame rather than a 512×424.

  • Joan says:

    Nice! Do you know if it’s possible to work with actionscript ?

  • rabehi younes says:

    Hello there ;)
    First l would like to say thnx for sharing this awesome tutorial and code.

    Well, I’m working on real-time 3D reconstruction using Kinect sensors and I don’t know where to start.
    I would like to get something like this:

    It’s very important ,plz help me

    thnx in advance :D

    • Hello. What you need to accomplish requires some thorough understanding of the depth stream and depth data manipulation. I suggest you start with the Kinect Fusion examples included with the SDK.

  • Ali says:

    Hi There;

    Thank you so much for these awesome tutorials; they really helped me understand the basics.

    So I have to obtain the XYZ coordinates of a marker placed in front of the Kinect.
    I have been told that I need to first perform camera calibration.
    Do you know if I am supposed to perform stereo calibration first?
    I understand I have to get the intrinsic and extrinsic parameters of the camera, but I am not too sure how to perform the stereo calibration.

    Also, how do you suggest that I get the XYZ coordinates of a marker placed in front of the Kinect?
    I am looking at the Coordinate Mapping example, but you have been trying to align the skeleton.

    What should I do for just a point?

    Sorry if my questions are confusing; I sort of confused myself. All this is new to me.

    • Hi Ali. Could you elaborate on your use-case scenario? What are you trying to accomplish? Do you want to detect an object other than the human body?

      • Ali says:


        Sorry if my question was confusing.

        So what I am trying to do is get the real-world coordinates (XYZ) of a marker (a blob) which is in front of the camera, and no, I am not dealing with skeleton tracking.
        So my question is:
        (1) Does the depth data I am getting only contain the Z distance (i.e. how far in front of the camera the blob is), or does it give me the (XYZ) position of that point in space with respect to the camera? If not, how can I get the XYZ coordinates of that point in space?

        N.B I am using the Kinect to only record what I want and the processing (blob detection) is being done on MATLAB

        I hope I am much clearer now. Thanks again!

        P.S. when is your book coming out?

  • Jo says:

    I am working with the depth frame because it meets the requirements of my application, and I also have to detect the human body. Since the body index frame has the same size as the depth frame, is it possible to find the x, y, z of body parts from the Body Index frame? I am a C++ programmer. Does your new book also include C++?

  • MoJo says:

    Hi Vangos
    Thanks for posting about Kinect v2. Great stuff!
    I am using the background removal code and would like to align an image over the Head joint but it’s always off (a bit to the right). It works perfectly without the background removal code.
    Joint headJoint = body.Joints[JointType.Head];

    if (headJoint.TrackingState == TrackingState.Tracked)
    {
        ColorSpacePoint csp = sensor.CoordinateMapper.MapCameraPointToColorSpace(headJoint.Position);
        Canvas.SetLeft(image, csp.X - (image.Width / 2));
        Canvas.SetTop(image, csp.Y - (image.Height / 2));
    }

    I’ve tried using MapCameraPointToDepthSpace and that puts it way off to the top/left.
    Any help would be appreciated, thanks!

  • Steve says:

    Hi Vangos,

    Nice work on your background removal app.

    I’m using the fusion framework as in the SDK example app to create 3D surface using STL format. Additionally I would like to add the feature that if the user enters a specific color range, every object or more precisely point/pixel not corresponding to that color range will be omitted in the mesh (stl) export. (Some kind of color segmentation feature)

    My approach is to append your code to the end of the fusion pipeline but just before the image is rendered and displayed. Same for the stl export method. So from your point of view what is the best way to integrate the code into the fusion sample with respect to the desired feature?

    Best regards


    • Hi Steve,

      You’ll need to avoid refreshing the depth and body index data. The background removal and the 3D functionality should be calculated using the same depth/body-index values. Otherwise, the refresh methods will be called more than once, resulting in much higher overhead.


  • Dhaval says:

    Hi Vangos,

    I am following the tutorial per your directions.
    There is no error, but it doesn’t give any output to the screen.

    Any Idea??

    • Hi Dhaval,

      Are you standing far enough from the sensor? Could you add a breakpoint to the FrameArrived method and tell me if it’s hit?

      • Dhaval says:

        By keeping the breakpoint on the FrameArrived method, it gives me a TypeLoadException on the following line:
        “_sensor = KinectSensor.GetDefault();”

        And the Exception contains the following information:
        “An exception of type ‘System.TypeLoadException’ occurred in KinectBGRemoval.exe but was not handled in user code
        Additional information: Requested Windows Runtime type ‘WindowsPreview.Kinect.KinectSensor’ is not registered.”

        • Hi again Dhaval. It seems that you are using WinRT. So, open the Package.appxmanifest file, select the Capabilities tab, and check the Microphone and Webcam capabilities. WinRT requires all apps that use an external camera to register these capabilities.

          Let me know if that solved your problem.

          • Dhaval says:

            Hi Vangos,

            Yes, I am using WinRT.
            And I did what you told me to do.
            But it gives me the same exception.
            No change. :(
            What is the issue???

          • Hey Dhaval,

            Are you running Windows 8, Windows 8.1, or Windows 10? If you are running Windows 10, you may need to check this guide.

            Also: Try to install the source code of any project included in the SDK Browser (Windows Store tab). Are you getting the same error?

  • Amna says:

    Hello Vangos,
    Thank you so much for the nice tutorial. I want to ask that I have a data which is recorded using Kinect v1.X. I have only depth and RGB images. Is it possible I can subtract the background from that sequence of images?

    Thank you

    • Hi Amna. If you have the depth and RGB images as bitmaps, you won’t be able to use the same technique to subtract the background. This technique works in real time, using the RGB, depth, and body-index data. Moreover, it relies heavily on CoordinateMapper. You could possibly use OpenCV or other “traditional” software.

  • Zeus says:

    Hi Vangos,
    I am currently writing a program which uses hand gestures to control devices. After a long time of research and trial programming, I am only able to detect the hand gesture together with the body, for example: ”kinectPointerPoint.Properties.HandType == bodyHandPair.HandType”.
    But my project requires detecting the hand only, which means it should detect hand gestures even when the body is not in front of the Kinect. I have tried several ways to do it, but unfortunately it still doesn’t work. Would you mind giving me some suggestions or guiding me on how I should write the code to detect the hand only? Thank you very much, and have a nice day.

    Best Regards

    • Hi Zeus. Kinect requires you to have at least one person tracked. A tracked person could be visible from head to feet or from head to waist. You won’t be able to detect hand gestures when e.g. sitting at a distance of 20cm. You can detect the hand, but only if Kinect has tracked a body. Vitruvius contains a lot of pre-defined gestures to get started with.


  • Tevfik AKKUS says:

    Hi Pterneas,
    I want to measure the dimensions (length, width, and depth) of a carton box beneath a Kinect v2 sensor, and calculate the volume of the box by multiplying length × width × depth in cm. I want to measure length and width using the color stream of the Kinect v2, because the color stream’s resolution is high, and measure depth using the depth stream. Can you help me with this project? The background will be a black conveyor belt. The Kinect v2 sensor is mounted 120 cm above the conveyor belt. I know it is easy for you, but it is hard for me.
    Tevfik AKKUS

  • Sherry says:

    Hi Pterneas,
    How can I coordinate-map the image of only one person?
    Can you give me a tip for that? Thanks.

  • Akhil Premraj says:

    Can someone tell me how I can get this to Unity?

  • George Alain says:

    Hi Again,
    How can we do the same but in Windows RT (Store application)?
    It gives errors when using bitmap.Lock and the other bitmap functions.

  • Peter says:


    Any idea how we can edit the code to support more than one body?
    At the moment it only shows one person.
    Thanks in advance

  • George says:

    Hi again, I have tried to add some kind of mask to this code, similar to what you did in Face Basics when you drew some ellipses on the eyes, nose, and mouth.
    However, there is always some kind of problem: the drawn ellipses are always shown about 2 cm from the face. It only works well if I am facing the middle of the sensor; otherwise they are never exactly on the eyes, nose, and mouth.
    How can I solve this?
    I tried using the coordinate mapper but had no luck. Or I did it wrong.

  • Hello Vangos,

    Very useful utility.
    Can you suggest how I can restrict the distance at which users are captured? Say I want to only show users standing within 2.5 meters of the Kinect on the green screen.

    Thanks in Advance,

    • Hi Munish. That’s pretty easy to do. You simply get the Z value of the SpineBase joint:

      double distance = body.Joints[JointType.SpineBase].Position.Z;

      The result is measured in meters, so you can just compare that distance to whatever number you like :)
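      As a hedged sketch of that suggestion (the 2.5-meter threshold and the _bodies array are assumptions, not names from the project):

```csharp
// Illustrative sketch: skip bodies standing farther than 2.5 meters from the sensor.
const double MAX_DISTANCE = 2.5; // meters (assumed threshold)

foreach (Body body in _bodies)
{
    if (body == null || !body.IsTracked) continue;

    // Z is the distance from the sensor, in meters.
    double distance = body.Joints[JointType.SpineBase].Position.Z;

    if (distance <= MAX_DISTANCE)
    {
        // Close enough: include this body in the green-screen output.
    }
}
```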

  • Javier says:

    Thanks for such great work! I see in your example how the background disappears while the girl stays in the camera. What happens if there is an object, like a table, between the Kinect and the person in front of it? Would the part of the person behind the table also disappear? I would like to achieve that effect with the script you uploaded :).
    Thanks in Advance!

  • Hoang Anh says:

    Hi Vangos
    I’m working on a background removal app with my Kinect 2. I need to add some image processing to my project to increase accuracy and reduce noise. I tried Emgu CV and successfully got rid of some noise, but the image loses a little bit on the top of the head. Can you suggest what library/method I should use to extract the full body?
    Thanks in advance



  • Dhaval says:

    Hi Vangos,

    In this Background Removal application, why are the shoes not clearly visible?
    Any idea on this?

    I tried to Google it as well but didn’t find anything.

    Can you please help with this?


    Waiting for your quick reply.

  • Karla says:

    Hello Vangos, thank you very much for the tutorials of your amazing work. I can’t wait for your book! :-)
    I would like to use this to train some human body shapes, so I was wondering how can I obtain the xyz shape coordinates of the extracted body contour in your algorithm. I would really appreciate it if you can help me with any ideas.
    Thanks in advance!

    • Hi Karla. Thanks for your nice comment. You can use the BodyIndexFrame class to get information about whether a specific point belongs to a user or not. Let’s suppose that one player is tracked by the sensor. The BodyIndex frame would be an array of bytes: a value of 0xff means the specific point does not belong to a player, while the other values identify the player the point belongs to.

      So, in your algorithm, you just need to identify the neighbors of each player point.

      • Karla says:

        Hi Vangos! I’ve been trying to obtain the BodyIndexFrame data you were talking about… Analyzing the code, I thought that the _bodyData array on this line:
        was the one with the information I need, but I am observing the array with a breakpoint and QuickWatch and all I see is 255 value corresponding to depth pixels that do not belong to a body, although there is someone in front of the sensor. I made sure that the person in front of the Kinect was detected in a normal run and then, I told this person to keep that position in order to be detected in the debug run, but still there are no changes on the array… keeps showing 255’s on all its values.
        I wonder if I am looking on the wrong array or if it’s a debugging thing…
        Thank you very much for your guidance and support!
        Best regards.

        • Hi Karla. Try to use System.Diagnostics.Debug.WriteLine(_bodyData[i]);

          This way, you can check the values real-time without breakpoints.

          Let me know what you find.

          • Karla says:

            Hi Vangos. The “i” variable on that line, should be “depthIndex”?

            I did that and the output gives something like this:
            … (the same number until I get out from visible range)
            Although, sometimes it is only 4’s, 5’s or even 1’s. I understand it shows the decimal output, what I don’t really get is why there is a unique value in the whole batch. Sorry if I’m causing you trouble, but I would like to completely understand it.

            Thank you so much for your time.

            Best regards!

          • Hi Karla. Are you getting any 0s? Or 0xff?

          • Karla says:

            Hi Vangos, nope! When there’s no subject in front of the Kinect sensor, the Output window doesn’t show anything. It displays these numbers (1,3,4,5) when someone is detected and stops printing when this person gets out of visible range, but no 0’s nor 255’s.

          • That’s odd. What if you run the CoordinateMapping-Basics example from the SDK Browser? Could you try this example and tell me what’s the result?

          • Karla says:

            The example from the SDK Browser does well, like your Background Removal example!

            What is weird is that whenever I try to read what happens in that matrix something goes wrong… I moved the System.Diagnosis line to another point in the code and now it shows 255 forever, doesn’t matter if I’m in front of the Kinect or not.

          • Karla says:

            Hey Vangos! AT LAST xD

            I made a bit conversion in Console.Writeline and now I am able to see 0xFFs if the Kinect gets no detection/tracking, but when I’m in front of the sensor 0x00s get through!! So, this means FFs are the background and 00s are the specific points that belong to a player in the frame, right?

            Thank you very much for your patience!

          • Hi Karla. I’m glad that you spotted the error. The values of the BodyIndexFrame are, indeed, in binary. You are right about the numbers:
            0xff —> No player
            0x00 —> Player exists

          • Karla says:

            Hi Vangos,

            Thanks again for all your help!

            It means a lot to us -as a community of users- that you are always willing to provide assistance, I personally appreciate your concern in resolving any query until the very end :-)

            Best regards,

        • I’m glad to hear that :)

          Hope everyone enjoys Kinect development as much as I do. Keep up the good work.


  • Lou-Ann Raymond says:


    I might have a stupid question but how did you get the video? I am trying to record a video of my color image with the tracking points.

    thanks in advance

  • Dave says:

    Hi Vangos, thank you for the great tutorial. Could you please explain how I can map the green-screen image to 1920×1080? Because I would like to make the body green, for example (with a small change in your tool), and show it on the color camera, but it’s hard to align the frames.
    Thank you in advance and once again for great post :)

    • Hello Dave. Here’s the process you can follow:
      1) Loop through the color points.
      2) Map each color point to the depth space.
      3) Check the mapped point’s value in the body-index frame (0xff means the point does not belong to a player).
      4) If the body-index value indicates the point belongs to the player, paint the color point with a green value.
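      A hedged sketch of those steps, modeled on the SDK’s CoordinateMapping-Basics approach (the array names are assumptions, and _displayPixels here is a full-HD-sized buffer, unlike the depth-sized one in the article):

```csharp
// Sketch: map every color pixel back to depth space, then paint the pixels
// whose body-index value indicates a player (anything other than 0xff).
DepthSpacePoint[] depthPoints = new DepthSpacePoint[colorWidth * colorHeight];
_coordinateMapper.MapColorFrameToDepthSpace(_depthData, depthPoints);

for (int colorIndex = 0; colorIndex < depthPoints.Length; colorIndex++)
{
    DepthSpacePoint point = depthPoints[colorIndex];

    int depthX = (int)Math.Floor(point.X + 0.5);
    int depthY = (int)Math.Floor(point.Y + 0.5);

    if (depthX >= 0 && depthX < depthWidth && depthY >= 0 && depthY < depthHeight)
    {
        int depthIndex = (depthY * depthWidth) + depthX;

        if (_bodyData[depthIndex] != 0xff)
        {
            // This 1920x1080 color pixel belongs to a player:
            // paint it green (BGRA byte order) in the full-resolution output.
            int displayIndex = colorIndex * BYTES_PER_PIXEL;
            _displayPixels[displayIndex + 1] = 0xff; // Green channel
            _displayPixels[displayIndex + 3] = 0xff; // Alpha
        }
    }
}
```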


  • Hally Zambrano says:

    Hi vangos,

    I read most of your articles. They are really great and help me a lot. I just want to ask something: this is for Unity. I already got the point cloud and represented it using cubes. I want to get the color texture of the specific point to apply to it. How can I do that? Hope you can help me with this. Thank you very much.

  • Hally Zambrano says:

    thanks vangos,..

    you’re amazing…

  • Huang Qian says:

    Hello Vangos. Recently I made a hand segmentation program. First, I had to remove the background and the body from my image, so I used the body index and a depth threshold. Next, I used a map function to get the RGB value of the hand.
    ColorSpacePoint colorPoint = _colorPoints[depthIndex];
    int colorX = (int)Math.Floor(colorPoint.X + 0.5);
    int colorY = (int)Math.Floor(colorPoint.Y + 0.5);
    int colorIndex = ((colorY * colorWidth) + colorX) * Constants.BYTES_PER_PIXEL;
    I want to know whether colorX and colorY are a number of pixels or the exact distance of the hand. And why is Constants.BYTES_PER_PIXEL equal to (PixelFormats.Bgr32.BitsPerPixel + 7) / 8? I don’t understand the meaning.
    Can you help me?

    • Hello Huang. ColorX and ColorY are the coordinates within the 1920×1080 array. So, they are measured in pixels.

      BYTES_PER_PIXEL is always 4, since we are using a 32-bit BGRA format: 1 byte each for Blue, Green, Red, and Alpha transparency. The expression (PixelFormats.Bgr32.BitsPerPixel + 7) / 8 is just the general way to convert bits per pixel to bytes per pixel, rounding up: (32 + 7) / 8 = 4 in integer arithmetic.

  • madonna sabry says:

    hi vangos
    First of all, these tutorials are the best ever.
    I want to do the same, but for the hands, not the whole body.

  • Kashyap Chetan Kotak says:

    Is it possible if I provide a video shot from mobile as input?

  • CHOEUNSOL says:

    Hi!! Thank you for your help!!! It is very good.
    But I do not know how to mark the right-hand and left-hand positions in this source.
    How do I map the hand positions over the image on the screen?

  • xiaoning says:

    hi vangos
    I would like to ask: why is black hair not fully displayed?

  • Arif says:

    Hi Vangos,

    Will it be possible to have a screenshot or photo button on this green screen program?
    Do you have any references that i can follow?
    Thank you for this tutorial.

  • Winston says:


    I was wondering how I can restrict the distance to less than 3 meters so the removal only works for one person? I normally use double distance = body.Joints[JointType.SpineBase].Position.Z; to track distance, but I am unable to separate players in this example. Do you have any suggestions?

    • Hello Winston. You’ll need to manually loop through the byte[] array of the BodyIndex frame. Points that do not belong to a body have a value of 0xff, while points that belong to player 1 will have a different value than the points that belong to player 2. This way, you’ll be able to distinguish between the players.
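      As a sketch of that loop (the targetPlayer index is an assumption; in the Kinect v2 SDK, body-index values identify tracked players and 0xff means “no player”):

```csharp
// Illustrative sketch: isolate a single player using the body-index values.
byte targetPlayer = 0; // assumption: the player index we want to keep

for (int depthIndex = 0; depthIndex < _bodyData.Length; depthIndex++)
{
    if (_bodyData[depthIndex] == targetPlayer)
    {
        // This depth pixel belongs to the player we want to keep;
        // map it to color space and copy it, as in the GreenScreen method.
    }
}
```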

      • Winston says:

        Thank you for your reply. I have been using the student edition of your Vitruvius software for some time. When looping through the byte[] array, I receive player values of 1-3. Is there any way to tell the distance of player 1 vs player 3? Or compare them to a body tracking ID?

        • There is no direct correlation between the two streams. You could do the following as a workaround, though:

          You could map the 3D coordinates of the SpineBase joint to the 2D depth frame. You could then use the X and Y values to compute the index of the point. Since the Depth and BodyIndex frames have the same size, that index will give you the BodyIndex value you want.
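          A hedged sketch of that workaround (the mapping calls are from the official SDK; variable names are assumptions):

```csharp
// Sketch: find the body-index value of a tracked body by projecting
// its SpineBase joint onto the depth/body-index frame.
CameraSpacePoint position = body.Joints[JointType.SpineBase].Position;
DepthSpacePoint depthPoint = _coordinateMapper.MapCameraPointToDepthSpace(position);

int depthX = (int)Math.Floor(depthPoint.X + 0.5);
int depthY = (int)Math.Floor(depthPoint.Y + 0.5);

if (depthX >= 0 && depthX < depthWidth && depthY >= 0 && depthY < depthHeight)
{
    int depthIndex = (depthY * depthWidth) + depthX;

    // The body-index value of this body; position.Z is its distance in meters.
    byte playerIndex = _bodyData[depthIndex];
}
```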


  • Mykel Kristoffer Divino says:

    Hi, how were you able to solve the undetectable (black) hair problem? Thanks

  • Chris Dunn says:

    Trying to only show certain pixels in the frame. The image seems to flicker badly when toggleRawMap is true. I put this before using the depth filter. Using your LightBuzz SDK depth image example. Is there a better way?

    for (int i = 0; i < frame.DepthData.Length; i++)
    {
        if (frame.DepthData[i] > minDepth && frame.DepthData[i] < maxDepth)
            frame.DepthData[i] = 8000;

        if (toggleRawMap)
            frame.DepthData[i] = 0;
    }
