A few months ago, I published a detailed guide about converting between Kinect’s 3D and 2D coordinates. Today, I’m going to show you how to accomplish Coordinate Mapping with minimum effort and only one line of C# code!
Background
Kinect understands the 3D space using an infrared sensor and a depth processor. The depth processor can measure the distance between the physical points and the device. Kinect SDK uses a structure named CameraSpacePoint to represent a point in the physical 3D space. A CameraSpacePoint is a set of three properties [X, Y, Z]. X is the distance in the horizontal axis, Y is the distance in the vertical axis, and Z is the distance (depth) between the point and the plane of the sensor.
The values are measured in meters. So, the CameraSpacePoint [1.5, 2.0, 4.7] is located 1.5 meters from the left, 2.0 meters from the top, and 4.7 meters from the sensor.
1 meter = 3.28 feet
1 meter = 39.37 inches
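For illustration, this is roughly how the example point above could be expressed in code (a minimal sketch; it simply constructs the SDK’s CameraSpacePoint struct):
using Microsoft.Kinect;

// 1.5 meters on the X axis, 2.0 meters on the Y axis,
// 4.7 meters away from the sensor plane.
var point = new CameraSpacePoint
{
    X = 1.5f,
    Y = 2.0f,
    Z = 4.7f
};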
This concept is illustrated in the figure below:
However, when we develop a Kinect app, we display everything on an ordinary 2D computer monitor. Somehow, we have to project the 3D points onto the 2D screen space. There are two screen spaces:
- Color Space: 1920×1080 pixels
- Depth/Infrared Space: 512×424 pixels
Obviously, points in the 2D space only have X and Y values, measured in pixels.
So, we have to convert meters to pixels! How’s that possible? I have thoroughly explained this process in my blog post Understanding Kinect Coordinate Mapping.
Coordinate Mapping is the native process of converting between the 3D and the 2D space.
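For reference, this is roughly what that native process looks like with the official SDK’s CoordinateMapper (a minimal sketch; sensor stands for your KinectSensor instance and position for any CameraSpacePoint):
using Microsoft.Kinect;

// Native SDK route: map a 3D camera-space point to the two 2D spaces.
CoordinateMapper mapper = sensor.CoordinateMapper;

ColorSpacePoint colorPoint = mapper.MapCameraPointToColorSpace(position); // 1920×1080
DepthSpacePoint depthPoint = mapper.MapCameraPointToDepthSpace(position); // 512×424
Vitruvius wraps calls like these into the single ToPoint extension method shown below.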
3D space
Using Vitruvius, Coordinate Mapping is as simple as typing one line of C# code. Let’s have a look at an example:
var position = body.Joints[JointType.Head].Position;
This is how we find the position of the Head joint using the official Microsoft SDK. The position variable is an [X, Y, Z] combination (a CameraSpacePoint); it indicates where the head of the person is located.
Projecting the 3D point to the 2D space is accomplished using Vitruvius’ ToPoint method. That method takes a Visualization enumeration as a parameter. To use the ToPoint method, you first need to import Vitruvius in your project:
using LightBuzz.Vitruvius;
2D Color Space (1920×1080)
This is how to convert the 3D point to a 2D point in the 1920×1080 Color Space:
var pointColor = position.ToPoint(Visualization.Color);
var left = pointColor.X;
var top = pointColor.Y;
2D Depth Space (512×424)
Similarly, you can convert the 3D point to a 2D point in the 512×424 Depth Space:
var pointDepth = position.ToPoint(Visualization.Depth);
var left = pointDepth.X;
var top = pointDepth.Y;
2D Infrared Space (512×424)
Converting to the Infrared Space is identical to the Depth Space:
var pointInfrared = position.ToPoint(Visualization.Infrared);
var left = pointInfrared.X;
var top = pointInfrared.Y;
If you are using Unity, there is one additional extension method that converts a 2D point to a 2D Vector:
var vector = position.ToPoint(Visualization.Color).ToVector();
Using a different Coordinate Mapper
In case you are using multiple Kinect sensors, you can still work with Vitruvius! You simply have to specify which sensor the method should use:
var pointColor = position.ToPoint(Visualization.Color, sensor1.CoordinateMapper);
var pointDepth = position.ToPoint(Visualization.Depth, sensor2.CoordinateMapper);
var pointInfrared = position.ToPoint(Visualization.Infrared, sensor3.CoordinateMapper);
This is it. You can now project any 3D point to any 2D space!
Actually, the Vitruvius extension methods can be applied to any point, not just body joints. For example, you can specify your own 3D point, like below:
var point3D = new CameraSpacePoint
{
    X = 0.8f,
    Y = 1.4f,
    Z = 3.2f
};
var point2D = point3D.ToPoint(Visualization.Color);
Be cautious, though: not every 3D point corresponds to a 2D point! Why? Because there may be nothing at that position. In our example, if there is nothing at 3.2 meters from the sensor, a dummy point with infinite coordinates will be generated. To avoid any confusion, remember to check whether the 2D points have valid X and Y values, like below:
if (!float.IsInfinity(point2D.X) && !float.IsInfinity(point2D.Y))
{
    // Do your magic.
}
// Otherwise, it's not a valid point.
The ToPoint method is a powerful weapon that will save you a ton of time.
You can access it by downloading Vitruvius.
Download Vitruvius
'Till the next time, keep Kinecting!
Hi, how can I make a trajectory path from the 3D points (X, Y, Z)?
For example, for a hand movement from one point (X, Y, Z) to another point (X, Y, Z)?
Hello. You can add a point object (e.g. an Ellipse) to your XAML canvas every time you have a new (X, Y, Z) value. This Ellipse should not be cleared when the canvas is updated. This way, you’ll be able to form a trace.
Below is my code for the ellipse, but it is moving, not making a tracing path:
Joint HandLeft = body.Joints[JointType.HandLeft];
DepthSpacePoint HandLeftPosition = sensor.CoordinateMapper.MapCameraPointToDepthSpace(HandLeft.Position);
Ellipse eps = new Ellipse()
{
    Width = 10,
    Height = 10,
    Fill = new SolidColorBrush(Color.FromArgb(255, 0, 255, 0))
};

// Add the ellipse to the XAML canvas (mycanvas).
mycanvas.Children.Add(eps);
Canvas.SetLeft(eps, HandLeftPosition.X);
Canvas.SetTop(eps, HandLeftPosition.Y);
Canvas.SetZIndex(eps, 100);
Are you getting any errors?
Thanks, I got it. Every time, I was clearing my canvas, so I got the trace by commenting out:
//bodyCanvas.Children.Clear();
Can we store these data in XML or SQLite?
Yes, you can. Store the data locally and, after you finish recording, connect to the database or XML file. Do not connect to a database or XML file while recording. Save frame-by-frame (e.g. in txt format) and load it after you’ve finished.
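Here is a minimal sketch of that idea, assuming you only need the head joint and a plain txt output (the class and file names are placeholders): positions are buffered in memory while recording and written to disk only once, after recording stops.
using System.Collections.Generic;
using System.IO;
using Microsoft.Kinect;

public class JointRecorder
{
    // One line of text per frame, kept in memory while recording.
    private readonly List<string> _frames = new List<string>();

    // Call this for every frame while recording (no file or database access here).
    public void RecordFrame(Body body)
    {
        var head = body.Joints[JointType.Head].Position;
        _frames.Add($"{head.X};{head.Y};{head.Z}");
    }

    // Call this once, after recording has finished.
    public void Save(string path)
    {
        File.WriteAllLines(path, _frames);
    }
}
You could then import the saved txt file into SQLite or transform it to XML in a separate step.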
Hi,
I need to calibrate depth tracking. Currently I’m only using the Kinect SDK (v2). Here’s my current setup:
I open the MultiSourceFrameReader and use EmguCV and the color frame to detect my canvas (a projected single color on a wall). Based on the output, I can create a Rectangle (x, y, width, height) that corresponds to the boundaries in the 1080p color image. My goal is to measure depth only within that canvas, using the DepthFrameReader (which has a lower resolution and a different aspect ratio).
I’m trying to achieve this using the CoordinateMapper, but my understanding of its intricacies is lacking and I’m not successful.
I would like to ask you if my goal can be achieved with the Vitruvius library, or if you otherwise have suggestions using only the Kinect SDK.
In my FrameArrived event I have the following so far:
– _coordinateMapper.MapColorFrameToCameraSpaceUsingIntPtr(depthFrameData.UnderlyingBuffer, depthFrameData.Size, _cameraPoints) (with _cameraPoints = new CameraSpacePoint[colorFrameDescription.Width * colorFrameDescription.Height])
– Later on in the code (and it’s a snippet I got from StackOverflow):
for (var y = 0; y < depthHeight; y++)
{
    for (var x = 0; x < depthWidth; x++)
    {
        var depthIndex = y * depthWidth + x;
        var cameraPoint = _cameraPoints[depthIndex];
        var depthXY = new Point(x, y);
        var cameraXY = new Point(cameraPoint.X, cameraPoint.Y);
    }
}
Am I on the right path to map a rectangle measured in the color frame to restrict the boundaries in the depth frame, or am I way off? I'm quite new to the world of Kinect. Thank you in advance for reading, and I hope you can help me along.
Hi,
I need to capture the 3D camera-space information of all the joints, convert it to the equivalent 2D color-space points, and record these values into a txt file for each frame captured. How do I do it?
You can use the CoordinateMapper method MapColorFrameToCameraSpace.
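If you only need the joints rather than the whole frame, a minimal per-point sketch could use CoordinateMapper.MapCameraPointToColorSpace instead; here, sensor, body, and the output file name are placeholders:
using System.IO;
using System.Text;
using Microsoft.Kinect;

// For one frame: write every joint's 3D position and its 2D color-space equivalent.
var builder = new StringBuilder();

foreach (var joint in body.Joints.Values)
{
    CameraSpacePoint p3D = joint.Position;
    ColorSpacePoint p2D = sensor.CoordinateMapper.MapCameraPointToColorSpace(p3D);

    builder.AppendLine($"{joint.JointType};{p3D.X};{p3D.Y};{p3D.Z};{p2D.X};{p2D.Y}");
}

File.AppendAllText("joints.txt", builder.ToString());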
Hey, how can I record skeleton joint coordinates for one minute (while moving the body parts) in a .txt or .csv file?
Hi Isuri. You can check this open-source project and modify it to fit your needs (e.g., by adding a timer).
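As a rough sketch of the timer idea (the class, file name, and CSV layout are assumptions): keep appending joint coordinates while a stopwatch is under 60 seconds, then write everything to a .csv file once.
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using Microsoft.Kinect;

public class TimedJointRecorder
{
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();
    private readonly List<string> _lines = new List<string> { "Joint,X,Y,Z" };
    private bool _saved;

    // Call from your BodyFrameReader's FrameArrived handler for the tracked body.
    public void Update(Body body)
    {
        if (_stopwatch.Elapsed.TotalSeconds >= 60.0)
        {
            if (!_saved)
            {
                File.WriteAllLines("joints.csv", _lines);
                _saved = true;
            }
            return;
        }

        foreach (var joint in body.Joints.Values)
        {
            var p = joint.Position;
            _lines.Add($"{joint.JointType},{p.X},{p.Y},{p.Z}");
        }
    }
}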