
Back in 2014, I had the opportunity to work with an amazing agency from Madrid on the Mundobasket Contest, held in Spain. My job was to develop a first-class Augmented Reality experience for Globalzepp and Liga Endesa, the official Mundobasket sponsor. The application was placed in Plaza del Callao, and more than 30,000 people had fun using it.

The concept was pretty simple: people stood in front of a Kinect sensor and played with a spinning ball, just like pros! The app recorded their movements and projected the result on a huge video wall. Back then, the app was developed using Kinect v2, C#, and XAML.

Today, I’ll show you how to develop a similar experience using Kinect and Unity, step-by-step! By my side today is Mike Miropoulos, Unity Developer @LightBuzz, who’ll guide us through the bits and bytes of Unity. Here’s the demo we are going to implement together:

Prerequisites

  • A Kinect for Xbox One (Kinect v2) sensor, with its adapter
  • The Kinect for Windows SDK 2.0
  • Unity
  • Visual Studio

Video & Source Code

As usual, I’m providing you with the complete source code, along with a demo video.

Building the User Interface

We are building our User Interface using the powerful Unity Editor. Launch Unity, create a new project, and specify its location. Then, you should see the Editor window.

Step 0 – Add the required plugins

To add Kinect functionality, we need to add the required Unity plugins. Microsoft provides some Unity packages you can use; however, they require you to import a lot of source-code files into your project. To avoid that, Mike and I compiled everything into an ordinary C# .dll file, named Windows.Kinect.dll, which can be imported into Unity’s Plugins directory.

You can download the complete Plugins folder from GitHub. No need to import any source-code files. Import the .dll, just like you’d do with a WPF or WinRT app.

Step 1 – Configure the Main Camera

Now that the plugins are in place, we can add the visual elements to our scene. Every Unity scene has its Main Camera and Directional Light elements. We’ll need the following objects, too:

  • A Quad – used to display the Color frames.
  • An FBX 3D model of the ball.
  • A .cs script file that will manage the whole functionality.

First of all, you’ll need to configure your Main Camera. The Camera should be positioned in front of the Color view, so set its Position property to [0, 0, -5]. You can also set the Background color property to black.

Kinect Basketball Unity 1

Step 2 – Add the Color View

Then, select Create → 3D Object → Quad. This will add a rectangle to our scene. We’ll assign a Texture to this rectangle that is updated every time a new Color frame becomes available.

Position your Quad at [0, 0, 0], in front of the camera. Set its Scale to [16, 9, 1], matching the 16:9 aspect ratio of the 1920×1080 color frame. Also, set its Z rotation to 180. Why? Because the Kinect SDK delivers the frames flipped, so we need to rotate our Quad to display them properly.

Kinect Basketball Unity 2

Step 3 – Add the Ball

We’ll now add the ball to our scene. The ball is an .fbx 3D model file. Simply import the model into your project and then drag-and-drop it within the scene. Set its initial position to [0, 0, 0].

Kinect Basketball Unity 3

Step 4 – Add the Script

Finally, we need to add a C# script file that will manage the logic behind the spinning ball. Name the script “BasketballSpinnerSample.cs” and move to the next step.

Kinect Basketball Unity 4
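
The script is a standard Unity MonoBehaviour. As a minimal sketch (we’ll fill in each method in the steps below), it looks like this:

```csharp
using UnityEngine;
using Windows.Kinect;

public class BasketballSpinnerSample : MonoBehaviour
{
    void Start()
    {
        // Initialize the sensor, the frame readers, and the texture.
    }

    void Update()
    {
        // Process the latest Color and Body frames.
    }

    void OnApplicationQuit()
    {
        // Dispose of the readers and close the sensor.
    }
}
```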

Adding functionality

Our user interface is now set, so let’s open the script file and add the desired functionality. First, you’ll need to add references to the UnityEngine, Windows.Kinect, and System.Linq namespaces (the latter is used later to query the body array).

using System.Linq;
using UnityEngine;
using Windows.Kinect;

Step 1 – Members

If you have been following this blog for a while, you already know the process. We need to add references to the KinectSensor and the frame readers. We need a color reader and a body reader. We also need a Texture and a byte array that will be used to display the color data. Finally, we need to add references to the visual elements (GameObjects). The scale and speed parameters represent the size and the rotation frequency of the ball, respectively.

// Kinect members.
private KinectSensor sensor;
private ColorFrameReader colorReader;
private BodyFrameReader bodyReader;
private Body[] bodies;

// Color frame display.
private Texture2D texture;
private byte[] pixels;
private int width;
private int height;

// Visual elements.
public GameObject quad;
public GameObject ball;

// Parameters.
public float scale = 2f;
public float speed = 10f;

Step 2 – Initialization

Let’s move to our Start method. Unity calls Start once, before the first frame update, when the script is enabled. We’ll use it to initialize the sensor, the array of bodies, and the texture.

void Start()
{
    sensor = KinectSensor.GetDefault();
    if (sensor != null)
    {
        // Initialize readers.
        bodyReader = sensor.BodyFrameSource.OpenReader();
        colorReader = sensor.ColorFrameSource.OpenReader();
        // Body frame data.
        bodies = new Body[sensor.BodyFrameSource.BodyCount];
        // Color frame data.
        width = sensor.ColorFrameSource.FrameDescription.Width;
        height = sensor.ColorFrameSource.FrameDescription.Height;
        pixels = new byte[width * height * 4];
        texture = new Texture2D(width, height, TextureFormat.RGBA32, false);
        // Assign the texture to the proper game object. Also, flip the texture horizontally (the color frame is mirrored).
        quad.GetComponent<Renderer>().sharedMaterial.mainTexture = texture;
        quad.GetComponent<Renderer>().sharedMaterial.SetTextureScale("_MainTex", new Vector2(-1, 1));
        sensor.Open();
    }
}

Step 3 – Display Color

It’s time to move to the Update method. The Update method is called by Unity for every rendered frame (typically 60 times per second). Keep in mind that Unity frames are not the same as Kinect frames: Kinect provides up to 30 frames per second. As a result, we have to check whether the color and body readers have new data available before handling the Kinect frames.
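
As a rough sketch (assuming the member names from Step 1), the Update method simply wraps the two frame checks that follow:

```csharp
void Update()
{
    // Unity calls Update once per rendered frame, while Kinect
    // provides up to 30 fps; AcquireLatestFrame returns null when
    // no new Kinect frame has arrived in the meantime.
    if (colorReader != null)
    {
        // Handle the latest Color frame (Step 3).
    }

    if (bodyReader != null)
    {
        // Handle the latest Body frame (Steps 4 to 6).
    }
}
```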

To refresh the texture, we need to copy the raw Color data to the byte array we declared and then load the byte array to the texture:

if (colorReader != null)
{
    using (var frame = colorReader.AcquireLatestFrame())
    {
        if (frame != null)
        {
            frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Rgba);
            texture.LoadRawTextureData(pixels);
            texture.Apply();
        }
    }
}

Step 4 – Detect the hands

In a similar way, we check for Body frames, too. When new Body data is available, we update the bodies array, grab the first tracked body, and detect its hands. To pick the hand closest to the sensor, we simply compare the Z values of the Position property (smaller Z means closer).

if (bodyReader != null)
{
    using (var frame = bodyReader.AcquireLatestFrame())
    {
        if (frame != null)
        {
            frame.GetAndRefreshBodyData(bodies);
            var body = bodies.Where(b => b.IsTracked).FirstOrDefault();
            if (body != null)
            {
                // Detect the hand (left or right) that is closest to the sensor.
                var handTipRight = body.Joints[JointType.HandTipRight].Position;
                var handTipLeft = body.Joints[JointType.HandTipLeft].Position;
                var closer = handTipRight.Z < handTipLeft.Z ? handTipRight : handTipLeft;
            }
        }
    }
}

Step 5 – Map the coordinates to Unity space

In the 3D space, Kinect joint positions are expressed in meters. However, the ball needs to follow the position of the hand as it appears in the 2D color frame (1920×1080). CoordinateMapper is a utility that converts 3D camera-space coordinates to 2D points, and vice-versa.

After converting the 3D point to the 2D space, we need to perform an additional mapping: converting the 2D point to Unity coordinates! Thankfully, that is pretty easy to accomplish:

// Map the 3D position of the hand to the 2D color frame (1920x1080).
var point = sensor.CoordinateMapper.MapCameraPointToColorSpace(closer);
var position = new Vector2(0f, 0f);
                        
if (!float.IsInfinity(point.X) && !float.IsInfinity(point.Y))
{
    position.x = point.X;
    position.y = point.Y;
}
// Map the 2D position to the Unity space.
var world = Camera.main.ViewportToWorldPoint(new Vector3(position.x / width, position.y / height, 0f));
var center = quad.GetComponent<Renderer>().bounds.center;

After typing that piece of code, we know where the closest hand is in Unity world coordinates. For example, a hand at pixel (960, 540), the center of the 1920×1080 frame, maps to viewport point (0.5, 0.5). The ball will follow the hand once we set its position in the next step.

Step 6 – Rotate the ball

Two things remain to be done:

  • Scaling the ball as the player moves closer to or further from the sensor.
  • Rotating the ball.

Unity and simple Vector Maths can do the job in 3 lines of code:

// Move and rotate the ball.
ball.transform.localScale = new Vector3(scale, scale, scale) / closer.Z;
ball.transform.position = new Vector3(world.x - 0.5f - center.x, -world.y + 0.5f, -1f);
ball.transform.Rotate(0f, speed, 0f);

Step 7 – Dispose

Finally, remember to release any resources and dispose of the sensor object when exiting the application. Unity automatically calls the OnApplicationQuit method when the application is about to quit, for example, when you click the Close button:

void OnApplicationQuit()
{
    if (bodyReader != null)
    {
        bodyReader.Dispose();
    }
    if (colorReader != null)
    {
        colorReader.Dispose();
    }
    if (sensor != null && sensor.IsOpen)
    {
        sensor.Close();
    }
}

Conclusion

So, this is it, folks! Hope you enjoyed this tutorial. Feel free to extend the source code, add your own effects, constraints, and functionality, and use it in your projects.

‘Till the next time, keep Kinecting!

PS: Vitruvius

This project is part of the Premium version of Vitruvius (original cost: $299). Vitruvius is a set of powerful Kinect extensions that will help you build stunning Kinect apps in minutes. Vitruvius includes avateering, HD Face, background removal, angle calculations, and more. Check it now.

Want to hire my team for your next cutting-edge app? Drop me an email.

Vangos Pterneas

Vangos Pterneas is a software engineer, book author, and award-winning Microsoft Most Valuable Professional (2014-2019). Since 2012, Vangos has been helping Fortune-500 companies and ambitious startups create demanding motion-tracking applications. He's obsessed with analyzing and modeling every aspect of human motion using AI and Maths. Vangos shares his passion by regularly publishing articles and open-source projects to help and inspire fellow developers.

4 Comments

  • Ajay Rathore says:

    Hello Mr Pterneas,

    I’ve been following your blog and tutorials for a while, and I must say they have been exceptionally good compared to other YouTube channels and tutorials.
    I’d like to ask you if it is possible to rotate the hologram in HoloLens with the help from Kinect when an object in real world is rotated?

    Just like this Basketball tutorial, I’d like to know if it is possible to take a real basketball in my hand and the hologram from hololens overlaps the ball and rotates exactly when I rotate the ball?

    It’d be very helpful for my project if you can guide me.

    Thank you.

    Regards,
    Ajay Rathore

    • Hi Ajay. Thank you very much for your comment. You can do this. There are 2 reference systems:

      The Kinect reference system has its [0,0,0] point at the Kinect sensor. The HoloLens reference system has its [0,0,0] point at the person wearing the headset.

      So, to position the ball, you need to find the distance between the person and the hands. This can be accomplished using Kinect. You’ll then assume the segment between the body and the hands has its [0,0,0] point at the human body.

      • Ajay Rathore says:

        Thank you for replying. I will start developing like you explained. I hope it will work.
        Cheers!
