NOTE: This is preliminary software and/or hardware and APIs are preliminary and subject to change.
Well, I have been lucky enough to be selected by Microsoft for early access to the new Kinect for Windows version 2 sensor. Today, I want to share some facts and figures about the new device.
The new sensor features a radically different hardware design. The first thing to notice is that the tilt motor is gone. On the other hand, the new cameras provide a wider field of view and capture frames at higher resolutions. Above, you can see my Developer Preview unit, unboxed. Below, you can watch a quick video I made demonstrating the new color, depth, infrared, and body streams.
And here are the differences between the old and the new sensor.
|              | Version 1   | Version 2   |
|--------------|-------------|-------------|
| Depth range  | 0.4m → 4.0m | 0.4m → 4.5m |
| Audio stream | 4-mic array | 4-mic array |
The new API is quite similar to the old one; however, projects developed using Kinect SDK v1 will not run as-is on the new sensor. Some class names and methods have changed, so a plain recompile won’t do the job. Do not worry, though. Throughout the upcoming series of blog posts, I will try to explain every difference and guide you step-by-step through creating your Kinect v2 projects.
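To give you a first taste of the renames, here is a minimal initialization sketch based on the preview API (remember, names are subject to change), with the v1 equivalents noted in comments:

```csharp
// Kinect for Windows SDK v2 (Developer Preview) — preliminary API.
using Microsoft.Kinect;

public class KinectStarter
{
    private KinectSensor _sensor;

    public void Start()
    {
        // v1: KinectSensor.KinectSensors[0] — v2 exposes a single default sensor.
        _sensor = KinectSensor.GetDefault();

        if (_sensor != null)
        {
            // v1: sensor.Start() — v2: sensor.Open().
            _sensor.Open();
        }
    }

    public void Stop()
    {
        if (_sensor != null)
        {
            // v1: sensor.Stop() — v2: sensor.Close().
            _sensor.Close();
        }
    }
}
```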
So, here are my highlights:
Better camera streams
Kinect combines a color, a depth, and an infrared camera at an affordable price. Version 1 had lower-specification cameras, even though the results were stunning. The second version of Kinect doubles the color camera resolution and almost doubles the depth camera resolution. The data are processed faster, and the accuracy is radically increased.
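In the preview API, each stream is exposed as a *source* on the sensor, and you open a *reader* on that source. Here is a sketch for the depth stream, assuming `sensor` is a `KinectSensor` you have already opened (again, preliminary API, subject to change):

```csharp
using Microsoft.Kinect;

// Open a reader on the depth source and handle incoming frames.
DepthFrameReader reader = sensor.DepthFrameSource.OpenReader();

reader.FrameArrived += (s, e) =>
{
    using (DepthFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            // The frame description carries the new, higher resolution.
            FrameDescription description = frame.FrameDescription;
            ushort[] depthData = new ushort[description.Width * description.Height];

            // Each value is a distance from the sensor, in millimeters.
            frame.CopyFrameDataToArray(depthData);
        }
    }
};
```

The color and infrared streams follow the same source/reader pattern, which makes switching between streams pleasantly uniform.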
Yeah, the new sensor tracks up to 25 body joints, along with their corresponding orientations. Some joints have been renamed as well.
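Accessing the joints follows the same source/reader pattern. A sketch, assuming `sensor` is an open `KinectSensor` (preview API, so details may change):

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.Kinect;

BodyFrameReader bodyReader = sensor.BodyFrameSource.OpenReader();
Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];

bodyReader.FrameArrived += (s, e) =>
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            // Fills the array, reusing Body objects between frames.
            frame.GetAndRefreshBodyData(bodies);

            foreach (Body body in bodies.Where(b => b != null && b.IsTracked))
            {
                // 25 joints, each with a 3D position and an orientation.
                foreach (KeyValuePair<JointType, Joint> joint in body.Joints)
                {
                    CameraSpacePoint position = joint.Value.Position;
                    // position.X, position.Y, position.Z are in meters.
                }
            }
        }
    }
};
```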
That’s right! Apart from joint tracking, the new sensor lets us determine the state of the users’ hands. The state is an enumeration with the values “Open”, “Closed”, “Lasso”, “Unknown”, and “NotTracked”. This is really useful for creating smoother natural user interfaces. For example, we can now determine that the user is performing a wave gesture when the hand joint is moving AND the hand state is “Open”. If you need more advanced finger tracking, consider Candescent NUI.
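That wave idea can be sketched like this. `WaveDetector`, its `Update` method, and the movement threshold are hypothetical names and values of mine, not part of the SDK; only `HandState` and `TrackingState` come from the preview API:

```csharp
using System;
using Microsoft.Kinect;

// Hypothetical helper: flag a wave when the right hand is open AND moving.
public class WaveDetector
{
    private float _previousX = float.NaN;
    private const float Threshold = 0.05f; // meters per frame (assumed value)

    public bool Update(Body body)
    {
        Joint hand = body.Joints[JointType.HandRight];
        bool isWaving = false;

        if (hand.TrackingState == TrackingState.Tracked &&
            body.HandRightState == HandState.Open)
        {
            float x = hand.Position.X;

            // Moving horizontally while open → treat it as a wave.
            if (!float.IsNaN(_previousX) && Math.Abs(x - _previousX) > Threshold)
            {
                isWaving = true;
            }

            _previousX = x;
        }
        else
        {
            _previousX = float.NaN;
        }

        return isWaving;
    }
}
```

Call `Update` with each tracked `Body` from your `BodyFrameReader` handler; in a real app you would smooth the signal over a few frames rather than trust a single delta.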
Kinect for Windows version 1 could track 40 points of the human face. Kinect for Windows version 2 goes one step further and can even recognize some very basic facial expressions, activities, and accessories! Here are the supported facial data:
- Eyes closed
- Eyes looking away
- Mouth open
- Mouth moved
- Glasses accessory
- Happy expression
- Neutral expression
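As a sketch of how these are surfaced: the face API lives in a separate Microsoft.Kinect.Face assembly, where each item above is a `FaceProperty` reported as a `DetectionResult` (Yes, No, Maybe, Unknown). Keep in mind this is based on the preview bits, so names may well differ, and `sensor` is assumed to be an open `KinectSensor`:

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

// Request only the features we care about.
FaceFrameSource faceSource = new FaceFrameSource(sensor, 0,
    FaceFrameFeatures.LeftEyeClosed |
    FaceFrameFeatures.RightEyeClosed |
    FaceFrameFeatures.LookingAway |
    FaceFrameFeatures.MouthOpen |
    FaceFrameFeatures.MouthMoved |
    FaceFrameFeatures.Glasses |
    FaceFrameFeatures.Happy);

FaceFrameReader faceReader = faceSource.OpenReader();

faceReader.FrameArrived += (s, e) =>
{
    using (FaceFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null && frame.FaceFrameResult != null)
        {
            // Each property is a DetectionResult: Yes, No, Maybe, Unknown.
            DetectionResult happy =
                frame.FaceFrameResult.FaceProperties[FaceProperty.Happy];
            DetectionResult eyeClosed =
                frame.FaceFrameResult.FaceProperties[FaceProperty.LeftEyeClosed];
        }
    }
};
```

The face source follows a specific body: assign `faceSource.TrackingId` from a tracked `Body.TrackingId` so the face results correspond to the right person.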
Potential? Unlimited! Personally, I control the lights of my house using gestures in front of my Kinect sensor. Now, I can just blink my eye and the living room spots turn on! (More on this in a later blog post.)
Let me clarify that I am not paid by Microsoft, though Kinect for Windows version 2 is my gadget of choice for 2014. Everything has been dramatically improved, and new features will be popping up all the time. Now, the only limit of your software is your imagination.
If you enjoyed this post, consider checking out Vitruvius. Vitruvius is a set of powerful Kinect extensions that will help you build stunning Kinect apps in minutes. It includes avateering, HD Face, background removal, angle calculations, and more. Check it out now.