What if your computer could detect your eyes, nose, and mouth? What if an application could understand your facial expressions? What if you could build such applications with minimal effort? Until now, if you wanted to create an accurate mechanism for annotating real-time facial characteristics, you had to play with OpenCV and spend a ton of time experimenting with various algorithms and advanced machine vision concepts.
Luckily for us, here comes Kinect for Windows version 2 to save the day.
One of the most exciting features of Kinect 2 is the new and drastically improved Face API. Using this new API, we’ll create a simple Windows application that will understand people’s expressions. Watch the following video to see what we’ll develop:
Read on for the tutorial.
Note: Kinect provides two ways to access facial characteristics: the Face Basics API and the HD Face API. The first lets us access the most common features, such as the positions of the eyes, nose, and mouth, as well as the facial expressions. HD Face, on the other hand, lets us access a richer and more complex collection of facial points. We'll examine Face Basics in this article and HD Face in the next blog post.
Face Basics features
Here are the main features of the Face Basics API:
- Detection of facial points in the 2D space:
  - Left and right eyes
  - Nose
  - Mouth
  - Head rectangle
- Detection of expressions:
  - Happy
  - Left/right eye open
  - Left/right eye closed
  - Engagement
  - Looking away
- Detection of accessories:
  - Glasses
Prerequisites
To create, build, and run this app, you’ll need the following:
- Kinect for Windows v2 sensor (or Kinect for XBOX v2 sensor with an adapter)
- Kinect for Windows v2 SDK
- Windows 8.1
- A free USB 3 port
Creating a basic Face app in 7 steps
The application we’ll develop in this short tutorial will highlight the face points and will detect whether the user has her eyes and mouth open or closed. The code applies to both WPF and WinRT.
Step 1: XAML
The XAML code is fairly easy. We simply define a canvas and a few ellipses that will represent the face points. So, launch Visual Studio, create a new project, and modify your XAML like this:
<Viewbox>
    <Grid Width="1920" Height="1080">
        <Image Name="camera" />
        <Canvas Name="canvas">
            <Ellipse Name="ellipseEyeLeft" Style="{StaticResource EyeStyle}" />
            <Ellipse Name="ellipseEyeRight" Style="{StaticResource EyeStyle}" />
            <Ellipse Name="ellipseNose" Style="{StaticResource NoseStyle}" />
            <Ellipse Name="ellipseMouth" Style="{StaticResource MouthStyle}" />
        </Canvas>
    </Grid>
</Viewbox>
The styles of the ellipses are included in the App.xaml file.
Step 2: Add the required references
So far, so good! Now, navigate to the Solution Explorer and right-click the References icon. Select “Add Reference”, then “Windows 8.1”, and then “Extensions”. Check the Microsoft.Kinect and Microsoft.Kinect.Face assemblies. If you are using WinRT, the Microsoft.Kinect assembly is called WindowsPreview.Kinect. (The “Windows 8.1 Extensions” node only appears for Windows Store projects; in a WPF project, browse for the assemblies under C:\Program Files\Microsoft SDKs\Kinect\v2.0_1409\Redist instead.)
Step 3: Declare the Kinect face objects
After typing the XAML code, open the corresponding C# file (MainWindow.xaml.cs or MainPage.xaml.cs). First, you’ll need to import the Kinect namespaces.
For .NET/WPF, it is:
using Microsoft.Kinect;
For WinRT, it is:
using WindowsPreview.Kinect;
This will provide us with the core Kinect functionality, but no face capabilities. In both WPF and WinRT, the face features are included in the namespace:
using Microsoft.Kinect.Face;
We can now declare the necessary objects. Similarly to the color, depth, infrared, and body streams, Kinect also includes a face frame source and a face frame reader class:
// The sensor object.
KinectSensor _sensor = null;
// The color frame reader is used to display the RGB stream
ColorFrameReader _colorReader = null;
// The body frame reader is used to identify the bodies
BodyFrameReader _bodyReader = null;
// The list of bodies identified by the sensor
IList<Body> _bodies = null;
// The face frame source
FaceFrameSource _faceSource = null;
// The face frame reader
FaceFrameReader _faceReader = null;
Step 4: Initialize Kinect and handle the events
Navigate to the .xaml.cs file and place the following code in the constructor, just below the InitializeComponent method. The _colorReader is used to display the RGB stream (refer to my previous article about the different streams). The body reader is used to acquire the body data. We need the body data, since each face corresponds to a specific body instance.
Initialization of the color and body readers is straightforward. Initializing the face reader is a little trickier, though. You need to explicitly state which face features you expect; unless you feed the source a set of features, it will give you nothing! For our example, I have specified eight features (bounding box in color space, glasses, closed eyes, mouth open, etc.). You can add or remove features from the FaceFrameFeatures enumeration.
Finally, just like the color and body readers, remember to handle the FrameArrived event!
_sensor = KinectSensor.GetDefault();

if (_sensor != null)
{
    _sensor.Open();

    _bodies = new Body[_sensor.BodyFrameSource.BodyCount];

    _colorReader = _sensor.ColorFrameSource.OpenReader();
    _colorReader.FrameArrived += ColorReader_FrameArrived;

    _bodyReader = _sensor.BodyFrameSource.OpenReader();
    _bodyReader.FrameArrived += BodyReader_FrameArrived;

    // Initialize the face source with the desired features.
    _faceSource = new FaceFrameSource(_sensor, 0, FaceFrameFeatures.BoundingBoxInColorSpace |
                                                  FaceFrameFeatures.FaceEngagement |
                                                  FaceFrameFeatures.Glasses |
                                                  FaceFrameFeatures.Happy |
                                                  FaceFrameFeatures.LeftEyeClosed |
                                                  FaceFrameFeatures.MouthOpen |
                                                  FaceFrameFeatures.PointsInColorSpace |
                                                  FaceFrameFeatures.RightEyeClosed);

    _faceReader = _faceSource.OpenReader();
    _faceReader.FrameArrived += FaceReader_FrameArrived;
}
Step 5: Connect the face with the body
A face object is related to a corresponding body object (obviously). So, the face source should be updated with the tracking ID of the body. The following code detects the default body (if any) and assigns its unique tracking identifier to the face frame source.
void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            frame.GetAndRefreshBodyData(_bodies);

            Body body = _bodies.Where(b => b.IsTracked).FirstOrDefault();

            if (!_faceSource.IsTrackingIdValid)
            {
                if (body != null)
                {
                    // Assign a tracking ID to the face source.
                    _faceSource.TrackingId = body.TrackingId;
                }
            }
        }
    }
}
You can add additional functionality (such as gesture tracking) if necessary.
Step 6: Inside the Face FrameArrived event handler
Now it's time for the best part. Since the face source is connected with the body, we can specify what happens when a face frame is available. Face frames work exactly like the color, depth, infrared, and body frames: first, you get a reference to the frame; then, you acquire the frame; and, if the frame is not empty, you grab the FaceFrameResult object. The FaceFrameResult object encapsulates all of the available facial information.
void FaceReader_FrameArrived(object sender, FaceFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            // Get the face frame result.
            FaceFrameResult result = frame.FaceFrameResult;

            if (result != null)
            {
                // Get the face points, mapped in the color space.
                var eyeLeft = result.FacePointsInColorSpace[FacePointType.EyeLeft];
                var eyeRight = result.FacePointsInColorSpace[FacePointType.EyeRight];
                var nose = result.FacePointsInColorSpace[FacePointType.Nose];
                var mouthLeft = result.FacePointsInColorSpace[FacePointType.MouthCornerLeft];
                var mouthRight = result.FacePointsInColorSpace[FacePointType.MouthCornerRight];

                // Get the face characteristics.
                var eyeLeftClosed = result.FaceProperties[FaceProperty.LeftEyeClosed];
                var eyeRightClosed = result.FaceProperties[FaceProperty.RightEyeClosed];
                var mouthOpen = result.FaceProperties[FaceProperty.MouthOpen];
            }
        }
    }
}
Although the above code is self-explanatory, there are two points of interest:
- FacePointsInColorSpace is a collection of facial points (X, Y values), projected to the 2D color space. Face Basics API does not provide depth (Z) values for the eyes or nose (more on this on the next blog post).
- FaceProperties is a collection of the detected expressions. Each property has a value from the DetectionResult enumeration (Yes, No, Maybe, or Unknown).
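For instance, each property can be checked against the DetectionResult values. A small illustration (the full handling appears in Step 7 below):
switch (result.FaceProperties[FaceProperty.Happy])
{
    case DetectionResult.Yes:
    case DetectionResult.Maybe:
        // Treat "Maybe" as a positive match, like we do in Step 7.
        break;
    case DetectionResult.No:
    case DetectionResult.Unknown:
        break;
}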
Step 7: Drawing the UI
The last step is simple UI drawing. We simply position the ellipses at the X, Y positions of the eyes and nose. The size of the mouth ellipse changes according to the user's expressions.
// Position the canvas UI elements
Canvas.SetLeft(ellipseEyeLeft, eyeLeft.X - ellipseEyeLeft.Width / 2.0);
Canvas.SetTop(ellipseEyeLeft, eyeLeft.Y - ellipseEyeLeft.Height / 2.0);
Canvas.SetLeft(ellipseEyeRight, eyeRight.X - ellipseEyeRight.Width / 2.0);
Canvas.SetTop(ellipseEyeRight, eyeRight.Y - ellipseEyeRight.Height / 2.0);
Canvas.SetLeft(ellipseNose, nose.X - ellipseNose.Width / 2.0);
Canvas.SetTop(ellipseNose, nose.Y - ellipseNose.Height / 2.0);
Canvas.SetLeft(ellipseMouth, ((mouthRight.X + mouthLeft.X) / 2.0) - ellipseMouth.Width / 2.0);
Canvas.SetTop(ellipseMouth, ((mouthRight.Y + mouthLeft.Y) / 2.0) - ellipseMouth.Height / 2.0);
ellipseMouth.Width = Math.Abs(mouthRight.X - mouthLeft.X);
To make our project more engaging, we can hide an eye ellipse when the eye is closed. Moreover, we can increase the height of the mouth ellipse when the mouth is open, and decrease it when the mouth is closed.
// Display or hide the ellipses.
if (eyeLeftClosed == DetectionResult.Yes || eyeLeftClosed == DetectionResult.Maybe)
{
    ellipseEyeLeft.Visibility = Visibility.Collapsed;
}
else
{
    ellipseEyeLeft.Visibility = Visibility.Visible;
}

if (eyeRightClosed == DetectionResult.Yes || eyeRightClosed == DetectionResult.Maybe)
{
    ellipseEyeRight.Visibility = Visibility.Collapsed;
}
else
{
    ellipseEyeRight.Visibility = Visibility.Visible;
}

if (mouthOpen == DetectionResult.Yes || mouthOpen == DetectionResult.Maybe)
{
    ellipseMouth.Height = 50.0;
}
else
{
    ellipseMouth.Height = 20.0;
}
Quite simple, right? You can now create awesome Kinect applications using a powerful and accurate API in a few lines of code!
Notes
If you are using .NET/WPF for your project, you’ll also need to add the following line under Project → Properties → Build Events → Post-build event command line. This command copies the necessary face-tracking database (NuiDatabase) files to the output folder.
xcopy "C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0\ExtensionSDKs\Microsoft.Kinect.Face\2.0\Redist\CommonConfiguration\x64\NuiDatabase" "NuiDatabase" /e /y /i /r
If you are using WinRT, remember to open the Package.appxmanifest file, click Capabilities, and check the Microphone and Webcam capabilities. This will give your app permissions to use Kinect for Windows.
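For reference, here is roughly what those checkboxes add to the manifest XML (a sketch; the manifest designer generates this for you):
<!-- Package.appxmanifest: device capabilities required to access Kinect. -->
<Capabilities>
  <DeviceCapability Name="microphone" />
  <DeviceCapability Name="webcam" />
</Capabilities>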
Next week: we’ll dive deeper into the Kinect Face API with Kinect Face HD. Stay tuned!
PS: Vitruvius
If you enjoyed this article, then you’ll love Vitruvius. Vitruvius is a set of powerful Kinect extensions that will help you build stunning Kinect apps in minutes. Vitruvius includes avateering, HD Face, background removal, angle calculations, and more. Check it now.
Want to hire my team for your next cutting-edge fitness app? Drop me an email.
Hello…
I can't use the KinectStatus property in v2.
How can I use the KinectStatus property?
For example:
public class KinectStatusItem
{
    public string Id { get; set; }
    public KinectStatus Status { get; set; }
    public DateTime DateTime { get; set; }
}
Hi Achal. There is no KinectStatus property attached to the KinectSensor class in v2. There is a better mechanism: whenever you need to check the availability of the device, simply call the IsAvailable property. It’s just a boolean flag, indicating whether everything is fine with the sensor. Kinect notifies you about any state changes by firing the IsAvailableChanged event. Both the IsAvailable property and the IsAvailableChanged event are members of the KinectSensor class.
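A minimal sketch of that pattern, assuming the standard v2 API:
_sensor = KinectSensor.GetDefault();
_sensor.IsAvailableChanged += Sensor_IsAvailableChanged;
_sensor.Open();

void Sensor_IsAvailableChanged(object sender, IsAvailableChangedEventArgs e)
{
    // e.IsAvailable is true when the sensor is connected and operational.
    Console.WriteLine(e.IsAvailable ? "Sensor available" : "Sensor not available");
}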
how do I fix this error
http://tinypic.com/r/51ezkg/8
Oops, my bad. But after building and running this, all shapes just stay in the top left corner. Any idea why?
Hi Mario. This is probably because your face is not being recognized. Could you place a breakpoint in the FaceReader_FrameArrived method? Is the breakpoint hit? If yes, is it hit for every frame or just once?
Also, ensure that there is sufficient lighting in the room. Face detection is affected by the light 😉
Hmm, will try that. This is what my debug output looks like:
The thread 0x1454 has exited with code 259 (0x103).
‘Kinecting for Windows – Expressions.vshost.exe’ (CLR v4.0.30319: Kinecting for Windows – Expressions.vshost.exe): Loaded ‘C:\Users\Mariano\Downloads\G2KExpressions-master (1)\G2KExpressions-master\G2KExpressions-master\K4W.Expressions\bin\Debug\Kinecting for Windows – Expressions.exe’. Symbols loaded.
‘Kinecting for Windows – Expressions.vshost.exe’ (CLR v4.0.30319: Kinecting for Windows – Expressions.vshost.exe): Loaded ‘C:\Windows\Microsoft.Net\assembly\GAC_MSIL\System.Configuration\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.Configuration.dll’. Cannot find or open the PDB file.
‘Kinecting for Windows – Expressions.vshost.exe’ (CLR v4.0.30319: Kinecting for Windows – Expressions.vshost.exe): Loaded ‘C:\Windows\Microsoft.Net\assembly\GAC_MSIL\PresentationFramework.Aero2\v4.0_4.0.0.0__31bf3856ad364e35\PresentationFramework.Aero2.dll’. Cannot find or open the PDB file.
‘Kinecting for Windows – Expressions.vshost.exe’ (CLR v4.0.30319: Kinecting for Windows – Expressions.vshost.exe): Loaded ‘C:\Windows\Microsoft.Net\assembly\GAC_MSIL\PresentationFramework-SystemXmlLinq\v4.0_4.0.0.0__b77a5c561934e089\PresentationFramework-SystemXmlLinq.dll’. Cannot find or open the PDB file.
‘Kinecting for Windows – Expressions.vshost.exe’ (CLR v4.0.30319: Kinecting for Windows – Expressions.vshost.exe): Loaded ‘C:\Windows\Microsoft.Net\assembly\GAC_MSIL\PresentationFramework-SystemXml\v4.0_4.0.0.0__b77a5c561934e089\PresentationFramework-SystemXml.dll’. Cannot find or open the PDB file.
‘Kinecting for Windows – Expressions.vshost.exe’ (CLR v4.0.30319: Kinecting for Windows – Expressions.vshost.exe): Loaded ‘C:\Windows\Microsoft.Net\assembly\GAC_MSIL\UIAutomationTypes\v4.0_4.0.0.0__31bf3856ad364e35\UIAutomationTypes.dll’. Cannot find or open the PDB file.
‘Kinecting for Windows – Expressions.vshost.exe’ (CLR v4.0.30319: Kinecting for Windows – Expressions.vshost.exe): Loaded ‘C:\Users\Mariano\Downloads\G2KExpressions-master (1)\G2KExpressions-master\G2KExpressions-master\K4W.Expressions\bin\Debug\Microsoft.Kinect.Face.dll’. Cannot find or open the PDB file.
The thread 0x146c has exited with code 259 (0x103).
Seems like a missing reference, but you’d better contact the author of the sample you are trying to run.
Hi Vangos,
Great tutorial! Thanks so much!
How would you go about having more than one user being tracked at the same time? I tried having several FaceFrameSources and FaceFrameReaders with no luck…
Thanks again!
Hi Pol. Check this sample code and let me know if you are still facing any issues. I have used two face frame sources and two face frame readers. You can extend this solution to support even more.
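The idea, in sketch form (not the exact linked sample), is one face source and reader per possible tracked body:
// One face source and one face reader per possible body.
FaceFrameSource[] _faceSources;
FaceFrameReader[] _faceReaders;

// In the constructor, after opening the sensor:
int count = _sensor.BodyFrameSource.BodyCount; // 6 for Kinect v2
_faceSources = new FaceFrameSource[count];
_faceReaders = new FaceFrameReader[count];

for (int i = 0; i < count; i++)
{
    _faceSources[i] = new FaceFrameSource(_sensor, 0, FaceFrameFeatures.PointsInColorSpace);
    _faceReaders[i] = _faceSources[i].OpenReader();
    _faceReaders[i].FrameArrived += FaceReader_FrameArrived;
}

// In BodyReader_FrameArrived, pair each tracked body with a face source
// (requires using System.Linq for the Where extension method):
int index = 0;
foreach (var body in _bodies.Where(b => b != null && b.IsTracked))
{
    if (index >= _faceSources.Length) break;
    if (!_faceSources[index].IsTrackingIdValid)
    {
        _faceSources[index].TrackingId = body.TrackingId;
    }
    index++;
}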
That worked, thanks!
Hi Vangos,
Firstly, thank you for everything; I follow your work and it helps me every time.
I have a new problem: I want to do background subtraction with Kinect v2. Is it possible with C++?
Regards..
Ozan
Sure! Launch the SDK Browser, click the C++ Samples and select the “Coordinate Mapping Basics-D2D” example. All the code you need is there.
Cheers,
Vangos
Hi,
Sorry for my rough English; I'm French. Firstly, good job on this.
I have just one question: I reduced my “camera” image object and moved it to the left of my window (to put option buttons on the right). But I don't know how to adapt the ellipse positions to my camera?
Regards.
Alban L
Hi Alban. You’ll need to move or resize your Canvas proportionally to the Image. I suggest you put the canvas and image into a viewbox element. This way, whenever the viewbox is resized, the canvas and image will resize automatically.
I had tried to resize the canvas and image objects without success, but resizing the viewbox object works perfectly!
Thank You very much!
Regards
Hello
I am trying to apply this code in a WPF application,
and I don't know what's causing the error here:
public void Update(ColorFrame frame, ColorImageFormat format)
{
    if (Bitmap == null)
    {
        _width = frame.FrameDescription.Width;
        _height = frame.FrameDescription.Height;
        _pixels = new byte[_width * _height * 4];

        Bitmap = new WriteableBitmap(_width, _height, 96.0, 96.0, PixelFormats.Bgr32, null);
        _stream = Bitmap.PixelBuffer.AsStream();
    }

    if (frame.RawColorImageFormat == ColorImageFormat.Bgra)
    {
        frame.CopyRawFrameDataToArray(_pixels);
    }
    else
    {
        frame.CopyConvertedFrameDataToArray(_pixels, format);
    }

    _stream.Seek(0, SeekOrigin.Begin);
    _stream.Write(_pixels, 0, _pixels.Length);

    Bitmap.Invalidate();
}
The code for Kinect v2 Santa Claus Augmented Reality
Hi Mar. You are using WinRT code in a WPF application. This is how you can display the color stream in WPF.
Hope this example helps you.
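For completeness, a WPF-flavored sketch of the same Update method. WriteableBitmap in WPF has no PixelBuffer property, so WritePixels is used instead; the private field names are assumptions:
public void Update(ColorFrame frame, ColorImageFormat format)
{
    if (_bitmap == null)
    {
        _width = frame.FrameDescription.Width;
        _height = frame.FrameDescription.Height;
        _pixels = new byte[_width * _height * 4];
        _bitmap = new WriteableBitmap(_width, _height, 96.0, 96.0, PixelFormats.Bgr32, null);
    }

    if (frame.RawColorImageFormat == ColorImageFormat.Bgra)
    {
        frame.CopyRawFrameDataToArray(_pixels);
    }
    else
    {
        frame.CopyConvertedFrameDataToArray(_pixels, format);
    }

    // Copy the byte array straight into the bitmap's back buffer.
    _bitmap.WritePixels(new Int32Rect(0, 0, _width, _height), _pixels, _width * 4, 0);
}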
That worked, thanks ^_^
Hi,
Just wanted to know: can we use this same code with a webcam? Sorry if it's a noob question.
Hi Sachin. This code works using a Microsoft Kinect sensor. You cannot use it with an ordinary webcam. You could use OpenCV or EmguCV to achieve something similar using a webcam.
Hi Vangos
I am trying to use this code as an addition to the GreenScreen (background removal) code you created.
For some reason, the circles are not drawn exactly where they should be; they are shifted a bit to the right or to the left depending on where I am standing.
I think it is something related to the coordinate mapper.
Any idea?
Hi George. You just need to be in front of the sensor. The accuracy is better when you are in front view. It slightly decreases as you rotate your head.
Hi
Well, it is not a problem of face detection; the Kinect sensor is already detecting my face, but it is drawing the circles for the eyes, nose, and mouth in the wrong place. The eye circle is drawn next to my eye, not on it, and the same goes for the nose and mouth. When I move left and right in front of the sensor, the circles sometimes get further from or closer to where my real eyes/nose/mouth are.
It is very similar to the coordinate mapper example you have on your blog, where the body joints need mapping to appear in the right place, but this time it's the face and not the body joints.
Hi George. You can use Coordinate Mapper with Face HD. This is the source code for Face HD. A tutorial about Kinect Face HD is coming soon 🙂
Happy birthday 🙂 cheers
Thank you Abhishek! I really appreciate your comments. Apologies for my late reply. I’ve been extremely busy during the past few days.
I've used your face code because I find it the most understandable on the internet, and I implemented a 'pause/play' button on top of it. When the 'pause' button is pressed, I dispose the colorReader and set it to null; this holds the image at the output window perfectly, but it still continues to take face input. When I additionally dispose the faceReader and set it to null, it successfully stops taking input. But when the 'play' button is pressed (which should restart the input from Kinect), it shows the color image properly but doesn't take any input from Kinect. Can you please tell me another way to implement this?
private void pause(object sender, RoutedEventArgs e)
{
    if (_colorReader != null)
    {
        _colorReader.Dispose();
        _colorReader = null;
    }

    if (_faceReader != null)
    {
        _faceReader.Dispose();
        _faceReader = null;
    }
}
private void play(object sender, RoutedEventArgs e)
{
    _sensor = KinectSensor.GetDefault();

    if (_sensor != null)
    {
        NoKinect.Visibility = Visibility.Hidden;

        _sensor.Open();

        _bodies = new Body[_sensor.BodyFrameSource.BodyCount];

        _colorReader = _sensor.ColorFrameSource.OpenReader();
        _colorReader.FrameArrived += ColorReader_FrameArrived;

        _bodyReader = _sensor.BodyFrameSource.OpenReader();
        _bodyReader.FrameArrived += BodyReader_FrameArrived;

        _faceSource = new FaceFrameSource(_sensor, 0, FaceFrameFeatures.BoundingBoxInColorSpace |
                                                      FaceFrameFeatures.Happy |
                                                      FaceFrameFeatures.LeftEyeClosed |
                                                      FaceFrameFeatures.MouthOpen |
                                                      FaceFrameFeatures.FaceEngagement |
                                                      FaceFrameFeatures.LookingAway |
                                                      FaceFrameFeatures.PointsInColorSpace |
                                                      FaceFrameFeatures.RightEyeClosed);

        _faceReader = _faceSource.OpenReader();
        _faceReader.FrameArrived += FaceReader_FrameArrived;
    }
}
Hi Beaya,
Thanks for your comment. I think the solution is simple:
1) Place the initialization code in the constructor or the WindowLoaded method. That is, everything you now have in your Play method should be moved outside of it:
public MainWindow()
{
    InitializeComponent();

    _sensor = KinectSensor.GetDefault();

    if (_sensor != null)
    {
        NoKinect.Visibility = Visibility.Hidden;

        _sensor.Open();

        _bodies = new Body[_sensor.BodyFrameSource.BodyCount];

        _colorReader = _sensor.ColorFrameSource.OpenReader();
        _bodyReader = _sensor.BodyFrameSource.OpenReader();

        _faceSource = new FaceFrameSource(_sensor, 0, FaceFrameFeatures.BoundingBoxInColorSpace |
                                                      FaceFrameFeatures.Happy |
                                                      FaceFrameFeatures.LeftEyeClosed |
                                                      FaceFrameFeatures.MouthOpen |
                                                      FaceFrameFeatures.FaceEngagement |
                                                      FaceFrameFeatures.LookingAway |
                                                      FaceFrameFeatures.PointsInColorSpace |
                                                      FaceFrameFeatures.RightEyeClosed);

        _faceReader = _faceSource.OpenReader();
    }
}
2) Play method should include the following:
private void play(object sender, RoutedEventArgs e)
{
    _colorReader.FrameArrived += ColorReader_FrameArrived;
    _bodyReader.FrameArrived += BodyReader_FrameArrived;
    _faceReader.FrameArrived += FaceReader_FrameArrived;
}
3) Pause method should simply unsubscribe from the events:
private void pause(object sender, RoutedEventArgs e)
{
    _colorReader.FrameArrived -= ColorReader_FrameArrived;
    _bodyReader.FrameArrived -= BodyReader_FrameArrived;
    _faceReader.FrameArrived -= FaceReader_FrameArrived;
}
Hope this helped you 🙂
Hi,
I'm trying to run the RT app on my Surface Pro 2 with the Windows 10 OS. I keep getting this error:
“Could not find SDK “Microsoft.VCLibs, version=12.0”
I’m using VS2015 btw.
Hi Bryan. You should probably update VS libraries to the latest version. I’ll publish detailed posts about Kinect and Windows 10 when they are officially released.
Hello, I have a question: when I get a facial point with Kinect v1 using the Get3DShape method, are those coordinates in the real world, relative to the origin of the sensor? Does the same occur with the skeleton joint positions?
Thanks in advance.
Hi Med. The 3D coordinates are in the world-space, measured in meters or centimeters. Kinect v2 uses CameraSpacePoint. Kinect v1 uses SkeletonPoint. These are all measured in the 3D space.
Hello, I have a question , what is the best way to integrate Kinect with the gender and age range recognition?
Taking the frame received and passing it to another sdk or is there a better solution from the Kinect SDK?
Hi Leo. Kinect SDK does not have any built-in functions for gender and age recognition. You’ll need to load the byte[] of the color frame and feed it to an external service.
Hi Vangos,
Great tutorial you have here. Really helpful for me to start programming with Kinect v2.
Just a question from me: I can't run the code because of this error:
_colorReader.FrameArrived += ColorReader_FrameArrived; – ‘the name ” ColorReader_FrameArrived” does not exist in the current context’
Could you help me figure out what's really happening here?
I'd really appreciate it. Thank you.
Hi Faiz. The error means that you have not declared the ColorReader_FrameArrived method. You can find the method here. Hope that helped you.
Hi Vangos,
Thanks so much for the reply. I already fixed the error but somehow got another one, which is:
” ‘Colorframe’ does not contain a definition for ‘ToBitmap’ and no extension method ‘ToBitmap’ accepting a first argument of type ‘Colorframe’ could be found.”
I tried to figure it out and fix it, but somehow didn't manage to.
Would be great if you could show how to fix it.
Thank you.
Hi Faiz. You’ll need to include the ColorBitmapGenerator class into your project. It should work fine now.
Hi Vangos,
It worked well for me. Thanks for the guide.
Just as an added feature, may I know if I can add face recognition to the app?
Have you tried this before?
It’s a little complicated, but you can compare the relative distances between specific face points using HD Face.
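A rough sketch of the idea, assuming an HD Face pipeline (FaceModel and FaceAlignment) is already set up; the enum member names below are assumed from the SDK's HighDetailFacePoints enumeration:
// Compute the live 3D vertices for the current face alignment.
var vertices = _faceModel.CalculateVerticesForAlignment(_faceAlignment);

CameraSpacePoint noseTip = vertices[(int)HighDetailFacePoints.NoseTip];
CameraSpacePoint eyeCorner = vertices[(int)HighDetailFacePoints.LefteyeInnercorner];

// Euclidean distance in meters; a vector of such pairwise distances
// forms a crude signature you can compare between sessions.
double dx = noseTip.X - eyeCorner.X;
double dy = noseTip.Y - eyeCorner.Y;
double dz = noseTip.Z - eyeCorner.Z;
double distance = Math.Sqrt(dx * dx + dy * dy + dz * dz);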
Hello Vangos! how are you?
Is it possible to set the frame rate of the Kinect?
We need to reduce 30 FPS to 1 FPS. Is there a method like setFrameRate or something like that?
Thanks in advanced
Leo.
Hi Leo. Thanks for your comment. You cannot control the number of frames Kinect will generate. You can only use or capture the ones you need. The SDK has no built-in method to reduce or increase the frame rate.
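If you only need roughly one frame per second, you can simply skip frames inside the event handler. A minimal sketch (_lastCapture is a hypothetical DateTime field):
void ColorReader_FrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    // Ignore everything that arrives less than a second after the last capture.
    if ((DateTime.Now - _lastCapture).TotalSeconds < 1.0) return;

    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            _lastCapture = DateTime.Now;
            // Process the frame here.
        }
    }
}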
Hi Vangos, nice work; I appreciate your work with Kinect v2. Bro, I have one question: I'm a student doing my final year project with Kinect v2. Can I track eyes with the help of Kinect for Windows v2?
Second, I want to control the mouse with the nose, in C#, in FaceBasics. Could you please guide me a little bit? 🙂 Thanks in advance…
Hi Abdul. You can check Kinect HD Face. There is an enumeration that lets you access the most common face properties. If you need a higher-level API, I suggest you try Vitruvius. Vitruvius provides a Face class with properties including EyeLeft, EyeRight, Mouth, Nose, Jaw, etc.
Hope that helped you 🙂
Hello again Vangos. I have one more question for you.
Can I use the Kinect for Windows v2 hand pointer on the Windows desktop and the Windows 8.0/8.1 GUI, instead of the default Windows mouse pointer?
Waiting for your reply; thanks in advance, bro.
So nice of you, Vangos; thanks for replying!
Hi Abdul. Yes, you can do this. Check the ControlsBasics sample of the Kinect SDK Browser. This is by far the best way to do that.
Hi Vangos. First of all, thanks a lot; your work helped me a lot.
I want to save the face images as JPG or PNG after they are detected. How can I do that? I am not able to convert the face frames with ToBitmap.
Thanks
Hi Surjit. A face frame does not have a bitmap equivalent. The color/depth/infrared frames are just some series of bytes. The face frame is a set of points. If you are drawing the points using a canvas, you could use a RenderTargetBitmap.
This is the suggested method: http://stackoverflow.com/a/14119523
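In sketch form (WPF; the element and file names are placeholders):
// Render the canvas (or a Grid holding both the camera image and the canvas) to a bitmap.
var bitmap = new RenderTargetBitmap(1920, 1080, 96.0, 96.0, PixelFormats.Pbgra32);
bitmap.Render(canvas);

// Encode the bitmap as PNG and save it to disk.
var encoder = new PngBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(bitmap));

using (var stream = System.IO.File.Create("face.png"))
{
    encoder.Save(stream);
}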
Thank you so much… I have a question: can I implement this code on Windows 8?
Sure! You’ll need to use WindowsPreview.Kinect instead of Microsoft.Kinect, though (the .dll is named differently).
hi,
Thank you for this tutorial. Is it possible to get a FaceFrame without a body?
It seems that the Kinect doesn't recognize my body because I'm sitting in a wheelchair. Does that mean that, without a body, the face tracking doesn't work?
There's a proprietary software trial that I'm evaluating. It's able to detect my facial expressions, but none of the examples in the SDK can do it long-term, including your sample. My face is only detected for a few moments when I'm first positioned in front of my computer/Kinect; then the face tracking is lost.
I tried setting a breakpoint in FaceReader_FrameArrived, but all frames are null.
Sorry, I'm replying to my own post to provide more information. The proprietary software I mentioned can read my facial expressions using the IR stream in the dark, but it's too expensive, so I'm just trying to replicate some of its functionality for my own use. So far I'm stuck at getting the Kinect to see my face.
Hello. I believe you can achieve what you need using Kinect. Yes, Kinect needs to track a body before tracking the face. However, the person does not need to stand. Simply do the following: while seated, move in front of the Kinect sensor, approximately 1 to 1.5 meters away. Then, face the sensor directly. It should pick up your face. You can try launching the FaceBasics sample from the SDK Browser.
I have tested it successfully. If you have any problems, just let me know.
Vangos
Is there any way to allow it to detect the face at a closer distance?
Hi Farhan. No, Kinect will not let you track a person who is closer than 1.0 meter.
Thank you so much
Mr.Pterneas,
Do all features and methods of Kinect v2 work well on Windows 10?
Yes, as long as your system is compatible. You can launch the Kinect Configuration Verifier from the SDK Browser. If the frame rate is between 28 and 30 frames per second, then it should be OK.
Thank you so much Mr.Pterneas
I have noticed that you use a partial class in all your code, and I understand that we can use a partial class when we need to split a class across files.
But you wrote this partial class just once, which is different from the partial class tutorials (which write two or more partial classes).
Could you explain that, if you don't mind?
Hi Hanan. The partial classes are UI-related classes only and are generated automatically by Visual Studio.
A .xaml file represents the User Interface of an app. A .xaml.cs file is a partial class that manages the corresponding UI.
Hello Mr Vangos
Excuse me for the bad English; I speak Spanish. I followed your code step by step, verified all configurations, and ran the final code, but the window appears white with a red circle (the nose) in the top-left corner. I don't understand, because the code has no errors. The only difference from your setup is that my operating system is Windows 10. Could you help me with the problem?
Hi Sebastien. Your body should be visible while running the app, not just your face. I think that would resolve your problem.
Hi Vangos,
Thanks for the code and tutorial. good job 🙂
Just one question: have you got any sample code to estimate head pose (yaw, roll, pitch) or face rotation angles? I am trying to figure out the points that users are looking at. Can you share some code here or a reference?
Thanks Vangos.
Hi Parsa. Thanks for your comment. You can use the faceFrame.FaceFrameResult.FaceRotationQuaternion property. This is a Vector4 with the rotation information.
I am using Kinect v2 with SDK 2.0 and C# WPF
Do you have any advice for when FaceFrameResult always returns null? It is in a well-lit room. The face ranged from less than 1 foot away to about 3 feet away. This is on Windows 10, running both the .NET and the WinRT versions of Kinect2FaceBasic.
Hi Ethan,
First, please do the following: run the SDK Browser application (installed along with the SDK) and select the Face HD or Face Basics samples (C#). Check whether the same behavior occurs.
Face tracking works between 3 and 8 feet.
What is the way to display a computed result between joints? How do I display the result on the screen?
Hi Sara. If you are using XAML, you can populate a TextBlock control and set its Text property to the value you need.
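For example, a hypothetical snippet (tblResult is an assumed TextBlock name; body is a tracked Body from BodyReader_FrameArrived):
// Compute the distance between the two hands of the tracked body.
var left = body.Joints[JointType.HandLeft].Position;
var right = body.Joints[JointType.HandRight].Position;

double dx = left.X - right.X, dy = left.Y - right.Y, dz = left.Z - right.Z;
double meters = Math.Sqrt(dx * dx + dy * dy + dz * dz);

// Display the result on the screen.
tblResult.Text = meters.ToString("N2") + " m";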
Hello!
How can I implement the same functions as above (eyes closed/open, happy, engagement, etc.) in C++?
Yes, you can use the C++ Kinect API. Install the SDK, launch the SDK Browser from the Start menu, and check the C++ samples.
hello
I can't add Microsoft.Kinect.Face.dll because there is no “Windows 8.1” extension (I can't find it in the Reference Manager)…
and my system is Windows 8.1.
Thanks
Hello. “Windows 8.1 Extensions” is only available when you develop Windows Store apps. If you are not developing Windows Store apps, Microsoft.Kinect.Face is located in C:\Program Files\Microsoft SDKs\Kinect\v2.0_1409\Redist\Face\x86\Microsoft.Kinect.Face.dll.
thank you so much
Hello !!
First of all, thank you very much for the tutorial; it has been really helpful to me.
I'm trying to make this code work in Unity, and I'm having some trouble with one of the lines of code:
Body body = _bodies.Where(b => b.IsTracked).FirstOrDefault();
The Unity Kinect library doesn't seem to understand this line of code. Is there another way to write this part?
Thank you very much!!
I'm replying to myself: the problem is that in the Unity code I cannot use IList, so I'm using a Body[] array to store the body data, and this class doesn't have the Where instruction. How could I fix it?
Hi Alfredo. Thank you for your comment. Add the following line at the head of your C# file.
using System.Linq;
This should work!
It works!! thank you!!
How to solve this problem ?
https://ibb.co/gPbBMQ
The .ToBitmap() method is not working in Visual Studio… any other way?
Hello. I have a problem on this line: camera.Source = frame.ToBitmap();
The error is on ToBitmap() — http://prntscr.com/gau7d3
Can you help me?
Thanks
Hi Vangos, thank you so much for your well-explained tutorial; it's so helpful for my face tracking project.
I have a question: do you know how to get the raw face points (X, Y)?
I've been looking for this for days and still have no results.
Thank you in advance for the answer 🙂
Hello. Sure, you can get the 2D coordinates by extending the method described above.
First, ensure that you have added the “PointsInColorSpace” attribute when declaring your face frame source:
_faceSource = new FaceFrameSource(_sensor, 0, FaceFrameFeatures.PointsInColorSpace /* etc */);
Then, you can access the face points in the 2D color space (1920×1080) using the following code:
FaceFrameResult result = frame.FaceFrameResult;

if (result != null)
{
    var points = result.FacePointsInColorSpace.Values; // A collection of 2D points.
}
In case you would like to access the face points in the 2D depth space (512×424), use the “PointsInDepthSpace” attribute, instead.
Thank you! I finally got it by taking the X and Y from the CameraSpacePoints 🙂
I want to save the face frame results in an Excel file. How can I do it?
Thank you in advance for the answer.
You can use the standard C# Excel read/write methods and store the X/Y/Z coordinates like this.
Thank you, Vangos, for your response.
The problem is not writing to the Excel file; it is recovering the values from the face frame result.
I managed to display the results in a console, but I have not managed to store the face frame result values in the Excel file.
I tried with:
var eyeLeft = result.FacePointsInColorSpace[FacePointType.EyeLeft];
builder.AppendLine(eyeLeft.ToString());
An exception is generated.
The eyeLeft object is a PointF struct (a 2D point). You need to append the actual values to the StringBuilder, instead of the object itself:
builder.AppendLine(eyeLeft.X + " " + eyeLeft.Y);
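Putting it together, a minimal sketch that collects the coordinates and saves a CSV file Excel can open (the field and file names are assumptions):
// Accumulate one line per frame in a StringBuilder field named builder.
var eyeLeft = result.FacePointsInColorSpace[FacePointType.EyeLeft];
builder.AppendLine(eyeLeft.X + ";" + eyeLeft.Y);

// When finished (e.g., when the window closes), write everything to disk.
System.IO.File.WriteAllText("face-points.csv", builder.ToString());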
How can I distinguish between a real person and a picture of a person? Please help.
Hello. Kinect uses the infrared camera to detect a person and assign the corresponding Body/Face. So, it will not classify flat surfaces as bodies.
Your work is amazing!
Do you know how I can use the data from Vitruvius to animate a facial rig in Blender?
Thank you very much, Lucas 🙂
Vitruvius provides you with the 3D coordinates of the Kinect face points. To animate a facial rig, you'll need to transform the 3D coordinates into your own system. I am not sure how this could be done in Blender, though. Vitruvius includes a 3D face model with ~1000 vertices to help you.
Can Kinect v2 be used for multiple faces? HighDefinitionFaceFrame?
Kinect v2 has 2 face APIs. The Face HD API is described here.