UPDATE (Dec 10, 2013): Want to learn more about WebSockets? Consider my new book, Getting started with HTML5 WebSocket Programming.
Kinect defined Natural User Interaction. HTML5 redefined the Web. Currently, there are various tutorials describing how to interact with a Kinect sensor using Windows Forms or WPF for the user interface. But what about using a web interface for handling Kinect data? Trying to combine those two hot, cutting-edge technologies, I came up with a simple, open-source solution, which I am going to describe in this blog post.
I am going to use the official Kinect SDK for my demo, but the same principles apply to the OpenNI SDK, too.
Prerequisites
- Kinect for Windows or Kinect for XBOX sensor
- Microsoft Kinect SDK (or OpenNI alternative)
Results
The project consists of two sub-projects: A server-side application which uses Kinect SDK and a client-side web page displaying the skeleton joints on an HTML5 canvas.
Here is the end result (the screenshot was taken in Google Chrome):
Tutorial
Here is a way to achieve the above functionality, step by step:
Step 1: Server application
The server application's job is straightforward: detect the users' joints, pack the data, and send it to the clients using web sockets.
In order to detect the joints' coordinates, we need to add a reference to our preferred Kinect SDK and handle the skeleton events. I recommend the Microsoft SDK over OpenNI because it's far less complicated:
static void Nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    // Collect the tracked skeletons of the current frame.
    var users = new List<Skeleton>();

    using (SkeletonFrame frame = e.OpenSkeletonFrame())
    {
        if (frame == null) return;

        Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
        frame.CopySkeletonDataTo(skeletons);

        foreach (var user in skeletons)
        {
            if (user.TrackingState == SkeletonTrackingState.Tracked)
            {
                users.Add(user);
            }
        }
    }

    if (users.Count > 0)
    {
        // Encode the tracked skeletons as JSON and broadcast them to every connected client.
        string json = users.Serialize();

        foreach (var socket in _clients)
        {
            socket.Send(json);
        }
    }
}
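The sensor setup that hooks this handler up is not listed above; as a rough sketch (not the exact code from the download), initializing the sensor with the Kinect for Windows SDK v1.x could look like this:

using System.Linq;
using Microsoft.Kinect;

static void InitializeKinect()
{
    // Pick the first sensor that is actually connected (assumes a single Kinect v1 device).
    var sensor = KinectSensor.KinectSensors.FirstOrDefault(s => s.Status == KinectStatus.Connected);
    if (sensor == null) return;

    // Enable skeleton tracking and hook the handler shown above.
    sensor.SkeletonStream.Enable();
    sensor.SkeletonFrameReady += Nui_SkeletonFrameReady;
    sensor.Start();
}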
In order to send the data to the subscribed clients, we need to “pack” the joints’ coordinates in a way that the clients will be able to understand and process. I decided to encode the coordinates in JSON format, as JSON is a lightweight and easy-to-understand way of transmitting data through the web. You can find the JSON-encoder class in the source code files.
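As a rough sketch (not the actual class from the download), the Serialize() extension method called above could be written along these lines. The skeletons/joints/name/x/y property names are chosen to match the JavaScript client shown later; the real encoder also maps the joint positions from meters to pixel coordinates before sending:

using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Text;
using Microsoft.Kinect;

public static class SkeletonSerializer
{
    // Produces {"skeletons":[{"joints":[{"name":"Head","x":...,"y":...}, ...]}, ...]}.
    public static string Serialize(this List<Skeleton> skeletons)
    {
        var json = new StringBuilder("{\"skeletons\":[");

        for (int s = 0; s < skeletons.Count; s++)
        {
            var joints = skeletons[s].Joints.ToList();
            json.Append("{\"joints\":[");

            for (int j = 0; j < joints.Count; j++)
            {
                // NOTE: raw skeleton-space values are used here for brevity.
                json.AppendFormat(CultureInfo.InvariantCulture,
                    "{{\"name\":\"{0}\",\"x\":{1},\"y\":{2}}}",
                    joints[j].JointType, joints[j].Position.X, joints[j].Position.Y);

                if (j < joints.Count - 1) json.Append(",");
            }

            json.Append("]}");
            if (s < skeletons.Count - 1) json.Append(",");
        }

        json.Append("]}");
        return json.ToString();
    }
}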
For the data transmission, I highly recommend Fleck. Fleck, based on Nugget, is a C# web socket library that does what it says with minimal configuration effort: it broadcasts the desired data to the subscribed clients over lightweight web sockets. Here is how you can initialize a web socket server:
var server = new WebSocketServer("ws://localhost:8181");

server.Start(socket =>
{
    socket.OnOpen = () =>
    {
        Console.WriteLine("Connected to " + socket.ConnectionInfo.ClientIpAddress);
        _clients.Add(socket);
    };

    socket.OnClose = () =>
    {
        Console.WriteLine("Disconnected from " + socket.ConnectionInfo.ClientIpAddress);
        _clients.Remove(socket);
    };

    socket.OnMessage = message =>
    {
        Console.WriteLine(message);
    };
});
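For reference, the _clients collection used above is nothing more than the list of connected Fleck sockets; declared roughly like this (a sketch, using the field name from the snippets):

using System.Collections.Generic;
using Fleck;

// The connected web socket clients that the Kinect handler broadcasts to.
static readonly List<IWebSocketConnection> _clients = new List<IWebSocketConnection>();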
After collecting, packing, and transmitting the data, the clients can consume and process it accordingly.
Step 2: Client application
Time for the HTML5 bits! We have the server set up, so let's add the web page clients. The HTML5 spec recommends web socket support, which is currently implemented by Internet Explorer 10, Google Chrome, and Mozilla Firefox. Web sockets are great for direct message communication between the server and its clients.
All we need is some JavaScript for receiving the server data and a canvas for drawing the points. Here is the JavaScript event handler that gets the data and draws the joints on a canvas element (the socket object is created with new WebSocket("ws://localhost:8181"), matching the server address above):
socket.onmessage = function (event) {
    if (typeof event.data === "string") {
        status.innerHTML = "Kinect skeleton received.";

        // 1. Get the data in JSON format.
        var jsonObject = JSON.parse(event.data);

        context.clearRect(0, 0, canvas.width, canvas.height);

        // 2. Display the skeleton joints.
        for (var i = 0; i < jsonObject.skeletons.length; i++) {
            for (var j = 0; j < jsonObject.skeletons[i].joints.length; j++) {
                var joint = jsonObject.skeletons[i].joints[j];

                // Draw!!!
                context.fillStyle = "#FF0000";
                context.beginPath();
                context.arc(joint.x, joint.y, 10, 0, Math.PI * 2, true);
                context.closePath();
                context.fill();
            }
        }

        // Inform the server about the update.
        socket.send("Skeleton updated on: " + (new Date()).toTimeString());
    }
};
Step 3: Mixing it up!
Now run the server application and then open the HTML file in Internet Explorer 10 or Google Chrome. Stand in front of a Kinect sensor and may the force be with you.
FAQ
- Do I really need a server application? Yes. You can’t run the Kinect SDK directly from your browser. JavaScript cannot access your hardware. The server app is nothing but a Visual Studio Console project!
- Do I need to pack my data as JSON? No. You can use any format you'd like. JSON is a common format for transmitting data, especially when file size matters. You could use XML or any other custom format, but JSON is lightweight and needs no custom parser. After all, JSON is based on JavaScript's object syntax, so JavaScript can parse it natively.
- Why not use WCF instead of web sockets? Because it's tougher. With web sockets, messages can be pushed to the clients the moment a server event occurs. And it's a lot easier!
Downloads
You will definitely need to download the source code and binaries in order to develop your own HTML5 Kinect applications. Enjoy!
Comments
Hi, is there any way to change the code to stream the RGB instead?
Hey. Yes, it is possible; however, I would not recommend it, since you would have to stream ~30 frames per second and perform body tracking at the same time. That said, if you do want to stream the RGB frames, you should get the frame's byte array (as demonstrated here: https://pterneas.com/2014/02/20/kinect-for-windows-version-2-color-depth-and-infrared-streams/), send it to the web clients and then convert it into a bitmap client-side.
Thanks for your response. I've seen that page. I am using a Kinect for Xbox 360 (version 1). So, here is the case: I created an app that displays RGB and Skeleton to perform body tracking. When I start it, it will stream the RGB data ONLY to the web browser (to the internet). So, basically it is just like a live video stream using the Kinect as the video source. I have already created the website; the problem is I don't know how to write the code to pass the RGB data to the canvas. 🙁
Here’s what I would do:
– Convert the byte array to base64 format (http://msdn.microsoft.com/en-us/library/dhx0d524(v=vs.110).aspx)
– Send the base64 format to the web clients
– Display the base64 string on an img tag or load it into the canvas (http://www.html5canvastutorials.com/tutorials/html5-canvas-images/)
Let me know if that technique worked for you 🙂
Hey, thanks a lot for your reply. But I failed to use that technique. 🙁 I've tried it using the example you did here, but I changed the skeleton frame to the color frame. So, basically I have this:
private static byte[] colorPixels;

static private void InitilizeKinect()
{
    var sensor = KinectSensor.KinectSensors.SingleOrDefault();

    if (sensor != null)
    {
        sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);

        colorPixels = new byte[sensor.ColorStream.FramePixelDataLength];

        sensor.ColorFrameReady += SensorColorFrameReady;
        sensor.Start();
    }
}

static void SensorColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    if (!_serverInitialized) return;

    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame != null)
        {
            colorFrame.CopyPixelDataTo(colorPixels);

            String json = Convert.ToBase64String(colorPixels);

            foreach (var socket in _clients)
            {
                socket.Send(json);
            }
        }
    }
}
There I convert the frame's byte array (colorPixels) to a base64 string (json). It shows no errors and runs. Now the problem is that I am confused about how to display the images/video on the canvas, because we send the data as a base64 string, but I saw that you parse it as JSON. So, how do I write the image source? Here is my code so far in the onmessage function, based on your websocket.js:
socket.onmessage = function (event) {
    if (typeof event.data === "string") {
        status.innerHTML = "Kinect RGB received.";

        var jsonObject = JSON.parse(event.data);

        var img = new Image();
        img.onload = function () {
            context.drawImage(img, 0, 0);
        }
        img.src = ?????;
    }
};
I've done a lot of research, but I'm still stuck and the browser displays nothing on the canvas. I need your help with encoding/decoding the passed base64 string into images to be displayed on the HTML5 canvas. 🙁 After all, this is all about streaming the RGB to the HTML5 canvas. Anyway, thanks again! 😀
I’m working on it 😉
The code will be published here as well!
Thanks a lot! Looking forward to it~ 😀
Hi Vangos Pterneas, how are you doing? 🙂 Any good news? I have tried to follow a lot of resources I found using Node.js or even the HTML5 getUserMedia method. But I couldn't get a full explanation from the project that uses Node.js. As for HTML5 getUserMedia, I can only use it when I turn my Kinect into a webcam, and at the same time I cannot run my app because the Kinect sensor is being used as a webcam. The problem I am facing is a lack of knowledge about all these web-based things. So far I have always worked with Kinect for the skeleton data only, and it was mostly for activity recognition. That's why I am so lost with all of this. Sorry to bother you and ask for your help, and thanks for your help 😀 By the way, I found this video which can stream the RGB from Kinect: https://www.youtube.com/watch?v=KyAz37qDJCA Actually this is what I wanted to do, except for the activity recognition part that stops the stream. So, I just want to stream the RGB only when the Kinect starts 🙂
Hey, I'm on it. I usually play with such projects during the weekends, so expect news by the end of the day 🙂
OK, I did it. Check https://github.com/LightBuzz/Kinect-HTML5
Cheers!
wow~ thanks a lot! 😀 it’s working well~ I will go through the codes and learn it. If I have any questions I will ask~
Again, thanks for your help! you’re awesome! cheers! 😀
I get 2 errors when I execute the solution... (I am new to learning web sockets :(..)
1. The type or namespace name 'Fleck' could not be found (are you missing a using directive or an assembly reference?)
2. The type or namespace name 'IWebSocketConnection' could not be found (are you missing a using directive or an assembly reference?)
Your help would be appreciated 🙂
Don't worry. It is just a missing reference. Just go to Tools -> NuGet Package Manager -> Manage NuGet Packages for Solution -> Search for Fleck and add it to the project. Everything should be fixed. Then you just import the namespace by writing "using Fleck;"
Thanks Vangos for your help. Got that error fixed :) Now when I run the server, after some time,
in this particular line of code:
using (var stream = new FileStream(file, FileMode.Create))
I get "the requested operation cannot be performed on a file with a user-mapped section open". What's the mistake I am making?
Again, thanks for your help 🙂
Seems like you have locked dll files. Try restarting Visual Studio and verify that no dll files are open while the application is running.
With this example, will we be able to pass Kinect data from a WPF application to Windows 8 store apps?
Kinect 2 will probably support WinRT. So, I suggest you wait a little 🙂
For Kinect 1, I think (however, I am not 100% sure) that your application will not pass the certification process if it relies on another application that uses Kinect.
So, yes, you can do it; however, I bet the Windows 8 app will not pass the Store certification process.
Thanks for your help, Vangos. Really appreciate it… :)
Hi!
Maybe I'm missing something, but when I open Kinect.Server (in Visual Studio) and try to build, I get two errors:
Error 3 The type or namespace name ‘Fleck’ could not be found (are you missing a using directive or an assembly reference?) C:\Users\vitorfreire\Documents\Kinect-HTML5-master\Kinect.Server\Program.cs 5 7 Kinect.Server
Error 4 The type or namespace name ‘IWebSocketConnection’ could not be found (are you missing a using directive or an assembly reference?) C:\Users\vitorfreire\Documents\Kinect-HTML5-master\Kinect.Server\Program.cs 12 21 Kinect.Server
I didn't see any .exe file; that's why I guessed that's what I need to do to run the server application. Is that right?
My other question is whether I could use the actual Web Application sample that is inside the official SDK, and how that would affect the client side?
Big Thanks!
I didn’t read the comments! Sorry! I’m trying to follow the directions above.
So the problem I’m having now is just that the app (Kinect.server) crashes saying Kinect is already in use in another process.
But there is nothing using the Kinect; I rebooted and made sure of that. Any idea of what could possibly be going on?
//
Unhandled Exception: System.IO.IOException: Kinect is already in use in another process.
at Microsoft.Kinect.KinectExceptionHelper.CheckHr(Int32 hr)
at Microsoft.Kinect.KinectSensor.Initialize(SensorOptions options)
at Microsoft.Kinect.KinectSensor.Start()
at Kinect.Server.Program.InitializeKinect() in \……\Program.cs:line 69
at Kinect.Server.Program.Main(String[] args) in \……\Program.cs:line 52
Hi Vitor. Let's try to find out. First of all, you need to have Fleck installed. You can download Fleck via NuGet. This way, you should get rid of any compiler errors.
Then, you hit Start/Debug to run the server application. Finally, you open the index.html file in your browser and wait for the connection.
Regarding the last error, please run one of the Developer Toolkit examples to verify that the sensor is connected appropriately and let me know 🙂
It’s working! So I’m working on this art project and as I’m not really a programmer, some of my challenges are a little bit basic.
1) I want to assign specific functions to specific skeleton joints. With the way you coded this example – just getting the Skeleton stream and sending it as JSON – I'm having a hard time finding where I should tackle it. In the official SDK they have this ability to draw parts with jointCollection:
// Render Head and Shoulders
DrawBone(jointCollection[JointType.Head], jointCollection[JointType.ShoulderCenter]);
DrawBone(jointCollection[JointType.ShoulderCenter], jointCollection[JointType.ShoulderLeft]);
DrawBone(jointCollection[JointType.ShoulderCenter], jointCollection[JointType.ShoulderRight]);
//
I'm wondering if I could use something like that, or just access the points the Kinect SDK gives you by default – the IDs. And where in Program.cs would be the best place to do it?
2) The second step of my project is to connect it "online". Basically I will have Kinects running this app in two different locations, and they will both send skeleton data to a specific page. I'm planning to use Node.js, but as I see from your example using WebSockets, this might be easier. Do you have any advice on enabling something like that – skeleton data from a second Kinect/different computer? Or any recommendations or things I should be aware of?
Big thanks. This has been super helpful so far. This is saving my thesis at ITP!
Best,
Vitor
Hey Vitor. Thank you very much for your comments and feedback. Wish you all the best for your thesis. Let me know when you publish it!
Regarding your questions:
1) As there is no official API in JavaScript, I packaged the joints using JSON. I have included the joint name in the JSON properties:
var joint = jsonObject.skeletons[i].joints[j];
var name = joint.name;
2) I am not aware of Node.js support; I prefer C# WebSockets. I would send a sensor identifier to the WebSocket clients, so they know which sensor generated the frames and can display the data accordingly.
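For example (a sketch only; the sensorId and payload property names are made up for illustration), each machine could wrap its serialized skeletons like this before calling socket.Send():

using System;

// Tag every frame with the machine that produced it, so the receiving page
// can tell the two Kinects apart.
static string Tag(string skeletonJson)
{
    return "{\"sensorId\":\"" + Environment.MachineName + "\",\"payload\":" + skeletonJson + "}";
}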
Cheers,
Vangos
It's very simple to find out about any topic on the web as compared to books, as I found this piece of writing at this web site.
Thanks!
So I’m connecting two different computers and now I tested in a different one and I received a message that I have never seen before, while trying to run the Kinect.Server
https://www.dropbox.com/sh/6znm0lr5o7npo7a/yUoCFJiZ3l
Some of it is in Portuguese; I don't know if it helps. As I'm using only skeleton data, I'm wondering what parts about color/depth I could "comment out" from the app, to make it faster or even prevent this issue.
Have you ever seen something like that?
Trying to translate…
“An unhandled exception has occurred…”
“The requested operation cannot be performed on a file with a user-mapped section open. “
I sent the whole app, and asked them to run the Kinect.Server in debug.
Is there something I should do in order to provide this app to another machine? Do they have to open it in Visual Studio and build it there?
Hi Vitor. I am checking it out and will reply back here to let you know.
Hello Vangos,
I'm Praneeth, doing my Masters in Electrical Engineering at BTH, Sweden. I'm also working as an intern as a Software Developer in a start-up company which is based here. I read your article – https://pterneas.com/2013/12/10/kinect-and-html5-using-websockets-and-canvas/
and am very impressed with the detailed explanation. But I am facing some problems regarding the same. I am using Windows 8.1 and MS VS 2013 with the Fleck and Json packages from NuGet. I installed SDK v1.8 and Toolkit v1.8 too, and also GS 4.Refresh. I sincerely await some valuable advice from you and hope that I'm not troubling you much.
1. I'm developing code to save a continuous stream of skeleton coordinates and also some specific joint angles, and I have successfully done that. Now the problem comes when I want to stream the Kinect video.
2. Where do I fit the code snippets that you explained above on that page into my program?
3. Because I cannot find definitions for some commands like e.skeletonFrame.skeletons, users.serialize, _sockets, _clients.
I’m sending you my code.
I’m really anxious about this and would love to hear from you with some Advice soon.
Best Regards,
Praneeth.
[I’m really very sorry for this long message]
[ I HAVE ALREADY ATTACHED THE CODE WHILE WRITING TO YOU IN FACEBOOK. PLEASE DO CHECK IT OUT. ]
Hi Venkata. You can find the complete source code on GitHub: https://github.com/LightBuzz/Kinect-HTML5
It contains the answers to all of your questions, including the video streaming 🙂
Hi Vangos,
Thank you for the reply. But the thing is I'm relatively new to networking. I have been a core programmer of intermediate level my whole life, so I am still a novice at networking. I downloaded the source code, but it seems to have its own code where the Kinect coding is used. I already have the XNA basics [Microsoft tutorial], which I edited according to my requirements to get continuous data of skeletal coordinates and also the angles of specific joints. So how can I make my program work using Fleck, WebSockets, and JSON? Do I have to insert the commands from the Fleck extension? Or do I have to create classes for the networking tools too?
Really sorry for all the trouble. Hope to hear back soon.
Best Regards,
Praneeth
Nice project! Working great on the old Kinect. Does it work for v2? Thanks.
Hi Dan. Thanks for your comment. I am planning to publish a Kinect 2 version by the end of this week!
Great to hear!
Am I missing something? Could you tell me where the Kinect 2 version is?
Hello Vangos,
I am wondering whether you got around to publishing a Kinect v2 version of this code? If so, where would I be able to find it? I am a masters student and it would be very helpful for my research! 🙂
Thank you,
Callum.
Hi Callum. There is not a v2 implementation yet, though it is fairly easy to port the code using the new SDK. Most of the functionality remains the same; you only need to change a few class names and variables.
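For anyone attempting the port, here is a rough, untested sketch of what the v2 equivalent of the skeleton handler could look like (it assumes the Kinect for Windows SDK 2.0 and that Serialize() gets a matching overload for Body):

using System.Linq;
using Microsoft.Kinect;

// Rough v2 mapping: Skeleton -> Body, SkeletonFrame -> BodyFrame,
// SkeletonTrackingState.Tracked -> Body.IsTracked.
static Body[] _bodies;

static void InitializeKinectV2()
{
    var sensor = KinectSensor.GetDefault();
    var reader = sensor.BodyFrameSource.OpenReader();

    reader.FrameArrived += Reader_FrameArrived;
    sensor.Open();
}

static void Reader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;

        if (_bodies == null) _bodies = new Body[frame.BodyCount];
        frame.GetAndRefreshBodyData(_bodies);

        var users = _bodies.Where(b => b != null && b.IsTracked).ToList();

        if (users.Count > 0)
        {
            string json = users.Serialize(); // assumes a Body overload of Serialize()

            foreach (var socket in _clients)
            {
                socket.Send(json);
            }
        }
    }
}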
Hi Vangos. Awesome stuff man!!!
My name is Dan and I run an open-source HTML5 game framework called Kiwi.js. We were recently asked by somebody in our community about Kinect support. It's not something we know much about, and what you've made looks like just what the doctor ordered. Would you consider helping our project out and making a Kiwi.js plugin for our community?
Either way it’s great to connect!!
Best,
Dan
Hi Dan. I am aware of Kiwi.js, though I haven’t used it in any projects – yet. Really nice work there. It’s something I’d be glad to consider. Feel free to email me any details (https://pterneas.com/contact).
Fine way of explaining, and a nice piece of writing to obtain data on the topic of my presentation subject, which I am going to present in school.
Hi Esther. I am glad you found this article useful.
Have you tested this with the Kinect v2 ?
I just did and it wasn't working.
Thanks!
I got it working with Kinect v2. Just change some variable names (as things like skeleton have been changed to body, etc). A good resource is the Kinect v2 documentation and examples from this site to use as a reference! 🙂
Also, Vangos, thank you very much for releasing this! It has saved us a heap of time! 😉
This is awesome, Callum! Great to hear. Let me know if you’d like to include your code as part of the open-source project. Of course, you’ll get the credits for your work.
Cheers,
Vangos
Hello Vangos,
Sorry for the very late reply… I got very busy with work!!
I am happy to provide the code for this to work with Kinect V2 if it is still required, please let me know how I can contribute this. 🙂
Cheers,
Callum.
Hi Callum,
Sure, feel free to contact me or fork the existing repository.
Cheers,
Vangos
I have forked the repository and updated the code for the Kinect version 2. Link: https://github.com/doburuman/Kinect-HTML5
Cheers,
Callum.
This is awesome Callum. I’ll update the repository and credit you accordingly 🙂
No worries. I hope someone will find it useful! 😉
I posted before taking a look at the earlier comments.
I look forward to a new version compatible with the v2.
You mentioned that it would be only a matter of "changing a few class names and variables". Could you point me to some resources to find those differences?
I'm trying to open it with the latest version of Visual Studio Express 2013 and I got a very weird error:
Unsupported
This version of Visual Studio is unable to open the following projects. The project types may not be installed or this version of Visual Studio may not support them.
For more information on enabling these project types or otherwise migrating your assets, please see the details in the “Migration Report” displayed after clicking OK.
Is there any tutorial about how to stream the Kinect RGB camera (C#)? I want to make an Android app to visualize the frames.
Thanks! And Vangos, great site!
You can check this project on GitHub. Do not forget to favorite it if you found it useful.
Thanks, just one more question: how can I introduce the Fleck dependency? When I tried to compile your project there was a missing reference to Fleck, then I downloaded it, but I don't know what I am supposed to do to resolve the missing dependency. Can you please tell me how the Fleck dependency is introduced?
Hi John. After downloading Fleck, you need to open the source code in Visual Studio and build it. It will produce a .dll file. Import that .dll file (“Add new reference”) in your project.
Alternatively, you can install Fleck using NuGet: https://www.nuget.org/packages/Fleck/
Hope that helps 🙂
There are two parts:
1. Server side
2. Client side (HTML)
The server side is OK.
For the client side, I want to use a WPF application to view the ColorImage from another/networked PC. How is that possible? Can I access the WebSocket using a WPF client application rather than HTML?
Regards,
Hi Imran. You can still use WebSockets to transmit the data. Fleck is a WebSocket framework that works with WPF, just like the console program I demonstrated in this blog post.
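A minimal sketch of such a client using System.Net.WebSockets.ClientWebSocket (my assumption; any .NET WebSocket client library would work, and this one requires .NET 4.5 or later):

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Connects to the Fleck server from the article and receives the JSON frames.
// A WPF app would call this and update its UI (e.g. via the Dispatcher).
static async Task ListenAsync()
{
    using (var client = new ClientWebSocket())
    {
        await client.ConnectAsync(new Uri("ws://localhost:8181"), CancellationToken.None);

        var buffer = new byte[1024 * 1024];

        while (client.State == WebSocketState.Open)
        {
            var result = await client.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);

            // For simplicity this assumes the whole message fits in one receive;
            // otherwise, keep reading until result.EndOfMessage is true.
            string json = Encoding.UTF8.GetString(buffer, 0, result.Count);

            // Parse the JSON and update the WPF UI here.
        }
    }
}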
Good morning! 😀
OK, I'll write in English. First of all, thanks for your efforts on this blog; you've been an invaluable help even to me, an old mathematician/MATLAB guy who has to learn some basic development stuff now :p. Secondly, I apologise in advance for the – most probably – stupid question, but I am baffled :S.
So, I've got a Kinect stream coming in as a JSON object, I do my JavaScript stuff and I draw it on a canvas, etc. Everything is peachy so far, except when I try to rotate the skeleton around the y axis and redraw it. I take each joint, apply a quaternion (with the three.js library) to it and try to redraw it. The code runs perfectly; it just doesn't work. I am aware that on the canvas we see only the xy plane and not the xz, so the drawing would not seem correct anyway. But when I check the angles between the original and the rotated joints, it seems that nothing has happened. I tried the exact same code in a 3D scene with three.js and single spheres and everything worked perfectly. But with the Kinect stream, nothing. Don't worry, I don't want you to see or edit my code (I've got a relevant Stack Overflow question though, here http://stackoverflow.com/questions/34338103/three-js-wrong-quaternion-rotation ). I would just appreciate any hints and tips on how I should go about it, or if I have sorely missed something. Sorry for the long post. Happy holidays!
Hi Stefanos. Only the 2D coordinates of the joints are sent via the socket. The values are measured in pixels. If you need the 3D coordinates (which are, instead, measured in meters), you need to package and transmit this additional information via the socket.
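A rough sketch of how a single joint could be packaged with the extra 3D data (assuming the 2D point has already been mapped to pixels; the x3d/y3d/z3d property names are invented for illustration):

using System.Globalization;
using Microsoft.Kinect;

// Emit one joint with both the mapped 2D pixel position and the raw
// 3D skeleton-space position (in meters).
static string SerializeJoint(Joint joint, ColorImagePoint point)
{
    return string.Format(CultureInfo.InvariantCulture,
        "{{\"name\":\"{0}\",\"x\":{1},\"y\":{2},\"x3d\":{3},\"y3d\":{4},\"z3d\":{5}}}",
        joint.JointType,
        point.X, point.Y,                                       // pixels (already mapped)
        joint.Position.X, joint.Position.Y, joint.Position.Z); // meters

}

The client can then rotate the x3d/y3d/z3d values with three.js and project them onto the canvas itself.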
Hope I helped you 🙂
Enjoy your holidays!
Good Day
I went through this article. I am still unable to get the Color and Depth images in HTML5 with Kinect v2. Only the Skeleton is working. Can you please provide more help?
https://github.com/LightBuzz/Kinect-HTML5 (Working With Skeleton Only)
https://github.com/LightBuzz/Kinect-HTML5/tree/kinect_v2 (Working With Skeleton Only)
You need to serialize the color or depth data using Fleck and then deserialize the data in JavaScript and feed it to the img element.
Thanks for the reply. Can I have any piece of code related to this?
You could send me an email and we can discuss a custom consulting project.
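For readers with the same question, here is a rough sketch of the server-side part (assuming System.Drawing is referenced and the color frame has already been copied into a BGRA byte array, as in the earlier snippet in this thread). The returned string can be assigned directly to img.src on the page, or drawn onto the canvas once the image has loaded:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Runtime.InteropServices;

// Turns a BGRA pixel buffer from the color frame into a base64 JPEG data URI.
static string ToBase64Jpeg(byte[] bgraPixels, int width, int height)
{
    var handle = GCHandle.Alloc(bgraPixels, GCHandleType.Pinned);

    try
    {
        using (var bitmap = new Bitmap(width, height, width * 4,
                   PixelFormat.Format32bppRgb, handle.AddrOfPinnedObject()))
        using (var stream = new MemoryStream())
        {
            bitmap.Save(stream, ImageFormat.Jpeg); // JPEG keeps the payload small
            return "data:image/jpeg;base64," + Convert.ToBase64String(stream.ToArray());
        }
    }
    finally
    {
        handle.Free();
    }
}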