To capture the joint positions for the selected actions, we used an Azure Kinect with the ofxAzureKinect openFrameworks addon and saved the x, y, z coordinates of the 32 tracked joints to text files through an ofBuffer object (link to code). Within each joint, the coordinates are separated by commas (,), joints are separated by semicolons (;), and each frame ends with a line return. The saved files look like the screen capture above.
For the shadow-like visualization, we wanted to recreate the metaballs Processing sketch from Dan Shiffman's Coding Train in openFrameworks, but ran into some difficulties. We found that porting the Processing code with the exact same structure was very slow in openFrameworks (frame rates between 1 and 5 fps). With Elias' help, we figured out how to draw the 2D metaballs in a shader, which improved the rendering speed drastically.
We continued to improve the shadow visualization. We built switch cases for the different joints so we could render a more shadow-like result: each joint covers a differently sized area of the human body, and only by specifying an area for each joint were we able to generate the shadow animation shown above.

When we tested visually representing human bodies in front of the Kinect in real time, we realized that this switch-case approach made the arms look very strange. Because the elbows in most of our saved animations were not angled, the visualization looked realistic, especially when the bodies were sideways rather than facing front; but when someone stood in front of the Kinect in a T-pose, there were obviously distinct ellipses at the arm joints. We decided to compensate for this stark separation of ellipses around the arm joints by adding opacity and blur to the shader as well as to the background draw of the application (Link to code).