MI Finale

This journal entry covers the final outcome of our MI project.

My partner for this Module was Denisa.

The final version of our code consists of interaction with the computer (expression) through free gestures, by touching shoulders and wrists, and an ambient impression of sound given back to the user.

Here’s a walkthrough of the portions of code that we modified:

First, we added two different audio files, one as the output for the shoulder interaction and the other for the wrist interaction.
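To give an idea of that step, here is a minimal sketch of how the two files could be loaded with p5.sound; the filenames and variable names are placeholders, not our actual assets:

```javascript
// Minimal sketch of loading the two audio files with p5.sound.
// 'ambient.mp3' and 'snare.mp3' are placeholder filenames.
let ambientSound; // continuous track whose volume follows the shoulders
let snareSound;   // one-shot sample revealed when the wrists touch

function preload() {
  ambientSound = loadSound('ambient.mp3'); // output for the shoulder interaction
  snareSound = loadSound('snare.mp3');     // output for the wrist interaction
}
```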

Then we started by experimenting with just the shoulders. Taking the shoulder positions of two people, we calculate the distance between them; as that distance gets smaller, the shoulderDiff value drops as well, which in turn lowers the audio volume, since the volume is set to shoulderDiff / 300 (so the value stays between 0 and 1).

Some commentary so you can see what’s going on (a rough code sketch follows below):
We get the keypoints for the left and right shoulders of two people, targeting indexes 0 and 1. Then we take the distance between the two outside shoulders and between the two inside shoulders, and set shoulderDiff to the lower of those two values.
The audio volume is adjusted based on the value of shoulderDiff. In the beginning we tried adjusting the volume in increments (like 1, 0.5, then 0 for the audio to turn off), but later decided to soften and louden the volume gradually instead. We thought this would make the interaction and impression more fluid and prevent a stepwise output.
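Putting the shoulder logic together, a rough sketch of this step could look like the following (using the placeholder sound variable from above; getKeypointPos() is the course helper discussed further down, and which pair counts as “outside” or “inside” depends on how the two people stand):

```javascript
// Rough sketch of the shoulder step, not our exact code.
let lShoulder0 = getKeypointPos(poses, 'leftShoulder', 0);
let rShoulder0 = getKeypointPos(poses, 'rightShoulder', 0);
let lShoulder1 = getKeypointPos(poses, 'leftShoulder', 1);
let rShoulder1 = getKeypointPos(poses, 'rightShoulder', 1);

if (lShoulder0 && rShoulder0 && lShoulder1 && rShoulder1) {
  // distance between the "outside" pair and the "inside" pair of shoulders
  let outsideDiff = dist(lShoulder0.x, lShoulder0.y, rShoulder1.x, rShoulder1.y);
  let insideDiff = dist(rShoulder0.x, rShoulder0.y, lShoulder1.x, lShoulder1.y);
  let shoulderDiff = min(outsideDiff, insideDiff); // keep the lower of the two

  // gradual volume instead of stepwise: fades toward 0 as the shoulders meet
  ambientSound.setVolume(constrain(shoulderDiff / 300, 0, 1));
}
```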
The same logic as the shoulders is repeated with the wrists instead. With this if statement, when the wrist difference is extremely small (< 50, which our testing showed is basically when the two wrists are touching), a snare drum sound is revealed.
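Again only as a sketch, the wrist step could look roughly like this (exactly which wrists are compared depends on how the two people face each other; the isPlaying() guard just avoids retriggering the sample every frame):

```javascript
// Rough sketch of the wrist step, not our exact code.
let wrist0 = getKeypointPos(poses, 'rightWrist', 0);
let wrist1 = getKeypointPos(poses, 'leftWrist', 1);

if (wrist0 && wrist1) {
  let wristDiff = dist(wrist0.x, wrist0.y, wrist1.x, wrist1.y);
  // < 50 px is roughly when the two wrists are touching (from our testing)
  if (wristDiff < 50 && !snareSound.isPlaying()) {
    snareSound.play(); // reveal the snare drum sound
  }
}
```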

At first, we had trouble understanding how targeting an index for the poses worked (I assumed each pose represents a detected person). After reading a little more, we understood that we had to follow the parameters of getKeypointPos() and pass the person index at the end, like getKeypointPos(poses, 'leftShoulder', 2);.

getKeypointPos()
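We didn’t write this helper ourselves, so the following is only a guess at what a function like it might do internally, assuming ml5.js poseNet-style pose objects (each pose has pose.keypoints with part names and positions):

```javascript
// Hypothetical sketch of a helper like getKeypointPos(); the real one
// comes from the course code and may differ.
function getKeypointPos(poses, partName, personIndex) {
  if (!poses || poses.length <= personIndex) {
    return null; // that person isn't detected right now
  }
  const keypoint = poses[personIndex].pose.keypoints.find(
    (k) => k.part === partName
  );
  return keypoint ? keypoint.position : null; // {x, y} or null
}
```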

With this final outcome, we were aiming for both surface-free expression and surface-free impression modalities. We quickly realized that having a screen show feedback at all binds the impression to a surface, so we removed the camera feed,

and replaced it with a simple gif that plays while everything goes on, not connected to bodily movement or anything happening behind the code at all.
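The gif is purely decorative; as a sketch, displaying it in p5.js could be as simple as the following (assuming a p5.js version recent enough to animate gifs loaded with loadImage(), and a placeholder filename):

```javascript
// Hypothetical sketch of the decorative layer; 'ambient.gif' is a placeholder.
let ambientGif;

function preload() {
  ambientGif = loadImage('ambient.gif'); // animated gif, plays on its own
}

function draw() {
  image(ambientGif, 0, 0, width, height); // purely visual, no pose feedback
}
```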

Our UI, which isn’t connected to the code’s core functionality at all.
Testing if the distance works and if the audio is gradually softening.
