I refer to the last week of the first module as “phase two,” because during this week we backtracked, reevaluated the literature, and started fresh on a new sketch.
As with the first sketch we worked with (emotion), it was challenging to decide how to approach and tackle the code. Personally, I felt the code was much more straightforward to understand this time, and after reading about the library via the GitHub repo from the original creator of posenet.js, I had a clearer grasp of how things worked. Still, without a settled concept to work towards, it remained quite challenging.
Denisa and I decided to work with the shoulders and wrists as the main gestural, surface-free expressive modality. When it came to impression (the output of the program), it was difficult to settle on anything just yet. In the end we chose to use audio output again, but perhaps with a twist: there would be no visual feedback on the screen for users to see.
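As a rough sketch of what tracking only the shoulders and wrists might look like in code: the helper below filters a pose object down to the four parts we care about, discarding low-confidence detections. The part names (`leftShoulder`, `rightWrist`, etc.) follow PoseNet's keypoint naming, and the object shape mirrors what `estimateSinglePose()` returns; the confidence threshold is our own assumption, to be tuned in practice.

```javascript
// Keypoint parts we track for the gestural input (PoseNet naming convention).
const TRACKED_PARTS = ['leftShoulder', 'rightShoulder', 'leftWrist', 'rightWrist'];

// Given a PoseNet-style pose ({ keypoints: [{ part, score, position }] }),
// return only the tracked parts whose confidence passes minScore.
function trackedKeypoints(pose, minScore = 0.5) {
  const found = {};
  for (const kp of pose.keypoints) {
    if (TRACKED_PARTS.includes(kp.part) && kp.score >= minScore) {
      found[kp.part] = kp.position; // { x, y } in pixels
    }
  }
  return found;
}
```

Filtering by confidence here matters because wrists in particular drop in and out of detection, and we would rather ignore a frame than react to a spurious position.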
In our interpretation, audio is simpler to work with, and it lends itself to a more fluid interaction. If the audio softens gradually (instead of changing in discrete steps), we believe it comes closer to the fluid, continuous, ripple-effect type of response that faceless interaction demands.
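A minimal sketch of the "softening gradually" idea: rather than jumping the volume straight to a new target, ease the current value towards the target by a fraction each frame (exponential smoothing). The function name and the smoothing factor are our own assumptions; the factor would be tuned by ear.

```javascript
// Move `current` a fraction of the way towards `target` each call.
// Called once per animation frame, this yields a smooth exponential fade
// instead of a stepwise jump in loudness.
function smoothVolume(current, target, factor = 0.1) {
  return current + (target - current) * factor;
}

// Example: fading from full volume towards silence over successive frames.
let vol = 1.0;
for (let i = 0; i < 5; i++) {
  vol = smoothVolume(vol, 0.0, 0.5); // 1.0 -> 0.5 -> 0.25 -> 0.125 -> ...
}
```

The appeal of this approach is that the same one-liner handles any target change: whenever the gesture moves the target, the output glides there instead of stepping, which is exactly the continuous quality we were after.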
