After evaluating all 11 gestures used in the four gesture combinations narrowed down from user tests 1 and 2, a specific few stood out from the crowd and we placed them into “gesture heaven”: the favoured gestures of them all, chosen both by the users who participated in the user tests and by Thomas and myself, after looking at all the gestures in a holistic manner.

From these selected few gestures, we managed to piece together two contenders for the final verdict, while keeping them as different from each other as possible.

Once these two were selected, I took some photos from the front to imitate what the computer vision would see when a user interacted with the public screen. This helped me gain a deeper understanding of why detection would work for some gestures but not for others. I concluded that the A/B options above are both strong contenders: even though the hands can potentially get cut off when moved out of the camera's view, these gestures all have clear points for the software to quickly pinpoint. Furthermore, the two browsing gestures both have start-and-finish states, which makes detection easier on the software side and helps false positives be more easily avoided. (E.g. if you browse/scroll with the two-finger gesture and scroll up by pointing two fingers up, you need to beware of pointing the two-finger gesture back into the camera's view, or you may accidentally scroll the window down again.)
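To make the start-and-finish idea concrete, the logic could be sketched as a small state machine: scroll poses are only acted on between a start pose and a finish pose, so a hand re-entering the camera's view cannot trigger an accidental scroll. This is only an illustrative sketch; the pose labels ("two_fingers_up" and so on) are invented for the example, not outputs of any actual detection software.

```python
from enum import Enum

class State(Enum):
    IDLE = "idle"          # no browsing gesture in progress
    BROWSING = "browsing"  # start pose seen; scroll poses are valid

class GestureDetector:
    # Hypothetical pose labels, standing in for what a
    # computer-vision pipeline might report per frame.
    START_POSE = "two_fingers_up"     # begins a browsing gesture
    FINISH_POSE = "hand_out_of_view"  # ends it (hand leaves the frame)
    SCROLL_POSES = {"two_fingers_drag_up": "scroll_up",
                    "two_fingers_drag_down": "scroll_down"}

    def __init__(self):
        self.state = State.IDLE

    def on_pose(self, pose):
        """Return a scroll action, or None if the pose is ignored."""
        if self.state is State.IDLE:
            if pose == self.START_POSE:
                self.state = State.BROWSING
            return None  # scroll poses outside a gesture are ignored
        # state is BROWSING
        if pose == self.FINISH_POSE:
            self.state = State.IDLE
            return None
        return self.SCROLL_POSES.get(pose)

detector = GestureDetector()
# A stray drag before the start pose is ignored: no false positive.
assert detector.on_pose("two_fingers_drag_down") is None
detector.on_pose("two_fingers_up")               # start the gesture
assert detector.on_pose("two_fingers_drag_up") == "scroll_up"
detector.on_pose("hand_out_of_view")             # finish: back to idle
# Re-entering the camera's view no longer scrolls accidentally.
assert detector.on_pose("two_fingers_drag_down") is None
```

The design choice mirrors the reasoning above: because the gesture has explicit boundaries, the detector never has to guess whether a mid-air hand movement was intentional.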


