First round of user tests done (the one with the more realistic prototypes – see below), but my findings felt somewhat abstract. It did, however, help me generate good insights into which interactions are the common denominator of any public screen UI: (initiation) – in parentheses because it can be passive – navigating, browsing, and selecting.
After a small consultation session with Thomas, I decided it was time to step back and identify “common UIs”, the typical user interfaces you visit on a day-to-day basis, and set up very simple lo-fi interfaces to test with different people. Without all the additional distractions on the screen (like in the first round of user tests), it should be easier to concretize which gestures users favor or fall back on instinctively. To identify these patterns in people’s behavior, I created very simple UIs inspired by apps and websites everybody visits often: I did some quick research, mapped out screenshots based on usage statistics I found on Google, and compared them to spot patterns between genres of applications and their UI conventions. With these common UI patterns identified, I designed UIs with elements you tend to find in everyday interfaces: for example, a generic menu with 9 selectable items, a scrollable vertical menu with smaller boxed modals, horizontal scroll menus, UIs with more than one section of menus, carousel menus, and UIs with a chain of text involved.
The specific patterns of behavior I’m trying to identify fall under four main interactions: 1. Initiation, 2. Browsing, 3. Navigating, and 4. Selecting.
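For bookkeeping during the tests, it helps to have these four interactions as explicit categories to tag each observed gesture against. A trivial sketch of that tagging, where the participant IDs and gesture names are hypothetical placeholders of mine, not actual findings:

```python
# Sketch: the four interaction categories as an enum, used to tag
# gestures observed during a test session. Gesture names below are
# hypothetical placeholders, not findings from the tests.

from enum import Enum

class Interaction(Enum):
    INITIATION = "initiation"   # can be passive (e.g. just stepping in frame)
    BROWSING = "browsing"
    NAVIGATING = "navigating"
    SELECTING = "selecting"

# One observation per gesture seen: (participant, interaction, gesture)
observations = [
    ("P1", Interaction.INITIATION, "wave at screen"),
    ("P1", Interaction.BROWSING, "swipe left in air"),
    ("P2", Interaction.SELECTING, "push palm forward"),
]

for participant, interaction, gesture in observations:
    print(f"{participant} | {interaction.value}: {gesture}")
```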
INITIATION
Initiation, or activation of a screen, is crucial because the computer cannot be constantly turned on; keeping the screen off when nobody is using it saves electricity, for example. Some sort of gesture to activate the screen is therefore desired. To test initiation, I created a blank, black canvas, narrating during the tests that the screen is black because it hasn’t been activated yet: “How would you activate this screen if you are not allowed to use your voice to control it, or use capacitive touch to interact with the screen?”
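Technically, one plausible way to support the passive flavor of initiation is simple motion detection: wake the screen when enough pixels change between camera frames. A minimal sketch of that idea, where the frame format and the threshold value are my own assumptions, not something taken from the tests:

```python
# Minimal wake-on-motion sketch: compare two consecutive grayscale
# frames and activate the screen when the average pixel change exceeds
# a threshold. Frame layout and threshold are illustrative guesses.

WAKE_THRESHOLD = 12.0  # mean absolute pixel difference (0-255 scale)

def mean_abs_diff(prev_frame, next_frame):
    """Average absolute difference between two same-sized grayscale frames."""
    total, count = 0, 0
    for prev_row, next_row in zip(prev_frame, next_frame):
        for p, n in zip(prev_row, next_row):
            total += abs(p - n)
            count += 1
    return total / count if count else 0.0

def should_wake(prev_frame, next_frame, threshold=WAKE_THRESHOLD):
    return mean_abs_diff(prev_frame, next_frame) > threshold

# Toy usage: a hand entering an otherwise static frame.
idle = [[10, 10], [10, 10]]
hand = [[10, 200], [10, 180]]
print(should_wake(idle, idle))  # False: nothing moved
print(should_wake(idle, hand))  # True: large change, wake the screen
```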

BROWSING
For browsing, it was important to see how people would browse through a menu with multiple items. The key gestures I wanted to identify here are how people scroll a menu, especially when menus are oriented differently: for example, will gestures differ between a horizontal and a vertical menu? And do gestures differ when scroll bars or buttons are added? Furthermore, I wanted to identify whether people keep their hands up in frame, the way you don’t let go of a mouse while controlling it, or whether they drop their hands out of the frame when resting.
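To make the orientation question concrete: a gesture-driven menu has to decide which axis of hand movement drives scrolling. A rough sketch of that mapping, where the gain, dead zone, and hand-delta input convention are all assumptions of mine:

```python
# Sketch: map a hand movement delta (dx, dy, in normalized screen
# units) onto a scroll offset, depending on menu orientation.
# The gain and dead zone values are illustrative guesses.

SCROLL_GAIN = 500.0   # pixels of scroll per unit of hand movement
DEAD_ZONE = 0.01      # ignore tiny jitters in hand position

def scroll_delta(dx, dy, orientation):
    """Return the scroll offset in pixels for one frame of hand motion."""
    # A vertical menu listens to vertical motion, a horizontal one
    # to horizontal motion; the other axis is ignored entirely.
    delta = dy if orientation == "vertical" else dx
    if abs(delta) < DEAD_ZONE:
        return 0.0
    return delta * SCROLL_GAIN

print(scroll_delta(0.0, -0.05, "vertical"))    # -25.0: scroll upward
print(scroll_delta(0.0, -0.05, "horizontal"))  # 0.0: wrong axis, ignored
```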
NAVIGATING
In terms of navigation, I mainly wanted to identify how people point towards certain items in a menu. For example, do people use a cursor? Or do they simply “click” directly at the item, like on a capacitive touchscreen, without navigating towards it first? I also wanted to see how people “go back” to the previous page or minimize modals: do they use gestures that signify “back”, or do they click the arrow button that I present in some interfaces and not in others?
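The cursor style of navigation would boil down to projecting the hand position onto the screen and hit-testing it against menu items. A hypothetical sketch, where the coordinate convention, screen size, and item layout are mine:

```python
# Sketch: project a normalized hand position (0-1 on both axes) onto
# screen pixels and hit-test it against menu item rectangles.
# Screen size and item layout are made up for illustration.

SCREEN_W, SCREEN_H = 1920, 1080

# Each item: (name, x, y, width, height) in screen pixels.
MENU_ITEMS = [
    ("item_1", 100, 100, 300, 200),
    ("item_2", 500, 100, 300, 200),
    ("back_arrow", 20, 20, 60, 60),
]

def hand_to_cursor(hand_x, hand_y):
    """Map a normalized hand position to a screen-pixel cursor position."""
    return hand_x * SCREEN_W, hand_y * SCREEN_H

def hit_test(cursor_x, cursor_y):
    """Return the name of the item under the cursor, or None."""
    for name, x, y, w, h in MENU_ITEMS:
        if x <= cursor_x <= x + w and y <= cursor_y <= y + h:
            return name
    return None

cx, cy = hand_to_cursor(0.12, 0.15)
print(hit_test(cx, cy))  # "item_1": the hand hovers over the first item
```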
SELECTING
Finally, in most cases, the menus afford being selected. During the user tests, I ask the user to select a certain item (highlighted in yellow); after seeing how they browse through the interface and navigate towards the requested item, I wanted to see how they would select, or “enter”, the item. Selecting is possible for menu items, back arrow buttons, scroll bars (on drag), and more.
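One common touchless answer to the selection question is dwell time: hovering over an item long enough counts as a click. A small sketch of that idea, where the dwell duration is an assumption, and whether users actually prefer dwell over an explicit gesture is exactly what these tests should reveal:

```python
# Sketch: dwell-based selection. Hovering the cursor over the same
# item for DWELL_SECONDS triggers a select; moving away resets the
# timer. The duration is an illustrative guess, not a tested value.

DWELL_SECONDS = 1.5

class DwellSelector:
    def __init__(self, dwell=DWELL_SECONDS):
        self.dwell = dwell
        self.current_item = None
        self.hover_start = None

    def update(self, hovered_item, now):
        """Feed the currently hovered item and a timestamp (seconds).

        Returns the item's name when the dwell completes, else None.
        """
        if hovered_item != self.current_item:
            # Hover target changed: restart the dwell timer.
            self.current_item = hovered_item
            self.hover_start = now
            return None
        if hovered_item is not None and now - self.hover_start >= self.dwell:
            self.hover_start = now  # avoid re-firing on every frame
            return hovered_item
        return None

selector = DwellSelector()
print(selector.update("item_3", 0.0))  # None: dwell just started
print(selector.update("item_3", 1.6))  # "item_3": dwell completed, select
```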