Prototyping for user research

Hi again! This is an update on what I’ve been up to from the second half of week 40 (I took two days off because of my move to Malmö!) up until now (Wednesday of week 41). I’ve been making prototypes in Figma. The purpose of these prototypes is to aid my user research: I want to investigate the patterns of interactive behavior that the participants in my study fall back on when they try to use these interfaces without being allowed to touch them.

Basically, my idea is to create different interfaces (in different contexts and for different service procedures) and ask interviewees to interact with them accordingly. I will display an interface in front of them, observe how they behave with it, and try to Wizard-of-Oz their behavior by controlling the interface manually to match it. I don’t know exactly how it will be carried out yet, but for now that’s my idea. For example, if a tester decides to use their thumb as an “air cursor”, I’ll move the cursor to follow their thumb, pretending that there’s some sort of AI behind the camera when really it’s just me 😉 Mostly I’m interested in their first instincts: do they raise a hand or a finger? Nod? Point? Wave?

The four contexts I’ve chosen are in-flight infotainment screens, a grocery store self-service screen (waking the screen, then scanning and paying), navigating a ticket-buying booth, and an info kiosk in a mall. From tomorrow (Friday of week 41) onwards, I’ll spend a week testing these interfaces with people and analyzing the results as I go. I can also keep iterating on these interfaces and maybe eventually use them as final prototypes.

Thomas asked how I’ll know when I’m “done” with these prototypes. I’m definitely a finisher, so my usual answer would be: when it’s pretty enough and the functionality works, i.e. when it looks like and mimics the real thing as closely as possible. But for this part, I felt done when I had all the “procedures” in, meaning enough for me to properly observe how people interact in these specific contexts: infotainment, up to reaching the movie-playing screen; ICA, up to the screen confirming payment is complete; the mall info map, once all the buttons offered on the main interface can be “interacted” with (clicking on listed stores, zooming in and out of the map); and finally Skånetrafiken, once you can navigate to checking times and prices for train tickets.

SAS:

ICA:

SKÅNETRAFIKEN:

TRIANGELN:

An update

Long time no journal! It is currently week 41, approximately two weeks since my last post. Around the end of week 39 came the “failure” of my first round of interviews.

TL;DR: the brief provided to me was that hand hygiene has become more important because of the pandemic, and that there’s been demand for touchless screens from clients that manufacture screens for different services. When I interviewed around 17 people, however, only 4 reported noticing a difference in their behavior when interacting with touchscreens: 3 of them noticed only mild changes, and 1 an extreme change.

At this point, I felt kind of stuck. How am I going to make valuable design decisions if they’re not backed up by research? I looked to Thomas for help again, and he proposed discussing the matter with his boss. In the meantime, he recommended not focusing so much on the pandemic as the influence, but on change in general. So for the last few interviews (out of those 17), I rephrased some of the questions, turning the “change in behaviour” questions into more general ones: instead of asking “has it changed since the pandemic broke out?”, I asked “has it changed at all in recent years?”. This prompted richer responses, since it was more open to interpretation. For example, one interviewee mentioned that she is now familiar with the logic behind touchscreens; even though she struggled with these machines a lot in the beginning, she eventually got used to them. This showed that even though touchscreens were once a struggle to use, the interaction eventually became deeply ingrained in users’ minds. If touchscreens took some learning, what about the new interactions we propose?

Thomas came back the next day and proposed a new context that could hopefully inspire my process: the flow of using in-flight infotainment screens. Ever since the pandemic broke out, there have been concerns about being in an enclosed space where the air is circulated and recycled constantly for the duration of the flight. One thing that’s common on long-distance flights is the infotainment screen: the touchscreen attached to the seat in front of you, which you interact with either with your fingers or with a remote (both of which require you to “dirty” your hands). In the wake of the pandemic, introducing gesture control into these screens could be an interesting area to explore.

With a more specific “context” in mind, I felt like I finally had a direction. During the interviews, somebody had actually mentioned these infotainment screens and how they hadn’t dared touch them since the virus broke out. So I added some flight-screen-focused questions to the interviews:

  • Have you had a long flight experience? What do you usually do on flights like these?
  • Have you encountered infotainment screens onboard these long flights?
  • What do you think about them in general?
  • How do you think they can be improved in general?
  • What other services do you wish to be included in the infotainment screens in addition to “information” and “entertainment”?
  • What do you think about the process of “ordering food” on board a long flight?
  • Have you ever shopped duty free on board a long flight? How was it carried out? Do you think that interaction can be improved somehow? Do you think it needs to be improved?
  • Back to the screens: imagine you could board an aircraft for a lengthy flight right now. Do you think your attitude/behavior toward the screens would have changed?
  • Imagine not being able to “touch” the screen. What other interactions can you imagine using instead (e.g. pressing buttons, swiping left or right)? How would you maneuver those actions?

Since then, I’ve only been able to interview four people with some of these questions included. However, Thomas doesn’t want the project to focus solely on this context; it simply acts as a direction, or an example of where there’s been demand for touchless interaction, to help widen the scope a bit. I therefore decided to move on from asking questions in interviews and take a different direction: prototyping!

Update from previous post

As a follow-up to my previous post: I managed to gather two people to interview for my project! Both of them are friends of mine, and they might have had a little idea of what I’m working on prior to the interview, but I think their answers were genuine!

During the interviews I realized that I already had to modify the questions I was asking. For example, the first question I had written down was “what do you think about touchscreens?”, when in reality I first wanted to learn whether the interviewees use touchscreens at all, and under what circumstances they prefer (or don’t prefer) to use them.

Thomas gave me the idea of potentially “painting scenarios” for interviewees before actually starting the interview, for example: “You walk towards the cashier with a basket full of groceries. Do you head towards the queue leading to the cash register, or do you head to the self-checkout machines?”…

Just watching people doesn’t help me much…

Day 1… I haven’t had much success today. I went to a few locations to people-watch: first Centralen in Lund, by the row of ticket vending machines, then the basement of the Emporia mall in Malmö, because there’s a concentration of people, grocery stores, and mall information kiosks all in one place. Today I realized that it was perhaps not a good idea to have had so much faith in observations. The “insights” I ended up generating from watching people were all rather biased. For example, I saw a teenager in a face mask purchasing a ticket at one of the machines at the train station in Lund. Because of my personal bias, I basically saw what I wished to see: when I saw the teen tapping the screen, I interpreted his actions as “meticulous and anxious”, simply because I assumed he would be, since he was wearing a face mask. In reality, he might have been behaving as normally as he did before the pandemic even started. His subconscious behavior can’t be “interpreted” a certain way, and certainly not with my bias in mind. This “insight” could only have been confirmed if the observation had been accompanied by an interview.

What observation has helped me with so far is narrowing down my target group. For example, from observing at the grocery store, I noticed that although the self-checkout machines are open for anyone to use, the elderly still prefer the staffed register, where a person rings up their items and handles the payment. This could be because they find a person handling the payment more reliable than themselves and a machine, and also because many elderly people are still more used to paying with cash instead of card, an option the self-checkout doesn’t offer. At the train station, my assumption was challenged: I had thought only older people would use the ticket machines, since they’re less familiar with smartphones and that technology, but I was surprised to find a lot of teens using them, as well as adults who looked like they had traveled from a different region of Sweden, or maybe even tourists from abroad. It makes sense for these users to rely on the ticket machines: the younger ones might not have a smartphone yet, and the tourists don’t have the Skånetrafiken app on their phones because they don’t live around here long term. These were some interesting insights!

In terms of interviews, it’s been rather difficult to gather the courage to ask people questions. I tried with two people at the train station and one at ICA Emporia, but everyone was in a rush and/or uncomfortable speaking, which is understandable because most people are very intent on keeping their distance from others. Instead of finding people in the field, I think it might be easier to contact people for interviews online, like people from home or people I can find through a Malmö student Facebook group or any other Facebook group (a method we used to gather interviews during service design last semester, when meeting people face to face was not encouraged). Of course I will continue to keep an eye out when I’m at a grocery store self-checkout, but for now I think I will schedule some Zoom calls and see how it goes from there.

w37 Fieldwork planning!

Two weeks of desktop research have been quite tiresome; I’m excited to go out into the world and explore reality! How do people interact with public screens? Are they aware of the hygiene of these screens? Do they care? If they do, what would they prefer? There are so many questions!

For the latter part of the week I mainly did research on conducting successful field studies: how to prepare for observations, whether surveying is needed, how to design qualitative surveys so responses are more meaningful than quantitative ones, and how to ask questions in field studies and interviews. I also made clear to myself what goals I have for my field studies and wrote down the locations where I plan to observe interactions, such as malls, retail stores, and fast food restaurants. Ticket booths at train stations are also a valid location, but I’m afraid people rely more on their phones to buy tickets these days, so the group that uses these ticket kiosks is quite constrained.

Then I jotted down some interview questions that I plan to use during fieldwork, which I anticipate will take about 1-2 weeks. See my questions below (note: the questions might vary from individual to individual, because new questions might pop up in the middle of an interview, or someone may answer two questions at once without knowing):

  • What do you think about touchscreens? Especially recently with the coronavirus pandemic?
  • How do you think the increased importance of hand hygiene has affected your interaction with public screens, if at all?
  • (If a person is concerned about hand hygiene) How would you instead, prefer to interact with public screens?
  • What other alternatives can you think of that could potentially replace touch as the way of interacting with public screens?
  • When else do you interact with public screens/where? -too broad? irrelevant?
  • What was easy or difficult about navigating the interface? -irrelevant?
  • When do you decide to use the touchscreen instead of other alternatives (e.g. snabbkassa instead of the register)? -maybe I already know the answer? most people will probably say efficiency, speed

The ones marked in red are questions I’m still unsure about asking, for reasons like broadness or relevance. From an article on the Nielsen Norman Group website, I learned about avoiding leading questions and asking open-ended rather than closed-ended ones. The author recommended beginning questions with how, what, when, where, which, or who, not with “was” or other forms of “to be” and “to do”, and avoiding “why” questions.

CORONA!

Interviews might be a little scary to carry out because people are encouraged to practice social distancing, so this may be a challenge for me. But observing from afar may still help me gather relevant insights.

TUI Limitations

As an intern at a company that is attempting to revolutionize gesture-controlled interfaces, it’s important to assess the currently most common interface, the touch user interface (TUI), and its limitations, to further back up why gesture-controlled UI is the future.

Touch interfaces are great because they’re intuitive and offer tactile responses. They’re powerful because they afford gestures that people have grown used to and “acclimated” to over the years, such as drag, pinch, double-tap, and zoom. But with the importance of hand hygiene on the rise because of the COVID-19 pandemic, users who are meticulous about hygiene are more hesitant to use public touch interfaces, given the uncertainty over whether they’re hygienic or not. That’s why gesture control is slowly starting to be in demand on the market, much as mobile and touchless ways of paying have already revolutionized payments.

Other disadvantages of TUIs include not being as “accurate” as a traditional mouse and analogue buttons. They also don’t exactly replicate those traditional methods (e.g. touchscreen keyboards have small keys and often lack the tactile response of mechanical keyboards). Tapping the screen also means obscuring the screen itself, which is a problem for applications that display complex, real-time information; this could potentially be avoided with gesture control.

While the TUI has its limitations, it’s also important not to be too ambitious with gesture control. Gesture control comes with development obstacles of its own: if an application requires too many gestures, it may be difficult for developers to write reliable recognition routines for them. If the interaction isn’t as smooth as other kinds of HCI, the user won’t continue using it. And if the program keeps misunderstanding a user’s intentions, for example, the user may quickly get bored or even annoyed.
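As a note to myself for the prototyping phase: one common way to soften that last problem is to only act on a gesture after it has been recognized steadily for a short moment, so a single noisy frame can’t trigger anything. Below is a minimal sketch of that idea in Python; the confidence values and thresholds are made up by me, and it isn’t based on the brief or on any Crunchfish SDK.

import time

class GestureDebouncer:
    """Accept a gesture only after it has been seen continuously for hold_time seconds."""

    def __init__(self, hold_time=0.4, min_confidence=0.8):
        self.hold_time = hold_time            # seconds the gesture must persist before it counts
        self.min_confidence = min_confidence  # per-frame recognizer confidence required
        self._label = None                    # gesture currently being held
        self._since = 0.0                     # when it was first seen
        self._fired = False                   # whether we already acted on this hold

    def update(self, label, confidence, now=None):
        """Feed one recognizer frame; returns the label exactly once per steady hold, else None."""
        now = time.monotonic() if now is None else now
        if label is None or confidence < self.min_confidence:
            self._label, self._fired = None, False                     # gesture dropped out: reset
            return None
        if label != self._label:
            self._label, self._since, self._fired = label, now, False  # new gesture: restart the timer
            return None
        if not self._fired and now - self._since >= self.hold_time:
            self._fired = True                                         # held steadily long enough: act once
            return label
        return None

# Example: only after 0.4 s of a steady "swipe_left" does the action fire, and only once.
debouncer = GestureDebouncer()
for frame in range(20):
    if debouncer.update("swipe_left", 0.9, now=frame * 0.05):
        print("act on swipe_left")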

It’s important to assess all of these points: what TUIs are better or worse at, and what development obstacles gesture control can run into. This helps me stay grounded during the research process.

w37 Update — early thoughts

The week started with some inspiration-seeking research across the web. I visited the Nielsen Norman Group website, like I always do during any interaction design research phase. I did some research on intuitive design (what makes a design intuitive, how to find out during fieldwork/user studies whether a design is intuitive, etc.) as well as on creating better UI designs. I thought these two areas were relevant because making a UI intuitive will be crucial for gesture control. Below is what I found during my research, in combination with some scrambled thoughts…

When touchscreens and capacitive interfaces took the world by storm, everyone was forced to adapt to smartphones as mobile phones with analogue buttons began to phase out. A decade ago, the older generations were still struggling to adapt to newer technologies; nowadays my 85-year-old grandfather is more skilled at using a smartphone than I am, having mastered every possible level of Candy Crush. How did we manage this transition?

In 2007, Apple popularized the touchscreen in consumer electronics with the original iPhone, which featured a touchscreen instead of a physical dialing pad. Though many people to this day believe that touchscreen devices like smartphones and tablets are not strictly essential, a large portion of the population (reportedly 52% of the US population) claim that they cannot live without their smartphones. Why do we like smartphones so much? Because humans have developed almost an emotional attachment to them, and manufacturers and experience designers have managed to understand and enhance that by constantly improving these devices. According to psychologist Larry Rosen, phones become part of people’s extended selves from their late teens or young adulthood onwards, as they offer “a space where the self can be satisfied, play and feel alive.”

Touchscreens revolutionized the tech industry, as the finger was able to replace additional, bulky hardware such as physical keyboards, the mouse, or the stylus. Enlarging the part of a personal device’s screen where information is shown also made for a better user experience: with no dialing pad covering a large chunk of the screen, more information could be shown at once, and user experience designers worked closely with developers to make interfaces intuitive to use. It is this work on making digital interfaces intuitive that contributed so much to the huge impact of touchscreens; the combination of the two dramatically surpassed traditional methods of mobile interaction.

Now I’ll get back on track… The reason I looked into intuitive UI and better design is that I wanted to understand why touchscreens were so successful. A lot of it has to do with the overall experience: both how the interfaces are laid out and how bug-free the underlying functionality is. How can we do the same for gesture control? A big part of it is ensuring successful feedback and feedforward in gesture-controlled interfaces.

Feedback is important because it adds a tactile element to interactions. This is crucial for gesture interactions, because users never actually touch anything and get that “tactile” feel, which is why user experience designers need to enhance that satisfying, delightful feeling of an interface responding to the user’s actions. Feedforward, on the other hand, hints at the visual affordances that exist. This is also extremely important because, in a world where touchscreens rule the tech industry, subtle (even if not direct) hints and instructions need to be provided to guide users through the correct sequence of actions and avoid breakdowns.

For example, the hand-tracking feature in the Oculus Quest enables the use of your hands as input. When it’s activated, you see a visual outline of your two hands once they’re raised high enough. Seeing the outlines of your hands delivers a sense of presence in the VR world, compared to seeing outlines of the controllers that come with the headset. The hands also make for more natural interactions when you can see that they are fully tracked, fingers articulated. In the VR world, the hands can perform object interactions through simple gestures such as pinch, unpinch, and pinch-and-hold.

In the example above, you can see feedback being used in the interface: when your index finger and thumb are pinched together, the cursor floating in front of your hands in 3D shrinks, the small pointer right in front of your index finger changes shape, and all of this turns more opaque than when no pinch is being performed. This is a clever use of feedback, because it’s satisfying to see the actions your hand performs trigger immediate responses. As mentioned above, it nurtures a more natural interaction, and users feel even more immersed in the environment when using their hands. Pinches are used for clicking on windows and buttons that afford clicking, and also (pinch, drag, unpinch) for scrolling on a simple website UI. The whole interaction is extremely intuitive.
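To make this concrete for myself before I prototype anything similar, here is a minimal sketch of how that kind of pinch feedback could be driven, assuming a hypothetical hand tracker that exposes thumb-tip and index-tip positions. The distance thresholds are made up, and this is in no way Oculus’s actual implementation.

import math
from dataclasses import dataclass

@dataclass
class Fingertip:
    x: float
    y: float
    z: float  # metres, in the (hypothetical) tracker's coordinate space

def pinch_feedback(thumb, index, pinch_at=0.02, open_at=0.08):
    """Map the thumb-index distance to cursor feedback values.

    Returns a pinch strength in [0, 1] plus the cursor scale and opacity the UI
    could render: the closer the pinch, the smaller and more opaque the cursor,
    mirroring the behaviour described above.
    """
    d = math.dist((thumb.x, thumb.y, thumb.z), (index.x, index.y, index.z))
    strength = max(0.0, min(1.0, (open_at - d) / (open_at - pinch_at)))
    return {
        "is_pinching": d <= pinch_at,
        "cursor_scale": 1.0 - 0.5 * strength,    # shrink as the pinch closes
        "cursor_opacity": 0.4 + 0.6 * strength,  # grow more opaque when pinched
    }

# A nearly closed pinch (fingertips 1.5 cm apart) gives a small, fully opaque cursor and is_pinching=True.
print(pinch_feedback(Fingertip(0.0, 0.0, 0.3), Fingertip(0.015, 0.0, 0.3)))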

What I found a little more difficult was the “back” action, i.e. navigating backwards out of a window or a folder. You are supposed to flip your palm face up, then pinch your thumb and index finger together for a few seconds. It is only when you flip your palm face up that a feedforward icon (the Oculus logo) shows up. I didn’t know how to do it until somebody beside me showed me; that was the only interaction I had a problem with. To be fair, Oculus Quest users get an introductory slideshow when they first activate hand tracking, but I jumped straight in because my mentor had already set it up for me beforehand.

What was successful? Displaying a live feed of your hands to help you feel their presence in the technology, and little hints here and there that serve as both feedback and feedforward, nudging users towards the interactions that are afforded.

How to create better designs for UI

NN Group has created extensive guidelines for designing better user interfaces. When (potentially) I create prototypes for my proposal, I will follow these guidelines closely. They include ensuring good and clear feedback, avoiding inconsistency, labeling icons (see the problem I had with the Oculus Quest: I didn’t understand the icon that popped up when I had my palm up), avoiding hard-to-acquire targets, avoiding meaningless information, and so on. Public screens will never present as much information as phones, PCs, and tablets do, so there isn’t the issue of overusing modals, junk-drawer menus, and the other problems bad designs suffer from. I think the most important thing is to ensure proper feedback.

Existing touch-less technology

Before proceeding further into desk research, I thought it would be valuable to start by exploring what has been done, or is currently being done, in the area of touchless interaction. As I delved deeper, I found and read up on different innovations that have strived, or still strive, towards perfecting an alternative to traditional capacitive touchscreen interaction. I have sorted them into categories: depth cameras commonly used to pick up gestures, gesture detection SDKs, and solutions that combine both of these technologies.

In this blog post I will quickly introduce and summarize innovations from these three categories.

Leap Motion

Leap Motion is a device whose hardware consists of infrared cameras and whose software builds precise skeletal models of a user’s hands and fingers. It has been used in game design and other applications that can work with Leap Motion’s gesture sensing. The product was revolutionary, but in the end it was still viewed as a conceptual “toy” rather than a practical device that could serve as a controller and replace buttons or touchscreens, due to its failure to track movements precisely: it captures too much, which often leads to misinterpretation in the software, so it is unable to deliver 100% accuracy.

Myo Armband

The Myo armband is a gestural armband controller (duh) that translates arm movements into input. Worn on the forearm, it translates electrical signals from your muscles into computer input. It’s yet another combined hardware/software product that can be used for gesture control, and it has proved useful for people with disabilities, for example. However, the product requires a significant learning period before usage becomes smooth, and as usual, the software isn’t perfect: it can have trouble recognizing some gestures (false positives, etc.).

Intel® RealSense camera

Intel®’s RealSense camera is a combination of hardware and software capabilities: it has a 1080p HD camera, an infrared camera, and an infrared laser projector, which let the device behave like the human eye, sensing depth and tracking human motion. It’s known for being able to create 3D depth maps of its surroundings. Companies like Crunchfish AB have taken the technology under their wing and implemented their gesture control SDKs on top of the hardware.

Touchless A3D®

The Touchless A3D® SDK by Crunchfish (yay!) allows users to control functions on their device with gestures. The SDK can be incorporated into AR and VR, which creates new possibilities for interaction, and it requires no hardware changes since it uses the video stream from any device with an embedded camera.

Azure Kinect

Microsoft’s Azure Kinect consists of a sensor SDK, a body tracking SDK, vision APIs, and a speech service SDK for its hardware product, the Azure Kinect DK. Like other camera and body-tracking SDK combinations, the Kinect can be used to, for example, track products, manage inventory, and potentially power cashier-less stores (something I could focus on!).

Orbbec’s body-tracking SDK

Like Crunchfish’s Touchless A3D SDK, Orbbec’s body-tracking SDK enables computers to use 3D data from cameras to see and understand human bodies. Developers can use the SDK to create intuitive and innovative applications.

Manomotion 3D

Manomotion is another Swedish company, specializing in advanced software for tracking a user’s hand and finger movements with great precision using a mobile camera. The software provides a framework for real-time 3D gestural analysis.

Tapstrap 2 by Tap

Tapstrap 2 is a mobile keyboard that you can carry around and strap onto your hand. It functions as a Bluetooth keyboard and mouse, and also gives you the ability to use air gestures. However, it takes learning to fully master the experience; it’s about as “intuitive” as learning the piano.

gestigon by Valeo

German startup gestigon specialized in developing 3D image processing software for vehicle cabins and was later bought by Valeo. The goal of gestigon is to help people interact with technology in a natural way while in vehicles; innovative solutions like this can enhance personal comfort and safety.

Tobii REX by Tobii

Tobii is a global leader in eye tracking and gaze interaction. Tobii REX is an eye tracker that can detect the presence, attention, and focus of the user. Its sensor technology makes it possible for a computer or another device to know where a person is looking. This allows for unique insights into human behavior and facilitates natural user interfaces in a broad range of devices.

Trying Oculus Quest for the first time

This week I also had the honor (!!) of trying the Oculus Quest. Oculus recently added gesture control software in its newest update, which includes hand/gesture recognition; simple interactions with a UI, like scrolling, tapping, going back a page, and so on, are now possible. Though the use of gesture control is still quite limited here, it shows the first step towards the impact gesture control can have on how we interact with tech. In the VR/AR world, when your hands are raised high enough for the headset to see them, instant feedback kicks in: a live feed of your hands. Since it’s a virtual reality world, your hands wouldn’t be visible at all when raised in older versions of Oculus software. In general the actions were quite intuitive, and where they weren’t, they were easy to learn. For example, I didn’t know how to move up a page, but with Thomas guiding me from the side I understood immediately how to make the gesture, and from then on didn’t need further assistance with the controls. (The problem with gestures… do they take too much learning?)

Mood-boarding

Typically, creating mood boards is a way to collect thoughts, ideas, color schemes, and motifs in one place, which helps any designer define a coherent design concept without losing sight of the bigger picture. However, mood boards can also be used early on in a research phase to collect thoughts for inspiration, even if they don’t necessarily represent or point towards a design concept yet.

Throughout this design process, mainly on my Miro whiteboard, I will be creating mood boards to collect visual material. It ranges from design inspiration taken from other projects to images without much context that I believe could potentially fit into my design concept.

To kick off, I’ve collected images of public touchscreens in Sweden, and also put together a mood board filled with images of touchless interaction, ranging from gesture control and eye tracking to scattered photos from science fiction movies that feature touchless interaction and holograms.

Internship @ Crunchfish Kick-off!

Today marks the beginning of my autumn semester at Crunchfish AB in Malmö, Sweden. I was lucky enough to be granted an internship under the supervision of head UX designer Thomas Rogewiec. The internship course is offered as part of the Malmö University Interaction Design programme curriculum: students who wish to can collaborate with an established company to experience an actual workplace, working on one case with specific goals. This experience is especially valuable because an internship offers insight into things an education has a difficult time providing, and it paves a path for any student who wants a glimpse of how the industry functions.

This blog series aims to help me structure my work: documentation, scrambles of thoughts, research findings, analysis, and final proposals will all live here as I pave my way towards a proposal. You (the reader) can also hopefully follow my work better on this platform, as I write more concisely than I speak ;). With that said, here comes the introduction to my brief.

The brief I will be working on over the span of this semester is to propose interaction patterns for public screens that fit a world where hand hygiene has become increasingly important. The purpose and goal of this work is to investigate what has been done in the industry so far regarding touchless interfaces, and why the technology hasn’t revolutionized the industry yet.

Increased importance of hand hygiene ► increased need for touchless interaction with digital screens in public.

In addition to working on the brief specifically, my experience here at Crunchfish will hopefully give me a closer view of the industry and of how UX designers can contribute. I hope that by participating in this internship I can better understand how my skills can contribute in the real world.

To start off, I created a simple planner to structure my work. The timespan for each step is preliminary, a rough schedule for how I think I will plan my work throughout the semester. The blue blobs represent school deliverables, the yellow blobs represent the tasks I plan to do for my research, and the green blobs to the left represent the goals, which I’ve structured to be completed chronologically. Some were proposed by my supervisor and others by myself (marked in bold), and they are as follows:

Phase 1- Desktop Research

  • Explore what’s currently being done in this area
  • Explore what’s already been done in this area
  • Evaluate what’s lacking in this area, what more needs to be done?

Phase 2- Analysis

  • Evaluate above findings.
  • Conclude (backed up by research) which touchless interactions are suitable for public screens

Phase 3- Ideation

  • Propose a set of interaction patterns that fits into our context of the importance of hygiene when interacting with public screens

Phase 4- Prototyping

  • With the patterns discovered, create some sort of a prototype that can be used for a user study (doesn’t have to be physical)

Phase 5- User-testing, validation of assumptions

  • Perform user testing to validate if proposed interaction would work

Phase 6- Aaaand, Repeat

  • Provide time for potential additional iterations
  • Iteration 2 accompanied by more user studies
  • If there’s an abundance of time, try prototyping something physical

Phase 7- Wrap-up

  • Document the process and findings

In addition to creating a preliminary planner, I’ve set up a Miro digital whiteboard to easily document my thoughts, research, and analysis visually. Alongside that, I set up an Excel sheet for logging my hours at the office, and also this journal 🙂