Final Essay: Movement in Design

Introduction

We use our physical bodies in our everyday lives to maneuver, navigate, and communicate (Hansen & Morrison 2014). Many interactive systems, analogue or digital, involve bodily movement as a form of embodied interaction. Dourish argued that embodied interaction hinges on the relationship between action and meaning (Dourish 2001), a relationship that is crucial for any form of design. Human cognition is "performed" through the moving body; it is embodied by nature. We can therefore argue that embodied interaction in the study of interaction design is often movement-based. All human actions, including cognition, are embodied actions (Loke et al. 2007), which is what makes movement essential to the recent trend of movement design in interaction design. According to Loke and Robertson's 2010 study of dancers, as movement-based interactive technologies continue to become more embedded in our daily lives, the qualities of our interactions with these technologies become sought-after (Loke & Robertson, 2010, p. 39).

Drawing on pre-existing research on movement design as well as my personal experience with movement design in practice, this argumentative essay proposes how movement can be articulated for the purposes of design.

Defining Interaction

A major part of interaction design revolves around designing human interaction with digital products, in addition to studying how people interact with analogue products. When we look at the word interaction in and of itself, we can argue that it is associated with a situation or context in which two or more objects or people communicate with each other or somehow "react" to one another (Cambridge Dictionary Online, 2008). Objects can interact with other objects, and humans with other humans, but in the essence of interaction design we focus on the interaction between people and objects.

Let's consider the word "react". A "reaction" is a behaviour that can arguably be a form of interaction; reacting or "behaving" a certain way can be perceived as a movement rather than merely a spark of thought in the head. What is targeted here is embodied movement. Researchers in interaction design look at experiences felt and performed by the users they are designing for (Interaction Design Glossary), and a big part of experience involves the interaction between the user and a product. At the moment of usage, some sort of movement is almost certainly involved. Pulling down a projection screen involves extending the arm to reach the string and then tugging and contracting the elbow to lock the screen in place; entering an unmonitored 24/7 gym involves swiping your card and standing still for the motion detectors to sense your presence and confirm that you are not cheating the system by bringing in an extra person. All of this can be studied as a performance if we see the way we display ourselves as an act (Goffman 1959; Hansen & Morrison 2014). We adjust our posture, our body language, and the scope with which we move our limbs to handle our weight and balance according to the context we are involved in and what we are hoping to express (Hansen & Morrison 2014).

The Technology of Identifying and Defining Movement

Movement-based interaction in design with technology is an emerging area that demands an improved focus on the human body and its capacity for movement (Loke et al. 2005). An abundance of researchers in design have contributed their perspectives on the relationships between embodied actions and technology design (Loke et al. 2007). People interact and move with, sometimes unwittingly, an increasing amount of technology, whether technology that involves directness and focus, such as a computer, or omnipresent technology that one inadvertently moves through, such as Wi-Fi. This technology often influences how we move, or "react". To study performance and movement, researchers have developed many ways of exploring this emerging field (Loke & Robertson 2010). For example, Loke et al. used an established movement notation, Labanotation, as a design tool for movement-based interaction with the human body as direct input, analysing two different EyeToy interactive games (Loke et al. 2005). Loke and Robertson later used trained dancers' moving bodies as input into sensor technology to find ways of analysing movement and to derive useful outcomes for movement-based design and interaction design more broadly (Loke & Robertson 2010).

It is evident that movement design has been a crucial part of research in interaction design, and the pre-existing research using some sort of movement-based input that influences an output provides invaluable groundwork for the design of movement-based interaction. Loke and Robertson argue that this form of research can suggest "possible ways of describing, representing and experiencing movement for use in the design of video-based, motion-sensing technologies" (Loke & Robertson, 2010, p. 39).

Tracking and influencing movement is becoming increasingly important as movement design is investigated on a grander scale within interaction design. Designers today have access to movement data through sensors such as the accelerometers and gyroscopes in our iPhones, or other gadgets such as the Kinect. The truth, however, is that very few resources exist in interaction design to "meaningfully engage with full-body movement data" (Hansen & Morrison 2014) compared to the resources available for other areas of interaction design, as movement is often abstract, nuanced, and arguably inconsistent. That is why the tracking and influencing of movement reveals its importance in this specific study.

As students in the field of interaction design, we were granted the opportunity to dig further into finding ways to better articulate movement for the benefit of design by examining our everyday practice of movement. When discussing movement, we are not limiting ourselves to subtle movements such as swiping screens and pressing buttons, but exploring the entire body, hopefully finding potential for innovation in movement that can contribute to the exploration and creation of novel embodied interactions.

Movement design in Practice 

For Module III of the interactivity course, students were given the opportunity to explore machine learning code using movement as an input. Having worked with pre-trained models for visual recognition in an earlier module, this round involved machine learning models trained on movement data registered from the movement sensors in our smartphones. The goals were to gather knowledge of what machine learning is, to understand its practicality and usability as a design material, and to explore its usefulness in movement design, in the general scope of IxD, and in the broader context of AI. Ultimately, the aim was to explore new gestures for movement that can potentially take hold in the field of interaction design.

Getting inspired by Loke & Robertson's (2010) notation for describing movement

Though the project involved handling machine learning models and technology, the overall aim was to design interactive systems from explorations of movement, as opposed to targeting the technologies and using them as a starting point (Loke & Robertson 2010).

We selected our specific gesture, the "YES!" gesture: a swiping motion of an extended arm completed by a curling motion formed by the contraction of the elbow (see Figure 1), first singled out as an interesting gesture drawn from the action of pitching a baseball. We engaged in first-hand exploration of the movement, digitally recorded the gesture as performed in different contexts for transcription purposes, and later performed an in-depth analysis of it, inspired by Loke & Robertson's notation for describing experiential qualities, to fully grasp our selected movement (Loke & Robertson 2010).

Figure 1. “YES!” gesture in motion as illustrated by stick-figures.

Using videos as references, and cropping frames from our videos to have static images to refer to and ease the process of jotting notes, we examined the process of pitching and, in a later iteration, the "YES!" gesture. We analysed our videos and static images from two distinct perspectives: an experiential perspective, produced by first-person impressions of the gesture (the felt experience and bodily awareness), and an external, observational perspective that produced visual movement sequences (see Figure 2); in other words, the mover and the observer perspectives (Loke & Robertson 2010). This form of analysis enabled us to generate a list of descriptions of the twisting body that we would not have come up with prior to this session of analysis. For example, "hiding" and "shielding" are descriptions we discovered to be synonymous with the "YES!" swipe, which was interesting because the "YES!" gesture is used to express extreme contentment and has obviously positive connotations. We believe this would not have been explored had we not sat down and discussed the "felt" or "experiential" quality in detail after performing the action ourselves.

Figure 2. Table of analysis for the dynamic unfolding of pitching.

Challenges in movement— Vitality 

The vitality, or liveliness and spirit, in our interaction with technology is a sought-after quality (Loke & Robertson 2010). However, it is one of the main challenges for movement design technologically. The computer has trouble sensing exuberance, or almost any other emotion, since all it sees are the specific data points and coordinates it has been trained to recognize. As humans interacting with other humans, we naturally experience people in terms of vitality; we can often read their emotions from the subtle nuances and authenticity in their body language. The computer, however, struggles to classify this, because movements are never "consistent": "there is nothing rock solid in movement" (Hansen & Morrison 2014). People perform the same movement differently, with different expressions, dynamics, and nuances.

We noticed this issue during our process, as the machine learning code was rather inconsistent and erratic when it came to predicting the right gestures. Though my partner and I performed the gesture in what we thought was the same way, the computer would predict the right gesture only when I performed it, since I was the one who had recorded the training gestures initially. Even in the first module it was obvious that machine learning may not be the most accurate at processing movement data. That being said, what became more crucial in this Module III project was how we described the experiential quality of our movement, which suggests that machines may ultimately not be the best at articulating movement, while language can do a significantly better job.

Conclusion: How Movement can be Articulated for the Purpose of Design

Interaction design, described as movement-based (Loke & Robertson 2010; Loke et al. 2007; Loke et al. 2005), raises new questions regarding the consequences and influence the moving body can have in HCI. I argue that movement plays an essential role in design because movement is often incorporated as an aspect of behaviour, and behaviour is closely investigated as part of user-experience design; paying attention to this role of embodiment articulates the association between action and meaning (Dourish 2001).

In design, and perhaps in any field involving computing, movement is often, and arguably best, articulated through data. Data is important and relevant because it allows for informed decisions. It allows computer models to better understand movements and select an appropriate response. Movement data is unique in that it captures both arithmetic quantities and qualities of bodily awareness. In design, data is widely used to relieve humans of the effort and precision otherwise demanded of them. Thanks to computer vision, artificial intelligence, and fast-growing smart technologies, human error is greatly reduced. Computers are good at seeing what we cannot, and quick at processing an abundance of information in the blink of an eye. They take on tasks that some humans cannot, or mitigate those tasks for us.

Throughout this module, however, we have learned that data is not necessarily the only way of articulating movement, and there are definite downsides to using only data to articulate movement, as shown by how our machine learning predictions suffered many miscalculations due to the model's inability to recognize subtle nuances. Yes, the computer may be good at categorizing many data points and predicting "what gesture" it is, but it cannot (at least not currently) accurately predict what meaning is insinuated behind each gesture. That can only be "felt" and "experienced" by the person doing it first-hand; even the observer can make mistakes by assuming based on individual bias. When we see a person doing the "YES!" swipe, we do not know if that person is expressing joy because they won the lottery, or holding their arm close to their core to hide something from view. If the observer has a hard time telling, the machine certainly cannot be 100% accurate in predicting it either.

Addressing the articulation of movement, the most ideal form would be a combination of all three perspectives, as Loke and Robertson advocate (Loke & Robertson 2010): the mover (the felt quality, the first-person experience), the observer (the outsider observing the visual movement and its sequences), and the machine (the computer, machine learning models, artificial intelligence, and so on). Articulating a gesture or movement from only one perspective may result in inaccuracies. The computer cannot recognize the subtlest nuances or sense vitality, but the human may also miss specific nuances by not being able to see the movement as a whole and quickly calculate a big picture or system in their head.

Word-count:

2,299

References:

Dourish, P. (2001). Where the action is: The foundations of embodied interaction. Cambridge, MA: The MIT Press.

Hansen, L. A., & Morrison, A. (2014). Materializing movement: Designing for movement-based digital interaction. International Journal of Design, 8(1). Retrieved from http://www.ijdesign.org/index.php/IJDesign/article/view/1245/614

Loke, L., Larssen, A. T., & Robertson, T. (2005). Labanotation for design of movement-based interaction. In Proceedings of the 2nd Australasian Conference on Interactive Entertainment (pp. 113-120).

Loke, L., Larssen, A. T., Robertson, T., & Edwards, J. (2007). Understanding movement for interaction design: Frameworks and approaches. Personal and Ubiquitous Computing, 11, 691-701. doi:10.1007/s00779-006-0132-1

Loke, L., & Robertson, T. (2010). Studies of dancers: Moving from experience to interaction design. International Journal of Design, 4(2), 39-54.

Background Referencing:

Interaction Design Glossary at http://www.interaction-design.org

Cambridge University Press. (2008). Cambridge Dictionary Online.

MIII Show n’ Tell + Reflection

Show n’ Tell thoughts…

Overall, I think we did pretty well by structuring our presentation around a PowerPoint as a guide. It was easier to go through our process in the order of what we had done and our thought processes as we underwent multiple iterations. Additional media such as GIFs were also used as visual aids to showcase our process of analysis and make really clear to the audience what gesture we chose and how we got there.

Post- Show n’ Tell…

After the presentation we received some valuable advice from the teachers. As a general statement, he recommended that students see this module as crucial for learning how to navigate uncertain territory. Using the phone as a limitation adds challenge, because phone safety becomes something we need to be considerate of; for example, we don't necessarily need to hold onto it, so we can open up and explore bodily attachments as well and figure out ways to not press the phone screen at all times. Lastly, he said something I feel would have been valuable for the whole process had I heard it from the beginning: machine learning helps you articulate movement. Of course, machine learning is never precise and can often have difficulty noticing and detecting nuances, but articulation is a nice way to put it, because the computer simply presents what it has been "trained" to say.

We were complimented for being articulate about our selected movement. Additionally, we were told that we did a good job taking a systematic approach to the whole module, though Clint commented on my pitch/presentation that I didn't necessarily need to present in a temporal manner and could have gone straight to the point, jumping straight to the gesture we found interesting rather than the process we took to get there. That's something I'll keep in mind in future presentations. Also, as I mentioned in the previous entry, they said they were disappointed that we stuck with the same motion, just performed by two people at the same time, instead of taking it to a new level and creating a new movement. Again, I didn't really grasp that we were supposed to reinvent our gesture and create something new that could be crucial to movement design and machine learning; I only came to this conclusion in hindsight, and it wasn't really intuitive at all. Now that I know this, I do agree that we could have taken it much further, as we basically spent only one sitting on our social interaction phase and could have discovered more if we had spent more time ideating.

Amping it up– Social Interactions (MIII)

As we wrapped up the first two weeks of this module and settled on one motion, it was time to be adventurous and find the potential for a novel interaction or direction involving our "YES!" motion.

The final version of the code presented to us (gestures-ml 1.06) supports multiple devices for training, and the predictions are said to be significantly better. Since we are two people with one mobile device each, we experimented with two phones. We first thought about how we had interpreted the individual "YES!" motion as also being a gesture of hiding and moving out of the way, and ideated about new potentials for interesting interactive gestures. That didn't go far, because we quickly found out that when two people do this gesture at the same time we are no longer "hiding"; instead, the action resembles "fighting over something" and being "proud" when performed together.

We first talked about actions and activities that require two people and are performed simultaneously by both (while facing each other) and that involve our "YES!" arm-curling motion. We talked about sawing, as it requires two people to work in tandem, pushing and pulling the saw from both ends without one overpowering the other.

Sawing involves two people curling their arms independently (depending on which side the saw moves toward), but the arms move in opposing directions.

We then thought about other activities that demand the same type of movement and watched tug of war videos. Tug of war is a contest in which two teams pull at opposite ends of a rope until one drags the other over a central line (Oxford Dictionary). In this activity, when both sides pull with the same force, a kind of equilibrium is achieved and the teams end up in a stand-still; however, when one overpowers the other, or when one team loses strength and forfeits, the other team falls back and the pulling side wins. We were quickly drawn to this concept of equilibrium, so we explored the movement a little, using the long rubber bands often used in workout circuits.

We documented this movement and then discussed the felt qualities as in our first iteration, but unfortunately didn't push any further with it (which we later discovered during show-and-tell was essentially the "goal" of this module: to explore and re-design new movements that can contribute to interaction design). Instead, we tested the machine learning technology on this movement with the newly published code that can collect data from two devices and predict them both synchronously and, supposedly, more accurately.

Felt Quality

  • Tension
  • Resistance
  • Bracing oneself, digging into the ground to prevent loss in balance
  • Involving whole body
  • Fighting for something
  • Different from one person protection, proudly claiming what’s yours
  • Actually grasping
  • Synchronization & Team effort
  • Pulling from side to side 

From pulling to throwing: as you can see, as opposed to the felt qualities we discussed for the individual "YES!" movement, the felt quality here doesn't involve hiding so much as facing each other, similar to direct confrontation. It relates more closely to proudly claiming what's yours and fighting for it, rather than the previous conception of shielding and shying away from our earlier gesture. We thought this was an interesting discovery, as we wouldn't have recognized these qualities without physically performing them from the mover perspective and analyzing them in depth. It's thought-provoking how the exact same gesture performed by two people can evoke such a different feeling.

Machine Learning

Together, we tried four different gestures for recording, each performed simultaneously by both of us. We pressed down to record our individual halves of the gesture at the same time and released our thumbs at the same time. As in all our other iterations with the machine learning code, we recorded each combined gesture thirty times and then tested whether the code could predict the right gesture and accurately differentiate the four. We experimented with the rubber band to feel the tension and resistance, but omitted it during recording. (A sketch of how the two phones' recordings might be combined follows the list below.)

The four combined gestures consisted of:

  1. A sawing motion: one person's arm is pulled in as in the "YES!" while the other person's arm is extended, with the motion going towards the right. We called this sawRight.
  2. The same as above but in the opposite direction: the other person does the "YES!" and vice versa. We named it sawLeft.
  3. Pulling apart from each other: both perform the "YES!" simultaneously, the motion of stretching a rubber band (see below). We call it stretch.
  4. Releasing the band in a dramatic way: both extend an arm towards the other, almost like a high five, going from "YES!" to a high five. We named it push.
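As for how the two simultaneous recordings might be merged into a single training example for a joint gesture like sawRight, we never looked inside the provided code, so the sketch below is only a guess; the Recording type and the combinePair helper are hypothetical names of our own.

```typescript
// Hypothetical sketch (not the actual gestures-ml 1.06 code): merge the two
// phones' feature vectors into one training example for a joint gesture.

type Recording = { device: string; features: number[] };

function combinePair(
  phoneA: Recording,
  phoneB: Recording,
  label: string
): { label: string; features: number[] } {
  // Concatenating the two feature vectors lets the classifier "see" both
  // bodies at once: the joint gesture becomes one point in a doubled space.
  return { label, features: [...phoneA.features, ...phoneB.features] };
}

// e.g. both of us performing our halves of "stretch" at the same time:
// const example = combinePair(myPhoneRecording, felixPhoneRecording, "stretch");
```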

Wrapping Up

This was essentially our wrap-up for this module. We found an activity, identified an interesting movement or gesture within that general activity, analyzed it thoroughly, discussed felt qualities, tinkered with the machine learning code, and wrapped up by finally giving social interactions a try with the final version of the model code.

We didn't push as far as inventing a novel social gesture, but overall we carried out our iterations strategically and in a structured manner, which really helped us conduct a solid wrap-up. To sum it up, AI as a field of technology is steadily growing, and it's greatly appreciated to have the opportunity to tinker with pre-made machine learning code. In addition, kinaesthetics and the art of movement are essential to movement design, and movement design is a fundamental study within interaction design. So ultimately I think it was a valuable experience to be given this opportunity to combine these two concepts into one, and perhaps it helped at least some of us discover new opportunities and spark new ideas for the future (perhaps for thesis research).

Analyzing the “YES!” motion (MIII)

The “YES!” motion was introduced in the last entry where I discussed how we discovered an interesting aspect from the mechanics of pitching.

The "YES!" motion consists of a swiping or swinging motion that starts with the hand at a distance from the core of the body and ends by pulling it in, contracting the elbow close to the body.

This motion is prevalent in pitching, especially when educating young and amateur pitchers. It's important because when the arm collapses inward towards the body it helps the swinging arm avoid obstacles during the twist, and it keeps the balance at the center of the body to prevent a breakdown.

In addition to how important the motion is, we agreed that, as an assisting movement, it's often overlooked: when watching a pitcher in a baseball game, you don't really notice what's going on with their non-dominant arm, as the focus tends to be drawn to the baseball itself. So it's nice to find out that a seemingly insignificant motion has such significant value.

Looking at our film documentation of ourselves pitching, we again did the same frame-by-frame breakdown and jotted down notes supporting the action.

Again, this was helpful for focusing on the target area of our selected movement, and it's helpful to have this kind of reference to go back to whenever we need some sort of visual aid.

Drawn diagram of the "YES!" motion.

Following these analyses we proceeded to jot down actions we believe resemble or consist of the "YES!" motion, including:

  • Apple picking
  • Swiping a BIG screen
  • Bicep curls / pull-ups (many workout routines involving the arm)
  • “Come here!”— motion of inviting someone over
  • Lifting something from the ground
  • Rowing

Machine Learning

It was finally time to test with the machine learning technology!

We organized ourselves by collecting four sets of data, all “YES!” motions but from different directions:

  1. Swing from right side to center then “YES!”
  2. Swing from left side to center then “YES!”
  3. Swing from bottom to center then “YES!” (curling motion)
  4. Swing from top to center then “YES!” (apple picking motion)

Visual reference:

To test the technology, we recorded each gesture 30 times this time around; as discussed before, we wanted to train on more data to hopefully enhance the accuracy. Then, with improved code provided by the teachers (after a series of panic and anxiety over errors), we put the data to the test. We noticed again that the technology is able to distinguish the swings from the left and right sides: when the movement travels across the x-axis, the technology can easily predict correctly, but when the x-axis remains unchanged and the coordinates only change on the y-axis, the prediction becomes iffy and inaccurate. When we tried again, making the curling and apple-picking motions more dramatic and distinct from each other, the results were more or less completely accurate. We didn't delve much further into this after the second iteration; however, it's at least revealing to know that the technology works and the differentiation is somewhat successful.
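To make the pattern we saw more concrete, here is a small illustration of the kind of check one could run on the recorded data. The AxisSummary shape and all the numbers are invented for illustration, not taken from our actual recordings or from the provided code.

```typescript
// Speculative illustration of why left/right swings separate easily while the
// curl and apple-picking motions blur together: compare how far apart two
// gesture summaries sit on each accelerometer axis. All values are made up.

type AxisSummary = { meanX: number; meanY: number; meanZ: number };

function axisDifference(a: AxisSummary, b: AxisSummary) {
  return {
    x: Math.abs(a.meanX - b.meanX),
    y: Math.abs(a.meanY - b.meanY),
    z: Math.abs(a.meanZ - b.meanZ),
  };
}

const swingRight: AxisSummary = { meanX: 4.1, meanY: 1.0, meanZ: 0.4 };
const swingLeft: AxisSummary = { meanX: -3.8, meanY: 1.1, meanZ: 0.5 };
const curl: AxisSummary = { meanX: 0.2, meanY: 2.9, meanZ: 0.6 };
const applePick: AxisSummary = { meanX: 0.3, meanY: 3.4, meanZ: 0.5 };

console.log(axisDifference(swingRight, swingLeft)); // big gap on x: easy to tell apart
console.log(axisDifference(curl, applePick)); // small gap, only on y: predictions get iffy
```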

Felt Quality

After the analysis and after testing the technology on our "YES!" motion, we proceeded to discuss the psychological aspects of our movement. We had been paying a lot of attention to the motor aspects of our movement and had somewhat neglected the other, almost more important, aspect of this module: how it feels to perform the action.

The felt quality of a movement refers to the sensation or feeling in the body (Loke & Robertson 2010). Following the felt quality model from Loke & Robertson's paper analyzing dancers and falling, we wrote down a list of what we felt when we performed the movement first-hand. It consists of:

  • Expressing extreme contentment
  • Becoming smaller
  • Hiding / Shielding / Crouching
  • Swinging
  • Regaining balance
  • Staying out of the way
  • Dragging self towards object
  • Keeping to oneself
  • Grasping (but not actually grasping anything)
  • Sense of completion
  • Beginning of a run, but actually staying still
  • Pulling in
  • “I want this, but I can’t have it”
  • Protecting, “my precious”
  • Shutting down

This analysis was significant as it revealed a diverse range of understandings of the process and experience of the "YES!" motion. It showed that, on the surface, the motion may appear only to express contentment. But when you simply look at the body language and set aside the social aspects and your preconceptions about the action itself, you may find yourself viewing the motion in a completely different way, and a completely different attitude may be conveyed, such as hiding and shielding as opposed to cheering at being first in a race. They're so different, yet a similar body language can be seen!

Breaking down Drew Storen’s pitching mechanics (MIII)

Reference video of analyzing the movement of pitching— Drew Storen

We kick-started today by watching Drew Storen's pitching video through and through. Then I had the idea of screenshotting the video frame by frame to analyze the movement and placement of his joints and limbs, tracking the movement second by second. In addition to being an overall easier way to analyze the movement, it's also a nice reference to go back to whenever we run into obstacles and need to recalibrate.

The first draft of our frame analysis simply consisted of drawing dots on all the important joints involved in this fully embodied movement, then connecting them with lines to represent the limbs. This was just a nice reference and visual aid for seeing what's going on with the limbs. We then cropped these frames again and made a little stop-motion GIF to see the movement in motion. We thought this was important for noticing the subtle nuances in the movement; by drawing these dots we were paying attention only to the joints and not to whatever else was going on in the frames.

When analyzing, we also used this video as inspiration for more technical terms, such as what it entails when weight is shifted from leg to leg. The video gives a more informative description of the scientific aspects of Drew Storen's pitching mechanics. We used its text pop-ups as notes to jot down on our frame analysis.

Below is the second draft of our breakdown of Drew Storen's pitching mechanics, inspired by how Loke & Robertson (2010) analyzed the falling motion in their paper "Studies of Dancers".

We went from frame to frame, writing down what's going on, what techniques he's using, what each change in movement represented, and so on. The red arrows represent major shifts in movement, such as the leg placement and the body twist that's essential for baseball pitching.

A breakdown of Drew Storen's pitching mechanics.

Taking inspiration from Loke & Robertson's Studies of Dancers: Moving from Experience to Interaction Design, we analyzed from an observer's perspective how we think the motion of pitching (based on Drew Storen's video) unfolds. We made a table to analyze the dynamic unfolding of the bodily movement of pitching. Loke & Robertson, however, wrote about the motion of falling from a first-person, or kinaesthetic, perspective.

Loke & Robertson (2010)

This is our table analyzing throwing:

I thought analyzing it this way and breaking the movement down into three sections or increments was helpful for articulating what we were seeing on the screen. It's like using a new vocabulary for describing movement.

Following this analysis we went out again to document ourselves performing the action of throwing / pitching. Of course, we may not have the best and most accurate form, but we tried to mimic Drew Storen’s pitching technique.

We finally discovered something we had an interest in. From now on we will refer to it as the "YES!" motion or the "YES!" swipe. The "YES!" motion is often introduced by coaches to young pitchers to teach them how to engage their bodies during the pitch instead of just letting the arm hang low when the ball is tossed. We first learned about the importance and existence of this so-called "YES!" motion from the video above.

It's referred to as the "YES!" motion because this specific gesture is commonly used as a way of expressing extreme contentment.


See the next entry for a more detailed analysis of the “YES!” motion.

I personally think systematically breaking down the action of pitching helped us gather our thoughts onto one piece of paper. Prior to doing this we were still feeling quite wishy-washy about choosing the movement of pitching, but after this analysis we felt significantly more certain. I've never gone so in-depth with a movement before, so breaking it down frame by frame really gave the feeling of experiencing the motion first-hand. In addition, the visual aid guided us into learning the right form and performing it ourselves correctly, so as to feel the action from the mover's perspective.

A little research + Coaching 21 October (MIII)

Reading

Prior to coaching we had done a little more research about sports in the field of HCI as well as about the baseball pitch itself.

In an excerpt from Mueller, F. and Agamanolis, S.'s contribution to Design for Sport (2011), compiled by Roibás, A. C. and Stamatakis, E., the authors claim that the field of sports can contribute to the field of interaction design because movement design is a big aspect of interaction design. Most sports contain lots of movement, as movement is an elementary component of taking part in sport. Because movement is so rich in sport, it can contribute to the trend of movement design in interaction design.

In the fields of sports and broadcasting we can already see prevalent pre-existing computer applications in use, such as:

  • Analysis software for performance enhancement used in team sports and individual sports to help coaches better articulate what can be improved on for upcoming games and races
  • Finding sports partners via social media
  • Mobile application for tracking exercise progress

The disadvantage of these systems is that they rely on button presses; it's therefore important to facilitate a new outlook on supporting sports, such as introducing new types of interactive systems to enhance sports activities. And machine learning predictions could perhaps be used more prevalently!

Nylander et al.'s 2015 article HCI and Sports also urges a focus on novel viewpoints on interaction design in sports. One interesting takeaway I got from reading this article was that sociality is a big part of sport (with the interaction between athletes, coaches, and the audience), and technology provides a means of enriching these social aspects.

In addition to all this, we asked some questions regarding the outcomes we received from testing the machine learning technology, such as why bowling* was confused with cowboy. We noticed that when two or more gestures share the same point on the x-axis (the same vertical position) and the nuances are too subtle, the system has trouble differentiating them.

*bowling was also a motion we tested for pitching as a self-invented pitch (think of the Wii Sports bowling movement), and cowboy I have introduced before

So we started to question which coordinates the computer/phone was using. Felix tested only bowling and cowboy, and the computer was able to differentiate these when the data contained only those two gestures, so the technology probably got confused by the abundance of data and added moves. After this we reflected a bit on how we could improve the training if we had more time to go in-depth, and the answer is to adjust the movements a little more, making the nuances a little greater.
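If we had had more time, one way to see exactly which gestures blur together would have been to tally a confusion matrix over held-out recordings. The sketch below is our own; the predict callback stands in for whatever prediction step the provided code actually exposes.

```typescript
// Sketch of tallying a confusion matrix: rows are the true gesture labels,
// columns are what the model predicted. The predict callback is assumed to
// wrap the provided code's prediction step.

type Labelled = { label: string; features: number[] };

function confusionMatrix(
  testExamples: Labelled[],
  predict: (features: number[]) => string
): Map<string, Map<string, number>> {
  const matrix = new Map<string, Map<string, number>>();
  for (const example of testExamples) {
    const predicted = predict(example.features);
    const row = matrix.get(example.label) ?? new Map<string, number>();
    row.set(predicted, (row.get(predicted) ?? 0) + 1);
    matrix.set(example.label, row);
  }
  return matrix;
}

// A high count in row "bowling", column "cowboy" would show exactly where
// the added gestures start to bleed into one another.
```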

Finally, I read a little more about the mechanics of baseball. Momentum is important in a pitch: you throw your arm as far back as possible, transfer weight to the back foot, and then shift your weight forward, propelling the ball away. Balance is also important when it comes to pitching, because the legs and body present a firm base, and the body should stay behind the line of the ball.

Coaching

During the coaching session with Jens we introduced our interest in baseball and described how we've grappled with the machine learning technology. He said the module isn't just about recognizing gestures and testing whether the technology can predict gestures with 100% accuracy. It's about finding an interesting gesture to explore, and answering why it's interesting and why it's valid to explore. He recommended that we document movement on video and analyze movements from a kinaesthetic perspective in addition to an observer perspective (which can be done by analyzing the videos). Lastly, he reminded us that we don't have to be this specific about baseball pitches and throwing just yet, and that we can potentially diverge a little and experiment with richer movements.

After this coaching session we looked away from throwing and side-tracked a little, talking about the motion of kicking. I will keep this short because this change in direction was not relevant to the overall scope of our project. We thought kicking would be interesting because it is similar to throwing (propelling a ball or object in a direction) but, in our opinion, a little "richer": the orientation of the foot needs to be adjusted to the right angle to propel a nicely arched ball. We also discussed how this specific motion of kicking is prevalent in other daily motions, like closing doors and expressing anger.

We took a few videos just to see if we could find anything interesting, but sadly didn't. We had coaching the next day as well to discuss whether there was potential in changing direction, and that's when Clint recommended that we not completely scrap the idea of throwing just yet, as we could encounter the same issues with kicking. Instead, we could look deeper into the different types of throwing and unpack the action. When we look deeper into it, we have to find an interesting movement to analyze and use, and maybe some research can be done.

We quickly jumped back to the action of throwing after this, sticking with our gut feeling and initial idea. What I think we should do now is really sit down and break down the visuals of the action of throwing, drawing out the joint and limb displacement and seeing if we can find anything particularly interesting. Felix found a slow-motion video of a professional pitcher pitching a ball; the video is informative because the background is dark and we can really see everything in action in slow motion. Next time we meet, we'll be analyzing this video.

Ideation Friday + Baseball!

After a series of ideation sessions, starting from listing a couple of specific activities consisting of rich gestures and concrete movements, we started leaning towards sports, ball sports in particular. The most gestural ball sport you can think of is undoubtedly baseball: from the third-base coach observing the overall scene and letting the catcher know, to the catcher signaling to the pitcher what type of ball he should pitch, to the umpire doing umpire things, and so on. Not to mention that sports spectators often have endless amounts of gestures to cheer on their favorite team.

When speaking of baseball, what stood out immediately was of course the pitching and throwing movement. When pitching a baseball, in addition to simply throwing the ball towards home plate, a lot of thought has to be put into each pitch in order to support throwing a variety of pitches. Depending on your hand position, wrist position, and the angle of your arm, each ball will have a slightly different velocity, trajectory, and overall movement. Pitching the right ball at the right moment matters because it confuses the batter in various ways and gets batters and baserunners out, which contributes to the overall success of the game.


Following this quick ideation we immediately moved on to exploring and tinkering with the machine learning code. We first recorded gestures in the terminal for a normal fastball, curveball, screwball, and throws we invented, such as cowboy (elbow up, rotating your arm twice before throwing the ball) and frisbee (throwing the ball the way you throw a frisbee). We recorded each gesture 20 times, trained on them, then ran predictions to see if the machine learning could differentiate these different pitches. On my end I tried the screwball (inverted curveball), the frisbee, and the cowboy, and because these gestures are vastly different, the machine learning was able to predict these motions fairly well.

Machine learning prediction
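We never dug into how the provided code actually classifies, so the following is only a guess at the general train-then-predict idea: summarise each recording into a small feature vector, average the 20 examples per pitch into a centroid, and label a new recording by its nearest centroid. Every name here (summarise, centroid, predictPitch) is hypothetical and not from the gestures-ml code.

```typescript
// Rough sketch of a nearest-centroid gesture classifier. This is NOT the
// code we were given, just one plausible way the idea could work.

type Features = number[];

// Assumed feature summary: mean and variance of each accelerometer axis.
function summarise(samples: { ax: number; ay: number; az: number }[]): Features {
  const axes = ["ax", "ay", "az"] as const;
  const feats: number[] = [];
  for (const axis of axes) {
    const values = samples.map((s) => s[axis]);
    const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
    const variance = values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
    feats.push(mean, variance);
  }
  return feats;
}

// "Training": average the ~20 recorded feature vectors for one pitch label.
function centroid(examples: Features[]): Features {
  return examples[0].map(
    (_, i) => examples.reduce((sum, ex) => sum + ex[i], 0) / examples.length
  );
}

function distance(a: Features, b: Features): number {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// "Prediction": the label of the closest centroid wins.
function predictPitch(centroids: Map<string, Features>, recording: Features): string {
  let best = "";
  let bestDistance = Infinity;
  for (const [label, c] of centroids) {
    const d = distance(c, recording);
    if (d < bestDistance) {
      bestDistance = d;
      best = label;
    }
  }
  return best;
}
```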

On the other hand, Felix trained a more similar range of motions, such as the curveball and fastball, and that confused the system a little more; the machine learning would often classify all of these gestures as either just the curveball or just the fastball. It's true that the motions are rather similar, though one has more wrist-angle manipulation to achieve the "spin" and the other a quicker pitching velocity. We speculate that when recording, we have to be careful to emphasize the "spin" as well as the quicker motion for the fastball (fewer coordinate points), purposely making both pitches extremely distinct from one another.

It's extremely refreshing to see how powerful a simple machine learning setup can be, even when it's as simple as predicting gestures. There's always a satisfying rush, an "I'm impressed" feeling, whenever the right gesture is predicted. I look forward to working more with this, and we talked about changing the number of lines read in the training and prediction code, and perhaps recording and training ten additional examples, to see if the predictions can be improved.

Lecture, followed by some drawing and some YMCA…

The body is the ultimate test of successful engagement with interactive systems as it adds more liveliness, vitality and pleasure into the design process.

Lecture

Jens's lecture consisted of breaking down two papers by Loke and colleagues: Studies of Dancers and Labanotation for Design of Movement-Based Interaction. The main point of the dancers paper is acknowledging the three perspectives for exploring movement. Before introducing the three perspectives, the concept of kinaesthetics is introduced. Kinaesthetics, drawing on its Greek origins, essentially means the aesthetics of movement. It relates to a person's awareness of the position and movement of the parts of the body by means of sensory organs (Oxford).

The Mover belongs to the kinaesthetic perspective; it relies on the sensory proprioceptors. It's the first person, the narrative of the one doing the movement. It is the movement itself, and it tends to be more "difficult" to articulate.

The Observer, simply put, is the person observing the movement. The observer does not feel what the mover is feeling; however, based on previous knowledge, the observer can make their own interpretation of what the action entails.

The Machine perceives the movement from something like the observer's perspective, but differently: it views the body via coordinates or inputs, or whatever you've taught it to learn from the action.


The first image that came to mind when discussing the three perspectives was a tennis match, or basically any sports game with players and an audience. The players themselves belong to the mover perspective, the audience to the observer perspective, and the machine would be the Hawk-Eye system that tracks the trajectory of the balls played and displays a profile of their statistically most likely path as a moving image, or collects data on the players' movements.

The second paper, about Labanotation, shows how an established movement notation can be used to reproduce movement for design. It offers a vocabulary that helps us pay attention to specific details in movement, or better identify movement and be able to retell it. This will definitely come in handy when we choose our "gesture" to analyze.

He ended the presentation with the usual "process" graph and emphasized that for this third module we will be tinkering more than implementing (mentioned in a previous entry).

Again, MIII is all about challenging conventions around gestures and seeing how machine learning can be implemented in any way.

Set-up

After going through a series of issues for maybe three days just trying to get the installation to work, and three versions of the code later, we finally got the program to work without any errors.

For the tinkering phase of our design process, we're supposed to really delve into and explore the capabilities of our machine learning code. The code is capable of recording gestures based on the coordinate data drawn from the movement sensors (accelerometer and gyroscope) in our smartphones.

We devoted the entirety of this afternoon to recording different gestures 20 times each, training on them, and seeing if the program could then predict which gestures we were doing.
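To make the recording step concrete, here is a minimal sketch of how motion data like this can be captured in a browser with the standard DeviceMotionEvent API. It is only an assumption about how such data could be collected; the provided code may gather and format its data differently.

```typescript
// Minimal sketch of capturing one gesture recording from a phone's motion
// sensors (accelerometer + gyroscope) in the browser. Press to start, release
// to stop, and the collected samples become one labelled training example.

type MotionSample = {
  ax: number; ay: number; az: number; // accelerometer (m/s^2)
  rx: number; ry: number; rz: number; // rotation rate (deg/s)
};

let samples: MotionSample[] = [];
let isRecording = false;

window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  if (!isRecording) return;
  samples.push({
    ax: event.acceleration?.x ?? 0,
    ay: event.acceleration?.y ?? 0,
    az: event.acceleration?.z ?? 0,
    rx: event.rotationRate?.alpha ?? 0,
    ry: event.rotationRate?.beta ?? 0,
    rz: event.rotationRate?.gamma ?? 0,
  });
});

function startRecording(): void {
  samples = [];
  isRecording = true;
}

function stopRecording(label: string): { label: string; samples: MotionSample[] } {
  isRecording = false;
  return { label, samples }; // one of the ~20 examples recorded per gesture
}
```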

We noticed that shapes that are vastly different tend not to confuse the computer much, for example a circle, a triangle, and the movement of an uppercut (yes, I really did this). The computer was able to differentiate these very well and predict each gesture with ease. However, once I incorporated more complexity, such as a square gesture and a basketball gesture, the computer got a bit confused: in the beginning it would confuse the basketball gesture with the uppercut and the square with the triangle, but eventually it just predicted everything as a circle. We don't really know what this entails or how to improve the prediction, other than perhaps training an extra 10 recordings, but we will proceed with that in a later iteration.

Then we removed all our existing data and played with something new. Instead of shapes, I wanted to see if the computer could recognize letters that I was trying to spell out. Well, first I thought of the YMCA dance, because it's an extremely "gestural" dance, but I figured just doing the dance as it is wouldn't be so "rich" in movement, and I was certain the program wouldn't be able to decode which letter I was gesturing just by "pointing" in different directions (when you observe the YMCA dance, you're angling your arms a certain way for each letter to spell it out, which is not so rich in movement).

So I decided to write the letters out (literally WRITE them) and see what the computer predicted, because it would be pretty cool to spell things on the computer just by using gestures. Unfortunately, it failed miserably, as practically none of the right letters were predicted. Again, we didn't really delve into why this happened, other than acknowledging that the code is obviously not perfect and cannot be 100% accurate.

Reflection:

Machine learning is a subset of AI: we can train a certain piece of code with enough data for it to be able to predict specific movements without explicit instructions. But what does this all mean? After the tinkering we've been doing, it's time to step back and ideate about which gestures we find particularly interesting, analyze those, perhaps find design opportunities, and see how we can utilize this machine learning technology to a greater extent.

This all sounds exciting, but I guess the limitation of the module is the phone itself. At the end of the day, the phone is quite chunky and it is one single object: we cannot attach it to multiple joints for it to recognize "richer" gestures, and we cannot attach it to all five of our fingers so it knows how many fingers are being held up. Therefore, as a prediction, I feel like this module could potentially be quite challenging. I guess we shouldn't be thinking about implementation just yet and should focus on finding an interesting gesture as part of an activity to kick off.

MIII Kick-off!

MIII is all about working with preexisting machine learning code and taking a dip into the realm of AI!

My partner for this module is Felix!

Similar to MI, where we worked with pre-trained models for visual recognition (posenet), this module involves machine learning; this time, however, we will be building our own machine learning models based on the data we register into the program using the movement sensors in our smartphones! The goal of this module is to gather knowledge of what machine learning is and to explore the possible usefulness and difficulties of machine learning as a design material for IxD. In general, we're getting an understanding of machine learning in the broader context of AI.

Design process:

Starting with tinkering (which Jens said we'll be doing more of this module compared to the previous one, since there will be more focus on making the technology work, so tinkering > implementation), then ideating and sketching as usual, implementing, then experiencing and reflecting: the usual design process.

MII Wrap-Up

The main feedback we received from the critique after our showcase concerned what value we're getting from the feedback we're trying to express. In our minds, the tightening sensation was originally meant to emit a suffocating feeling, reflecting the level of air pollution onto the body as a way to inform the user about the "state" of the atmosphere, which we later broadened into information about basically anything. The suffocation could inform the user about anything you would like it to: the amount of homework you have on your plate, how much food you have left in your fridge, and so on.

When Jens, posing as our lab rat, experienced our prototype, he described the feeling as rather repetitive, as the servo kept doing the same thing, the same motion. Though dimensions were added in terms of speed and amplitude, the motion was still rather repetitive, which we agree with.

He recommended (if there were more opportunities with this prototype and idea) adding more variations and making it more expressive (see Martin & Michael's prototype), because the movement of the choker became a little monotonous over time, despite the initial reaction to the choker being rather "shocking" and a "new sensation".

As a general reaction, I would like to discuss how we felt when experiencing our prototype, how we manipulated it over time, moved on, and so on. In our minds, a prototype that attaches itself to any spot on the body could be relevant to people's lives in many ways, as I've discussed in other posts. In addition to our initial air pollution idea, other values can factor in, and the choker can be about basically anything, as we concluded after coaching with Clint. What we could have done better is creating more richness in the experience, such as really ideating over which dimensions we were experimenting with. It's important to differentiate between technical dimensions and experiential qualities. This module was all about new experiences after all, so I agree that we could have pushed this project further. Jens and Clint advised adding more colour, for example sensations of pinching to contrast with our pre-existing tightening sensation. As designers designing new experiences, it's important to come up with our own experiential vocabulary, really go in depth with how everything feels when interacting with a certain artifact, then step back, develop these feelings, and see if there's an opportunity to create more richness. Really exploring the experiential qualities during the design process is something I'll be paying more attention to in MIII.

Other General Feedback (Maybe incorporate in final paper):

  • Way of informing people can change a lot, depending on how the project is introduced
  • Worth reflecting conscious turns you took in design process, starting from concrete then diverging, opening to other kinds of phenomenon
  • Reflect constantly on whether it’s worth to focus (narrowing down) or opening the door again
  • Always reflect on how you went about it differently from last time and how you grappled with it
  • Tell a story, what are the parameters you’re experimenting with, can you articulate the experience, why move on, manipulated what, what dimensions
  • What are you experimenting with in terms of experience