MIII Show n’ Tell + Reflection

Show n’ Tell thoughts…

Overall I think we did pretty well by structuring our presentation around a PowerPoint as our guide. It was easier to just go through our process structurally, in the order of what we had done, and to recount our thought processes as we underwent multiple iterations. Additional media like GIFs were also used as visual aids for showcasing our process of analysis and for making clear to the audience what gesture we chose and how we got there.

Post- Show n’ Tell…

After the presentation we received some valuable advice from the teachers. As a general statement, he told the students that this module was crucial for learning how to navigate uncertain territory. Using the phone as a limitation adds challenge because phone safety becomes something we need to be considerate of; for example, we don't necessarily need to hold onto it, we can open up and explore bodily attachments as well and figure out ways to not press the phone screen at all times. And lastly he said something that I feel would have been valuable to hear from the beginning of the process: that machine learning helps you articulate movement. Of course machine learning is never precise and it can often struggle to notice and detect nuances, but articulation is a nice way to put it, because the computer simply presents what it has been "trained" to say.

We were complimented for being articulate about our selected movement. Additionally, we were told that we did a good job taking a systematic approach to the whole module, though Clint gave me a comment about my pitch / presentation: I didn't necessarily need to present in a temporal manner and could have jumped straight to the gesture we found interesting rather than the process we took to get there. That's something I'll keep in mind in future presentations for sure. And also, like I mentioned in the previous entry, they said they were disappointed that we stuck with the same motion, just performed by two people at the same time, instead of taking it to a new level and creating a new movement. Again, I didn't really grasp that we were supposed to reinvent our gesture and create something new that could be valuable to movement design and machine learning; I only came to this conclusion after contemplating it in hindsight, and it wasn't really intuitive at all. Now that I know this, I do agree that we could have taken it much farther, as we basically only took one sitting for our social interaction phase and could have discovered more if we had spent more time ideating.

Amping it up– Social Interactions (MIII)

As we wrapped up the first two weeks of this module and settled on one motion, it was time to be adventurous and look for the potential of a novel interaction / direction involving our "YES!" motion.

The final version of the code presented to us (gestures-ml 1.06) supports multiple devices for training, and the predictions are said to be significantly better. Since there are two of us with one mobile device each, we experimented with two phones. We first thought about how we had interpreted the individual "YES!" motion as also being a gesture of hiding and moving out of the way, and ideated about new potentials for the design of interesting interactive gestures. That didn't go far, because we quickly found out that when two people are doing this gesture at the same time we're not really "hiding" anymore; performed together, the action resembles "fighting over something" and being "proud".

We first talked about actions / activities that require two people, are performed simultaneously by both (while facing each other), and involve our "YES!" arm-curling motion. We talked about sawing, as it requires two people to work in tandem, pushing and pulling the saw from both ends without one overpowering the other.

Sawing involves two people curling their arms independently (depending on which side the saw moves toward), but the arms move in opposing directions.

We then thought about other activities that demand the same type of movement and watched tug of war videos. Tug of war is a contest in which two teams pull at opposite ends of a rope until one drags the other over a central line (Oxford Dictionary). In this activity, when the two sides are pulling with the same force, a sort of equilibrium is achieved and the teams end up in a stand-still state; however, when one side overpowers the other, or when one team loses its strength and gives way, the equilibrium breaks down and the pulling side wins. We were quickly drawn to this concept of equilibrium, so we explored that movement a little, using the long rubber bands often used in workout circuits.

We documented this movement and then discussed the felt qualities like in our first iteration, but unfortunately didn't push any further with it (which, we later discovered during show-and-tell, was essentially the "goal" of this module: to explore and re-design new movements that can contribute to interaction design). Instead, we tested the machine learning technology on this movement with the newly published code that can collect data from two devices and predict them both synchronously and even more accurately.

Felt Quality

  • Tension
  • Resistance
  • Bracing oneself, digging into the ground to prevent loss in balance
  • Involving whole body
  • Fighting for something
  • Different from one person protection, proudly claiming what’s yours
  • Actually grasping
  • Synchronization & Team effort
  • Pulling from side to side 

From pulling to throwing: as you can see, as opposed to the felt qualities we discussed for the individual "YES!" movement, the felt quality here doesn't involve hiding so much as facing each other, similar to direct confrontation. It more closely relates to proudly claiming what's yours and fighting for it, instead of the shielding and shying away we read into our previous gesture. We thought this was an interesting discovery, as we wouldn't have recognized these qualities without physically performing the movement from the mover's perspective and analyzing it in depth. It's thought-provoking how the exact same gesture, performed by two people instead, can evoke such a different feeling.

Machine Learning

Together, we tried recording four different gestures, each to be performed simultaneously. We pressed down to record our individual gestures (each one part of a whole) at the same time and released our thumbs at the same time. Like in all the other iterations where we explored the machine learning code, we recorded each combined gesture thirty times, then tested whether the code could predict the right gesture and accurately differentiate the four. We experimented with the rubber band to feel the tension and resistance, but during recording the rubber band was omitted. (A rough sketch of how such paired recordings might be combined for training follows the list below.)

The four combined gestures consisted of:

  1. Think sawing motion: one person pulls their arm into "YES!" while the other extends theirs, with the motion going towards the right; we called it sawRight
  2. Same as above but going in the opposite direction (the other person does the "YES!" and vice versa); we named it sawLeft
  3. Pulling apart from each other, both performing the "YES!" simultaneously, like stretching the rubber band (see below); we call it stretch
  4. Releasing the band in a dramatic way, both extending an arm towards the other, almost like a high five (think going from "YES!" to a high five); we named it push
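As a side note, here is a minimal sketch of how I picture two simultaneous recordings being merged into one labelled training example and classified. This is my own illustration with made-up data and a plain nearest-neighbour rule, assuming each phone delivers rows of accelerometer values; it is not the actual gestures-ml 1.06 code.

import numpy as np

def resample(trace, n=50):
    """Linearly resample a (samples, 3) sensor trace to exactly n rows."""
    trace = np.asarray(trace, dtype=float)
    src = np.arange(len(trace))
    dst = np.linspace(0, len(trace) - 1, n)
    return np.column_stack([np.interp(dst, src, trace[:, axis]) for axis in range(3)])

def combine(phone_a, phone_b, n=50):
    """Flatten both phones' resampled traces into one combined feature vector."""
    return np.concatenate([resample(phone_a, n).ravel(), resample(phone_b, n).ravel()])

def predict(sample, train_X, train_y):
    """1-nearest-neighbour prediction over Euclidean distance."""
    distances = np.linalg.norm(train_X - sample, axis=1)
    return train_y[int(np.argmin(distances))]

# Toy usage: 30 noisy repetitions per combined gesture, stacked into a
# (120, 300) training matrix, then a new pair of recordings is classified.
rng = np.random.default_rng(0)
labels, train_X, train_y = ["sawRight", "sawLeft", "stretch", "push"], [], []
for i, label in enumerate(labels):
    for _ in range(30):
        a = rng.normal(i, 0.1, size=(60, 3))   # stand-in for phone A's sensor rows
        b = rng.normal(-i, 0.1, size=(55, 3))  # stand-in for phone B's sensor rows
        train_X.append(combine(a, b))
        train_y.append(label)
train_X = np.vstack(train_X)
new_pair = combine(rng.normal(2, 0.1, (60, 3)), rng.normal(-2, 0.1, (55, 3)))
print(predict(new_pair, train_X, train_y))  # closest to the "stretch" recordings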

Wrapping Up

This was essentially our wrap-up for this module. We found an activity, identified an interesting movement or gesture out of the general motion of that activity, analyzed it thoroughly, discussed its felt qualities, tinkered with the machine learning code, and wrapped up by finally giving social interactions a try with the final version of the model code.

We didn't push as far as inventing a novel social gesture, but overall we did our iterations strategically and in a structured manner, which really guided us in conducting a solid wrap-up. To sum it up, AI as a field of technology is steadily growing, and it's something to appreciate that we had the opportunity to tinker with pre-made machine learning code. In addition to that, kinaesthetics and the art of movement are essential to movement design, and movement design is a fundamental part of interaction design. So ultimately I think it was a valuable experience to be given this opportunity of combining these two concepts into one, and it perhaps helped at least some of us discover new opportunities and spark new ideas for the future (perhaps for thesis research, etc.).

Analyzing the “YES!” motion (MIII)

The “YES!” motion was introduced in the last entry where I discussed how we discovered an interesting aspect from the mechanics of pitching.

The "YES!" motion consists of a swiping / swinging motion that starts with your hand at a distance from the core of your body and ends with you pulling it in and tucking your elbow close to your body.

This motion is prevalent in pitching, especially when educating young and amateur pitchers. It's important because when the arm collapses inward towards the body, it helps the swinging arm avoid obstacles during the twist and keeps the balance at the center of the body to prevent a breakdown.

In addition to how important the motion is, we agreed that as an assisting movement it's often overlooked: when we watch a pitcher in a baseball game, you don't really notice what's going on with their non-dominant arm, as the focus tends to be drawn towards the baseball itself. So it's nice to find out that a seemingly insignificant motion has such significant value.

Looking at our film documentation of ourselves pitching, we again did the same frame-to-frame breakdown and jotted down notes supporting the action.

Again, this was helpful for focusing on the target area of our selected movement, and it's helpful to have this kind of reference to go back to whenever we need some sort of visual aid.

Drawn diagram of the "YES!" motion

Following these analyses we proceeded with jotting down actions we believe resemble or consist of the "YES!" motion, including:

  • Apple picking
  • Swiping a BIG screen
  • Bicep curls / pull-ups (many workout routines involving the arm)
  • “Come here!”— motion of inviting someone over
  • Lifting something from the ground
  • Rowing

Machine Learning

It was finally time to test with the machine learning technology!

We organized ourselves by collecting four sets of data, all “YES!” motions but from different directions:

  1. Swing from right side to center then “YES!”
  2. Swing from left side to center then “YES!”
  3. Swing from bottom to center then “YES!” (curling motion)
  4. Swing from top to center then “YES!” (apple picking motion)

Visual reference:

To test the technology, we recorded each gesture 30 times this time around; as discussed before, we thought that training on more data might enhance the accuracy. Then, with improved code provided by the teachers (after a series of panics and anxiety over errors), we put the data to the test. We noticed again that the technology is able to distinguish the swings from the left and right sides: when the movement travels across the x-axis, the technology can easily predict correctly, but when the x-axis remains unchanged and the coordinates only change on the y-axis, the prediction becomes iffy and inaccurate. When we tried it again, making the curling and apple picking motions more dramatic and distinct from each other, the results were more or less completely accurate. We didn't delve much further into this after the second iteration; however, it's at least revealing to know that the technology works and the differentiation is somewhat successful.
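We never dug into why the y-axis-only gestures blurred together, but here is a small illustration of the intuition: if each recording is summarised by how much each sensor axis varies, swings coming from the left and right separate cleanly, while a curl and an apple pick that both live on the y-axis differ only by a subtle amplitude. The features and toy traces below are my own assumption about what a distance-based predictor roughly "sees", not the provided code.

import numpy as np

def axis_features(trace):
    """Per-axis range and mean of a (samples, 3) sensor trace."""
    trace = np.asarray(trace, dtype=float)
    return np.concatenate([trace.max(axis=0) - trace.min(axis=0), trace.mean(axis=0)])

t = np.linspace(0, 1, 100)
zero = np.zeros_like(t)
gestures = {
    "swing from right": np.column_stack([np.sin(np.pi * t), zero, zero]),
    "swing from left": np.column_stack([-np.sin(np.pi * t), zero, zero]),
    "curl (bottom up)": np.column_stack([zero, np.sin(np.pi * t), zero]),
    "apple pick (top down)": np.column_stack([zero, 0.9 * np.sin(np.pi * t), zero]),
}
for name, g in gestures.items():
    print(f"{name:22s}", np.round(axis_features(g), 2))
# The two swings differ strongly in the x columns (opposite signs), while the
# curl and the apple pick differ only by a small amplitude on y, so a
# distance-based predictor mixes them up unless one of them is exaggerated.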

Felt Quality

After the analysis and after testing the technology on our "YES!" motion, we proceeded to discuss the psychological aspects of our movement. We had been paying a lot of attention to the motor aspects of the movement and had somehow neglected the other, almost more important aspect of this module: how it feels to perform the action.

The Felt Quality of a movement refers to the sensation or feeling in the body (Loke & Robertson 2010). Following the Felt Quality model again from Loke & Robertson's paper analyzing dancers and falling, we wrote down a list of what we felt when we performed the movement first-hand. It consists of:

  • Expressing extreme contentment
  • Becoming smaller
  • Hiding / Shielding / Crouching
  • Swinging
  • Regaining balance
  • Staying out of the way
  • Dragging self towards object
  • Keeping to oneself
  • Grasping (but not actually grasping anything)
  • Sense of completion
  • Beginning of a run, but actually staying still
  • Pulling in
  • “I want this, but I can’t have it”
  • Protecting, “my precious”
  • Shutting down

This analysis was significant because it revealed a diverse range of understandings of the process and experience of the "YES!" motion. On the surface, the motion may appear to be used only to express contentment, but when you simply look at the body language and set aside the social aspects and your preconceptions about the action itself, you may find yourself viewing the motion in a completely different way, and a completely different attitude may be conveyed, such as hiding and shielding as opposed to cheering for being first in a race. They're so different, yet a similar body language can be seen!

Breaking down Drew Storen’s pitching mechanics (MIII)

Reference video of analyzing the movement of pitching— Drew Storen

We kick-started today by watching Drew Storen's pitching video through and through; then I had the idea of screenshotting the video frame by frame to analyze the movement and placement of his joints and limbs, tracking the movement second by second. In addition to being an overall easier way to analyze the movement, it's also a nice reference to go back to whenever we run into obstacles and need to recalibrate.

The first draft of our frame analysis simply involved drawing dots for all the important joints involved in this fully embodied movement, then connecting them with lines to represent the limbs. This was a nice reference and visual aid for seeing what's going on with the limbs. We then cropped these frames again and made a little stop-motion GIF to see it in motion once more. We thought this was important for noticing the subtle nuances in the movement, and by drawing these dots we were only paying attention to the joints and not whatever else is going on in the frames.

When analyzing, we also used this video as inspiration for analysis when it came to using more technical terms for analysis— such as what it entails when weight is shifted from leg to leg. This video gives a more informative description of the scientific aspect of Drew Storen’s pitching mechanics. We used this video’s text pop-ups as notes to jot down on our frame analysis.

Below is the second draft of our Drew Storen pitching-mechanics breakdown, inspired by how Loke & Robertson (2010) analyzed the falling motion in their paper Studies of Dancers.

We went from frame to frame, writing down what's going on, what techniques he's using, what each change in movement represents, and so on. The red arrows represent major shifts in movement, such as the leg placement as well as the body twisting that's essential for baseball pitching.

A breakdown of Drew Storen’s pitching mechanics (click image to enlarge)

Taking inspiration from Loke & Robertson's Studies of Dancers: Moving from Experience to Interaction Design, we analyzed, from an observer's perspective, how we think the motion of pitching (based on Drew Storen's video) unfolds. We made a table to analyze the dynamic unfolding of the bodily movement of pitching. Loke & Robertson, however, wrote about the motion of falling from a first-person, or kinaesthetic, perspective.

Loke & Robertson (2010)

This is our table analyzing throwing (click to enlarge):

I thought analyzing it this way and breaking the movement down into three sections / increments helped us articulate what we were seeing on the screen. It's like utilizing a new vocabulary for describing movement.

Following this analysis we went out again to document ourselves performing the action of throwing / pitching. Of course, we may not have the best and most accurate form, but we tried to mimic Drew Storen’s pitching technique.

We finally discovered something we had an interest in. From now on we will refer to it as the "YES!" motion or the "YES!" swipe. The "YES!" motion is often introduced by coaches so that young pitchers learn how to engage their bodies during the pitch instead of just letting the arm hang low when the ball is tossed. We first learned about the existence and importance of this so-called "YES!" motion from the video above.

It's referred to as the "YES!" motion because this specific gesture is commonly used as a way of expressing extreme contentment.


See the next entry for a more detailed analysis of the “YES!” motion.

I personally think systematically breaking down the action of pitching helped us gather our thoughts onto one piece of paper. Prior to doing this we were still feeling quite wishy-washy about choosing the movement of pitching, but after this analysis we felt significantly more certain. I've never gone so in-depth with a movement before, so breaking it down frame by frame really gave me the feeling of experiencing the motion first-hand. In addition, seeing the visual aid guided us in learning the right form and performing it ourselves the right way, so we could feel the action from the mover's perspective.

A little research + Coaching 21 October (MIII)

Reading

Prior to coaching we had done a little more research about sports in the field of HCI as well as about the baseball pitch itself.

Reading an excerpt from Mueller, F. and Agamanolis, S.'s chapter in Design for Sport (2011), compiled by Roibás, A. C. and Stamatakis, E., they claim that the field of sports can contribute to the field of interaction design because movement design is a big aspect of interaction design. Most sports contain lots of movement, as movement is an elementary component of taking part in sport. Because movement is so rich in sport, it can contribute to interaction design.

In the field of sports and broadcasting we can already see prevalent computer applications in use, such as

  • Analysis software for performance enhancement used in team sports and individual sports to help coaches better articulate what can be improved on for upcoming games and races
  • Finding sports partners via social media
  • Mobile application for tracking exercise progress

The disadvantage of these systems is that they rely on button presses; therefore, it's important to facilitate a new outlook on supporting sports, such as introducing new types of interactive systems to enhance sports activities. And machine learning predictions could perhaps be used more prevalently!

Nylander et al.'s 2015 article HCI and Sports also urges a focus on novel viewpoints on interaction design in sports. One interesting takeaway I got from reading this article was that sociality is a big part of sport (with the interaction between athletes, coaches, and the audience) and technology provides a means of enriching these social aspects.

In addition to all this, we asked some questions regarding the results we received from testing the machine learning technology, such as why bowling* was confused with cowboy. We noticed that when two or more gestures share the same point on the x-axis (the same horizontal position) and the nuances are too subtle, the system has a problem differentiating them.

*bowling was also a motion we tested as a self-invented pitch (think Wii Sports bowling), and cowboy I introduced before

So we started to question which coordinates the computer/phone is actually taking. Felix tested only bowling and cowboy, and the computer was able to differentiate them when the data contained only these two gestures, so the technology probably got confused by the abundance of data and added moves. After this we reflected a bit on how we could improve the training if we had more time to go in-depth with this: adjust the movements a little more, making the differences between them greater.
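Looking back, what Felix did by hand, retraining on only bowling and cowboy, is essentially what a per-subset confusion matrix would show. Here is a rough, hypothetical sketch of that diagnostic; train_X, train_y, test_X, and test_y are assumed to be NumPy arrays of feature vectors and labels, which is not necessarily how the provided code stores them.

import numpy as np

def nearest_neighbour(sample, train_X, train_y):
    """Predict by the closest training example (Euclidean distance)."""
    return train_y[int(np.argmin(np.linalg.norm(train_X - sample, axis=1)))]

def confusion_matrix(test_X, test_y, train_X, train_y, classes):
    """Rows are true labels, columns are predicted labels."""
    index = {c: i for i, c in enumerate(classes)}
    matrix = np.zeros((len(classes), len(classes)), dtype=int)
    for x, true in zip(test_X, test_y):
        matrix[index[true], index[nearest_neighbour(x, train_X, train_y)]] += 1
    return matrix

def subset(X, y, keep):
    """Keep only the recordings whose label is in `keep`."""
    mask = np.isin(y, list(keep))
    return X[mask], y[mask]

# Hypothetical usage: compare the full label set against just two gestures.
# print(confusion_matrix(test_X, test_y, train_X, train_y, all_labels))
# print(confusion_matrix(*subset(test_X, test_y, {"bowling", "cowboy"}),
#                        *subset(train_X, train_y, {"bowling", "cowboy"}),
#                        ["bowling", "cowboy"]))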

Finally, I read a little more about the mechanics of baseball. Momentum is important in a pitch: you bring your arm as far back as possible, transferring weight to the back foot and then shifting it forward, propelling the ball away. Balance is also important when it comes to pitching, because the legs and body provide a firm base, and the body should stay behind the line of the ball.

Coaching

During the coaching session with Jens we introduced our interest in baseball and explained how we've grappled with the machine learning technology. He said the module isn't just about recognizing gestures and testing whether the technology can predict the correct gesture 100% of the time. It's about finding an interesting gesture to explore, and answering why it's interesting and why it's worth exploring. He recommended that we document movement on video and analyze movements from a kinaesthetic perspective in addition to an observer's perspective (which can be done by analyzing videos). Lastly, he reminded us that we don't have to be this specific about baseball pitches/throwing just yet; we can potentially diverge a little and experiment with richer movements.

After this coaching session we looked away from throwing and side-tracked a little, talking about the motion of kicking. I will keep this short because this change in direction was not relevant to the overall scope of our project. We thought kicking would be interesting because it is similar to throwing (propelling a ball/object in a direction) but, in our opinion, a little "richer": the orientation of the foot needs to be adjusted to the right angle to propel a nicely arched ball. We also discussed how this specific motion of kicking is prevalent in other daily motions, like closing doors and expressing anger.

We took a few videos just to see if we could find anything interesting, but sadly didn't. We had coaching the next day as well to discuss whether there was potential in changing direction, and that's when Clint recommended that we not completely scratch the idea of throwing just yet, since we could encounter the same issues with kicking. Instead, we could look deeper into the different types of throwing and unpack the action. When we look deeper into it, we have to find an interesting movement to analyze and use, and maybe some research can be done.

We quickly jumped back to the action of throwing after this, sticking with our gut feeling and initial idea. What I think we should do now is really sit down and break down the visuals of the action of throwing, by drawing out the joint and limb displacement, and see if we can find anything particularly interesting. Felix found a slow-motion video of a professional pitcher pitching a ball; the video is informative because the background is dark and we can really see everything in action in slow motion. Next time we meet we'll be analyzing this video.

Ideation Friday + Baseball!

After a series of ideations, starting from listing a couple of specific activities consisting of rich gestures and concrete movements, we started leaning towards sports, ball sports in particular. The most gestural ball sport you can think of is undoubtedly baseball: from the third-base coach observing the overall scene and letting the catcher know, to the catcher signaling to the pitcher what type of ball he should pitch, to the umpire doing umpire things, and so on. Not to mention that sports spectators oftentimes have an endless repertoire of gestures to cheer on their favorite team.

When speaking of baseball, what stood out immediately was of course the pitching/throwing movement. When pitching a baseball, in addition to simply throwing the ball towards home plate, a lot of thought has to be put into each pitch in order to support throwing a variety of pitches. Depending on your hand position, wrist position, and the angle of your arm, each ball will have a slightly different velocity, trajectory, and overall movement. Pitching the right ball at the right moment confuses the batter in various ways and gets batters and baserunners out, which contributes to the overall progress of the game.

Different baseball pitches

Following this quick ideation we immediately moved on to exploring and tinkering with the machine learning code. We first recorded gestures in the terminal for a normal fastball, curveball, screwball, and throws we invented such as cowboy (elbow up, rotating your arm twice before throwing the ball) and frisbee (throwing the ball like you throw a frisbee). We recorded each gesture 20 times, trained on them, then ran predictions to see if machine learning could differentiate these different pitches. On my end I tried the screwball (inverted curveball), the frisbee, and the cowboy, and because these gestures are vastly different, machine learning was able to predict them fairly well.

Machine learning prediction

On the other hand, Felix trained a more similar range of motions, such as the curveball and fastball, and that confused the system a little more: machine learning would often predict all of these gestures as either just the curveball or just the fastball. It's true that the motions are rather similar, though one has more wrist-angle manipulation to achieve the "spin" and the other has a quicker pitching velocity. We speculate that when recording we have to be careful to emphasize the "spin" for one and the quicker motion for the fastball (fewer coordinate points), purposely making both pitches extremely distinct from one another.

It's extremely refreshing to see how powerful a simple piece of machine learning code can be, even when it's only predicting gestures. There's always a satisfying rush, an "I'm impressed" feeling, whenever the right gesture is predicted. I look forward to working more with this; we talked about changing the number of lines read in the training and prediction code, and perhaps recording and training ten additional repetitions, to see if the predictions can be improved.
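I don't know exactly how the provided code reads its recordings, but my mental model of "changing the number of lines read" is a loader along these lines, where each recording is a stream of comma-separated sensor rows and only the first n_lines rows are used for training and prediction. The file format and names here are assumptions for illustration.

def load_recording(path, n_lines=50):
    """Read at most n_lines comma-separated sensor rows (x, y, z) from a file.

    Fewer lines means only the start of the gesture is compared; more lines
    means a longer slice of the movement (and of any noise) is included.
    """
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            x, y, z = (float(value) for value in line.split(",")[:3])
            rows.append((x, y, z))
            if len(rows) == n_lines:
                break
    return rows

# Hypothetical usage: load the same recording with two window sizes and see
# how much of the gesture each one covers.
# short = load_recording("curveball_01.csv", n_lines=30)
# longer = load_recording("curveball_01.csv", n_lines=60)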

Lecture, followed by some drawing and some YMCA…

The body is the ultimate test of successful engagement with interactive systems as it adds more liveliness, vitality and pleasure into the design process.

Lecture

Jens's lecture consisted of breaking down two papers by Loke & Robertson: Studies of Dancers and Labanotation for Design of Movement-Based Interaction. The main point of the dancers paper is acknowledging the three perspectives of exploring movement. Before introducing the three perspectives, the concept of kinaesthetics is introduced. Kinaesthetics, drawing from Greek origins, essentially means the aesthetics of movement. It relates to a person's awareness of the position and movement of the parts of the body by means of sensory organs (Oxford).

The Mover belongs to the kinaesthetic perspective; it relies on the sensory proprioceptors. It's the first person, the narrative of the one doing the movement. It's the movement itself, and it tends to be more "difficult" to articulate.

The Observer, simply put, is the person observing the movement. The observer does not feel what the mover is feeling. However, based on previous knowledge, the observer can make their own interpretation of what the action entails.

The Machine perceives the movement from the observer's perspective, but differently. It views the body via coordinates or inputs, or whatever you've taught it to learn from the action.

Hawk-Eye ball tracking in tennis

The first image that came to mind when discussing the three perspectives was a tennis match, or basically any sports game consisting of players and an audience. The players themselves would belong to the mover perspective, the audience to the observer perspective, and the machine would be the Hawk-Eye system that tracks the trajectory of the balls played and displays a profile of their statistically most likely path as a moving image, or collects data on the players' movements.

The second paper was about Labanotation, a movement notation system that Loke & Robertson use to record and reproduce movements. It gives us a new vocabulary that helps us pay attention to specific details in movement, or to better identify a movement and be able to retell it. This will definitely come in handy when we choose our "gesture" to analyze.

He ended the presentation with the usual "process" graph and emphasized that for this third module we will be tinkering more than implementing (mentioned in a previous entry).

Again, M3 is all about challenging convention about gestures and seeing how machine learning can be implemented in any way.

Set-up

After going through a series of issues for maybe three days just trying to get the installation to work, and three versions of the code later, we finally got the program to run without any errors.

For the tinkering phase of our design process, we’re supposed to really delve into and explore the capabilities of our machine learning code. The code is capable of recording gestures based on the data coordinates drawn from the movement sensors (accelerometer and gyroscope) in our smartphones.

We devoted the entirety of this afternoon to recording different gestures 20 times each, training on them, and seeing if the program could then predict which gestures we were doing.

We noticed that shapes that are vastly different tend not to confuse the computer so much, for example, differentiating between a circle, a triangle, and the movement of an uppercut (yes, I really did this). The computer was able to differentiate these very well and predict each gesture with ease. However, once I incorporated more complexity, such as adding a square gesture and a basketball gesture, the computer got a bit confused: in the beginning it would confuse the basketball gesture with the uppercut and the square with the triangle, but eventually it just predicted everything as a circle. We don't really know why this happens or how to improve the prediction (perhaps by training an extra 10 recordings), but we will look into that in a later iteration.
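We never looked under the hood of the provided program, but one way to picture why adding the square and the basketball throw degraded things is a template-style classifier: average the 20 recordings of each gesture into one template and predict by the closest template. The sketch below, with made-up data, is my own guess at such a scheme, not the actual code; the point is simply that the more similar templates you add, the more they crowd each other.

import numpy as np

def make_template(recordings):
    """Average a list of equally shaped (samples, 3) traces into one template."""
    return np.mean([np.asarray(r, dtype=float).ravel() for r in recordings], axis=0)

def predict(trace, templates):
    """Return the label of the closest template (Euclidean distance)."""
    flat = np.asarray(trace, dtype=float).ravel()
    return min(templates, key=lambda label: np.linalg.norm(templates[label] - flat))

# Toy data: every gesture is a noisy cloud around its own centre; the square
# sits close to the triangle and the basketball throw close to the uppercut.
rng = np.random.default_rng(1)
centres = {"circle": 0.0, "triangle": 2.0, "uppercut": 4.0,
           "square": 2.3, "basketball": 3.7}
templates = {
    label: make_template([rng.normal(c, 0.3, size=(40, 3)) for _ in range(20)])
    for label, c in centres.items()
}
sample = rng.normal(2.3, 0.3, size=(40, 3))  # a new "square" recording
flat = sample.ravel()
for label, template in templates.items():
    print(label, round(float(np.linalg.norm(template - flat)), 2))
print("prediction:", predict(sample, templates))
# The square and triangle distances end up close together, which is the kind
# of crowding that made our predictions drift once similar gestures were added.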

Then we removed all our existing data and played with something new. Instead of shapes, I wanted to see if the computer could recognize letters that I was trying to spell out. Well, first I thought of the YMCA dance, because it's an extremely "gestural" dance, but I figured just doing the dance the way it is isn't so "rich" in movement, and I was certain the program wouldn't be able to decode which letter I was gesturing just by "pointing" in different directions (when you observe the YMCA dance, you're angling your arms a certain way for each letter to spell it out, so it's not so rich in movement…).

So I decided to write the letters out (like, literally WRITE them) and see what the computer predicted, because it would be pretty cool to spell things on the computer just by using gestures. Unfortunately it failed miserably, as practically none of the right letters were predicted. Again, we didn't really delve into why this happened, other than acknowledging that the code is obviously not perfect and cannot be 100% accurate.

Reflection:

With machine learning, a subset of AI, we can train code with enough data for it to predict specific movements without explicit instructions. But what does this all mean? After the tinkering we've been doing, it's time to step back, ideate on which gestures we find particularly interesting, analyze them, perhaps find design opportunities, and consider how we can utilize this machine learning technology to a greater extent.

This all sounds exciting, but I guess the limitation of the module is the phone itself. At the end of the day, the phone is quite chunky and it's one single object: we cannot attach it to multiple joints so it recognizes "richer" gestures, and we cannot attach it to all five of our fingers so it knows how many fingers are being held up. Therefore, as a prediction, I feel like this module could potentially be quite challenging. I guess we shouldn't be thinking about implementation just yet and should focus on finding an interesting gesture as part of an activity to kick off.

MIII Kick-off!

MIII is all about working with preexisting machine learning code and taking a dip into the realm of AI!

My partner for this module is Felix!

Similar to MI, where we worked with pre-trained models for visual recognition (PoseNet), this module is about machine learning; this time, however, we will be building our own machine learning models based on the data we register into the program using the movement sensors in our smartphones! The goal of this module is to gather knowledge of what machine learning is and to explore the possible usefulness and difficulties of machine learning as a design material for IxD. In general, we're getting an understanding of machine learning in the broader context of AI.

Design process:

Starting with tinkering (which Jens said we'll be doing more of for this module compared to the previous one, since there will be more focus on making the technology work, so tinkering > implementation), then ideating / sketching as usual, implementing, and finally experiencing and reflecting: the usual for a design process.

MII Wrap-Up

The main feedback we received from the critique after our showcase concerned what value we're getting from the feedback we're trying to express. In our minds, the tightening sensation was originally meant to evoke a suffocating feeling, reflecting the level of air pollution onto the body as a way to inform the user about the "state" of the atmosphere, which we later diverged into being information about basically anything. The suffocation could inform the user about whatever you'd like it to: the amount of homework you have on your plate, how much food you have left in your fridge, and so on.

When Jens, posing as our lab rat, experienced our prototype, he described the feeling as rather repetitive, as the servo kept doing the same thing, the same motion. Though dimensions were added in terms of speed and amplitude, the motion was still rather repetitive, which we agree with.

He recommended (if there were more opportunities with this prototype / idea) adding more variations and making it more expressive (see Martin & Michael's prototype), because the movement of the choker became a little monotone over time, despite the initial reaction towards the choker being rather "shocking" and a "new sensation".

As a general reaction, I would like to discuss how we felt when experiencing our prototype, how we manipulated it over time, when we moved on, et cetera. In our minds, having a prototype that attaches itself to any spot on the body could be relevant in people's lives in many ways, like I've discussed in other posts. In addition to our initial air pollution idea, other values can factor in, and the choker can be about basically ANYTHING, as we concluded after coaching with Clint. But what we could have done better is creating more richness in the experience, such as really ideating over what sort of dimensions we're experimenting in. It's important to differentiate between technical dimensions and experiential qualities. This module was all about new experiences after all, so I agree that we could have pushed this project farther. Jens and Clint advised adding more color, for example sensations of pinching to contrast with our pre-existing tightening sensation. As designers designing new experiences, it's important to come up with our own experiential vocabulary, really go in depth with how everything feels when you're interacting with a certain artifact, then step back, develop these feelings, and see if there's an opportunity to create more richness. Really exploring the experiential qualities during the design process is something I'll be paying more attention to in MIII.

Other General Feedback (Maybe incorporate in final paper):

  • Way of informing people can change a lot, depending on how the project is introduced
  • Worth reflecting on the conscious turns you took in the design process, starting from the concrete then diverging, opening up to other kinds of phenomena
  • Reflect constantly on whether it's worth focusing (narrowing down) or opening the door again
  • Always reflect on how you went about it differently from last time and how you grappled with it
  • Tell a story, what are the parameters you’re experimenting with, can you articulate the experience, why move on, manipulated what, what dimensions
  • What are you experimenting with in terms of experience

Show N’ Tell (MII)

As a member of Group 8 I will be jotting down notes about the presentations from the groups in cluster one but also attending cluster two for photos and future inspiration.

Group 1: Michael & Martin

The project aimed to express feedback as some sort of flow; after taking inspiration from multiple kinds of "flow", they decided to use this prototype to unpack the flow of traffic, like people coming in and out of a queue, cars stopping and going, etc. A very tactile feeling is apparently sensed in the body, but the tactile feeling can recede into a larger meaning; that meaning came after.


Group 2: Thanita & Snezhana

A 3D artifact represents activity on the keyboard: Backspace reverses the spinning movement, Enter changes its position. When typing, the movement is simply continuous. There's something interesting with writing and rhythm! Like a metronome.


Group 3: Liam & Jesper

Inspired by biking and navigating on a bike, they strove to tie feedback from servos to that concrete phenomenon. Experimenting with vibrations, they wanted to explore how vibrations should/could navigate you, stimulating through vibration.


Group 4: Therese & Zakiya

Started by exploring subtle head accessories to express feedback, but landed on the arm. Inspired by heart surgeons who have to monitor their actions, for example entering a dangerous zone that requires extra and more intricate care. Working a lot with feel and sensations.


Group 5: Bahr & Caleb

Sticking your hand into a globe that has fuzzy feathers inside it, you experience a fuzzy and "enjoyable" feeling while the servo moves. Giving a concrete texture that came in and out of state. Connected to mood.


Group 6: Nefeli & Malin

Inspired by the level tool: feedback from the level tool, the material, how it feels to balance and center something. The structure is built playfully like a game, with buttons incorporated for input and tight coupling with the output, which is the movement of the strings tied to the tube, which tilts, moving the ball within it left and right.


Group 9: Kim & Melika

Experimenting with "readiness" and more temporal elements. This drove them to the idea of not being so restricted to time, but more about something approaching and leaving. Their artifact experiments with readiness: instead of utilizing the rotational movements that servos afford, they used them to showcase more linear motion & velocity.


Group 10: Josefine & Victor

Expressing feedback with pulse, like an external mechanical heart you attach to your arm. The sensation provided is kind of like an exaggerated sensation of your own pulse; the pulse blends in with the artifact. To add nuance they experimented with a synth (ADSR). They experimented with how to make the muscle contraction and heartbeat feel realistic.


Works from Cluster 2!

To see entry on our own group’s critique & feedback, as well as additional final reflections, see the next entry.