MI Finale

This journal entry documents the final outcome of our MI project.

My partner for this Module was Denisa.

The final version of our code consists of expression towards the computer through free gestures, by touching shoulders and wrists, and an ambient sound impression back to the user.

Here's a walkthrough of the portions of code that we modified:

First, we added two different audio files: one is the output for the first interaction and the other for the second.

Then we started by experimenting with just the shoulders. Taking the positions of the shoulders of two people, we calculate the distance between them; as that distance gets smaller, the shoulderDiff value lowers as well, which in turn lowers the audio volume, because the volume is set to shoulderDiff / 300 (so that the value remains between 0 and 1).
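Roughly, the volume mapping looks something like this (a sketch rather than our exact code: getKeypointPos is the helper from the course sample, while ambientSound and the other names are stand-ins I'm assuming here):

```js
// Sketch of the shoulder-to-volume mapping (approximate names;
// ambientSound is assumed to be an HTMLAudioElement loaded earlier).
function updateAmbientVolume(poses) {
  // shoulders of person 0 and person 1
  const l0 = getKeypointPos(poses, 'leftShoulder', 0);
  const r0 = getKeypointPos(poses, 'rightShoulder', 0);
  const l1 = getKeypointPos(poses, 'leftShoulder', 1);
  const r1 = getKeypointPos(poses, 'rightShoulder', 1);
  if (!l0 || !r0 || !l1 || !r1) return; // need both people in frame

  // distance between the outer pair and the inner pair of shoulders;
  // shoulderDiff is the smaller of the two
  const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);
  const shoulderDiff = Math.min(dist(l0, r1), dist(r0, l1));

  // volume follows shoulderDiff / 300, clamped so it stays between 0 and 1
  ambientSound.volume = Math.min(Math.max(shoulderDiff / 300, 0), 1);
}
```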

Some commentary so you can see what's going on:
We get the keypoints for the left and right shoulders of two people, targeting pose indexes 0 and 1. We then take the distance between the outer pair of shoulders and the inner pair, and set shoulderDiff to the lower of the two values.
The audio volume is adjusted based on shoulderDiff. In the beginning we tried simply stepping the volume in increments (1, 0.5, then 0 to turn the audio off), but later decided to soften and louden it gradually instead. We thought this would make the interaction and impression more fluid and prevent a stepwise output.
The same logic is repeated for the wrists. With this if-statement, when the wrist difference is extremely small (< 50), which in our tests is roughly when two wrists are touching, a snare drum sound is revealed.
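And the wrist trigger, sketched the same way (the 50-pixel threshold follows the description above; the snare Audio object and its file name are assumptions):

```js
// Sketch of the wrist-touch trigger (approximate; 'snare.mp3' is an assumed file name).
const snare = new Audio('snare.mp3');

function checkWristTouch(poses) {
  const w0 = getKeypointPos(poses, 'rightWrist', 0);
  const w1 = getKeypointPos(poses, 'leftWrist', 1);
  if (!w0 || !w1) return;

  const wristDiff = Math.hypot(w0.x - w1.x, w0.y - w1.y);
  if (wristDiff < 50) {
    // in our tests this is roughly when the two wrists are touching
    snare.play();
  }
}
```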

At first, we had trouble understanding how targeting an index for the poses worked (poses represents the people detected, I assumed). After reading a little more, we understood that we had to follow the parameters of getKeypointPos() and pass the index as the last argument, like getKeypointPos(poses, 'leftShoulder', 2);.

getKeypointPos()

With this final outcome, we were aiming for both surface-free expression and impression modalities. We quickly realized that having a screen show feedback at all binds the impression to a surface, so we removed the camera feed and replaced the screen with a simple gif that plays while everything goes on, not connected to the bodily movement or to anything going on behind the code at all.

Our UI that’s not connected to the code’s core functionality at all.
Testing if the distance works and if the audio is gradually softening.

MI Phase Two (W3)

I refer to the last week of the first module as "phase two", since we did some backtracking, re-evaluated the literature, and started fresh on a new sketch this week.

Similar to the first sketch we worked with (emotion), it was challenging to decide how to approach and tackle the code. Personally, I felt this code was much more straightforward to understand, and after reading about the library via the GitHub repo of the original creator of posenet.js, I had a clearer grasp of how things work. But without deciding on a concept and a destination to work towards, it remained quite challenging.

Denisa and I decided to work with the shoulders and wrists as the main gestural, surface-free expressive modality. When it came to impression (the output of the program), it was difficult to settle on anything at first. In the end it came down to using an audio output again, but perhaps with some sort of twist so that there isn't visual feedback on the screen for the users to look at.

According to our interpretation, audio is simpler to work with, and it caters to a more fluid interaction. If audio softens gradually (instead of changing stepwise), we believe it leans closer towards a fluid, continuous, ripple-effect type of response, which is what faceless interaction demands.

Coaching Thursday 12 September: Aftermath

Today we had a coaching session with Jens, who gave us a more solid understanding of the material and of what we're supposed to do.

He understands that many groups, including us, are struggling with how to work and how the task should be interpreted. It's difficult to work on something without having a particular purpose.

That's why we should have somewhat of an "idea", not necessarily a solid concept to work towards, and create small experiments around it. Really define what fields of interaction mean, think more environmentally, not just about one-on-one interactions with a computer, and think past that!

Tinkering!!!

Is the key to this module, and maybe to the course! We're improvising in a casual way, not setting a goal and working towards it. It's important to get the framework under our skin, or at least have a solid understanding or framing of the topic. Develop experiments and define what it is that we want to experiment with, what nuances lie behind it, and what the point is.

Ask these questions (And incorporate into the journal):

What does it feel like to use it?

What is the experiment? The experience?

Did I misunderstand the task?

Is it the right experiment?

It's also important to iterate when an experiment isn't working, when you've hit a dead end. When that happens, you must figure out how to change the experiment so it better fits what you want to achieve by experimenting.

After coaching us generally on the task and how to go about it, Jens further developed his take on Faceless Interaction and what it is. This actually helped my understanding by a mile.

Faceless interaction can be interpreted as interacting in a "field" rather than with certain discrete objects. The traditional form of interacting with an artifact tends to require control from the user: there is a strong link between input and output, such as pressing a button to trigger something. This module aims to have us challenge that assumption. We should obscure that link, though not in a direct and obvious way; instead, we should create a whole out of the various inputs. Breaking directness is also important, and intention shouldn't be intuitive (though I believe that contributes to a lack of precision and oftentimes leads to confusion).

Finally, he says that in the field, inputs might be interactions between things and not specifically an interaction with a computer. The output (the impression for the users, the expression from the computer) has atmosphere as its metaphor.

Lastly, he critiqued our current process. He did explain how our project so far may have faceless properties, but it isn't necessarily faceless. He agrees that sound has an ambient quality to it, but it's important to work holistically and not focus on a discrete part.

Reflections, what next?:

Now I have a better understanding of expression and impression, so I really want to expand on this concept of surface-free impression, because to achieve a strong sense of facelessness a surface-free impression (output) is also required.

Denisa and I decided to move on from the current sketch we've been working on (emotions) to a new one, since we believe we've hit a dead end. The ml-posenet sketch seems to have more opportunities for the "fields" concept, in that it can potentially trigger expressions from the computer without the user necessarily interacting with the screen or focusing on it.

We thought we'd experiment first with colour as an output, but later changed our minds and settled on audio instead, since perceiving colour would require a visual interface, which isn't a surface-free impression modality.

Connecting dots, drawing lines (MI)

Re-reading Faceless Interaction

Throughout coaching sessions we are constantly reminded to go back to the literature by Janlert & Stolterman to really immerse ourselves in the concept of Faceless Interaction and how what we’re producing relates back to this literature. After all, the purpose of this course is to truly understand the concept of interaction and interfaces, and to challenge the status quo. We should stop focusing too much on creating the perfect piece of code, instead, explore more and reflect before moving ahead. For this entry I intend to go back to analyze the literature, going more in-depth this time to connect some dots.

The literature introduces a fifth thought style for interfaces: the interface as a channel of communication, in addition to the other, more common thought styles.

By stirring up this preconception of what an interface constitutes, we’re opening up new design opportunities.

The literature lays out the advantages and disadvantages of interactive artifacts in general by breaking them down into Complexity and Control, Interface Extension and Cluttering, and Richness and Precision, before breaking down the interface once more and reintroducing Faceless Interaction and how it all fits in.

The sketch I have modified so far doesn't fully constitute a strong sense of faceless interaction: it is free of surface-bound expressive modalities but has a surface-bound impressive modality, namely the visual feedback of a photo being taken from the camera feed. In addition, there is still some directness when we interact with our sketches. With both Denisa's and my sketches, you must direct attention to the camera on your laptop and behave a "certain" way to call for the output, and that may not be what faceless interaction advocates.

How to improve?

Think about surface-free modalities and how to incorporate any of them into the impression part of the communication channel (we've worked with sound, so maybe work with audio on a different sketch?).

Investigate control and complexity: what does it entail when a user is given more control? Usually a more precise output. This project leans towards interaction complexity, with free gestures being the input that most of the sketches take in. The question is whether interaction complexity is what's needed in a faceless interaction, since there are definitely limiting trade-offs.

Focus on the attention given, since in interaction fields focus on individual objects is discouraged. In our first sketches we still had to direct our attention to the computer screen to influence the behaviour of the output.

Free gestures as an element of expression are valued because they provide endless variations and subtle nuances.

How do we support precision without making the interaction too complex, while also keeping it fairly obscure in terms of what needs to be done? How can we design our second experiment in a way that achieves some richness but doesn't overcomplicate the process and lose precision?

MI Update: I finally got something to work!

Today I sat down and got to work on understanding and modifying the emotion code.

I first went through all the JavaScript files in detail. I duplicated the emotion folder and renamed it "emotion copy" so I wouldn't touch the original files at all (in case I made a mistake and didn't know how to revert to the original code).

Understanding the script.js file:

In the global scope, constants for the clm.tracker and the emotionClassifier are set up. clmtrackr is based on code created by auduno (Øygard, A. M.); it is a JavaScript library for fitting facial models to faces in videos or images. It tracks a face and outputs coordinate positions as an array.

emotion_classifier.js, called from script.js, defines a function that initializes the emotion model (emotionmodel.js, another JS file in the library containing an array of our six different emotions), combines the facial-movement data with the model's bias and coefficients, and outputs the "score" (in the meanPredict function).
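As far as I can tell, the setup and per-frame flow look roughly like this (sketched from how I read the sample and auduno's example; the exact variable names in our sample may differ):

```js
// Rough sketch of the global setup and per-frame flow (approximate names).
var ctrack = new clm.tracker();     // the face tracker from clmtrackr
ctrack.init(pModel);                // pModel: the facial model shipped with the library
var ec = new emotionClassifier();   // from emotion_classifier.js
ec.init(emotionModel);              // emotionModel: the array with our six emotions

function onEachFrame() {
  var cp = ctrack.getCurrentParameters(); // facial-movement parameters for this frame
  var er = ec.meanPredict(cp);            // smoothed 0..1 score for each emotion
  if (er) {
    updateData(er);                       // display the scores
  }
}
```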

Tweaking parameters:

The meanPredict function at the bottom of emotion_classifier.js

I first tweaked the value "10" in previousParameters.length == 10. I changed it to a higher number to see what would happen (also changing the related numbers below it) and noticed that raising the number smooths the results, because the mean is taken over a longer window. On the other hand, when I put in a lower number (I tried == 2, > 1, and /= 2), the results became less smooth.
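A minimal sketch of the kind of rolling-mean smoothing I understand meanPredict to be doing (not the library's exact code; windowSize stands in for the "10"):

```js
// Sketch of a rolling mean over the last N predictions (approximate).
var windowSize = 10;          // the "10" I was tweaking
var previousPredictions = [];

function smoothPrediction(current) {
  previousPredictions.push(current);
  if (previousPredictions.length > windowSize) {
    previousPredictions.shift(); // drop the oldest entry
  }
  // average each emotion's score over the window:
  // a larger window smooths more, a smaller one reacts faster (less smooth)
  return current.map(function (e, i) {
    var sum = 0;
    for (var j = 0; j < previousPredictions.length; j++) {
      sum += previousPredictions[j][i].value;
    }
    return { emotion: e.emotion, value: sum / previousPredictions.length };
  });
}
```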

Tinker-time!

Next I wanted to target one specific emotion and see if I could do anything with it, and see if I could tinker with these "bias" values a little bit to make the results appear more accurate (I had gotten slightly annoyed by the fact that the tracker constantly thinks I am "surprised" simply because my eyes are open).

The “surprise” element that’s part of the emotionmodel array

I also wanted to try tweaking the "bias" value; all of the "bias" values for all the emotions were negative. When the value is positive, the result is immediately blasted up to a score close to 100; when it's negative but close to zero, the results are still extremely sensitive (especially when my eyes are open the score goes up pretty high, and I don't want that). So I lowered the value so that it's lower than the original -2.86… but also not too far from it. I picked -5.

Understanding the coefficients and “bias” in emotionmodel.js.

I did some research about these two components, which are quite prevalent in the code created by auduno on GitHub. In a discussion thread, auduno explains the coefficients and "bias" as "trained parameters for a logistic regression model". Logistic regression is a predictive analysis used to describe data and to explain the relationship between one dependent binary variable and one or more independent variables. My interpretation of this is that it sets how much the "coefficients", or the points in the facial model, have to move for the results to be affected: when I wrote the bias as a positive value the score shoots up to 90+ right away, while a negative or lower bias means the movement only nudges the results? I'm not entirely sure!
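My reading of it, sketched as plain logistic regression (this is my interpretation, not the library's exact code; I'm assuming each emotion has one bias plus one coefficient per facial parameter):

```js
// Sketch of logistic-regression scoring as I understand it (approximate).
function emotionScore(facialParams, coefficients, bias) {
  var z = bias;
  for (var i = 0; i < facialParams.length; i++) {
    z += coefficients[i] * facialParams[i];
  }
  // the sigmoid squashes z into a 0..1 score; a more negative bias means the
  // facial parameters have to push z further before the score climbs towards 1,
  // while a positive bias starts the score off high
  return 1 / (1 + Math.exp(-z));
}
```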

Now I've grown to understand the emotionmodel.js and emotion_classifier.js files more, and I thought it was time to begin tweaking values and perhaps adding a function or two in the script.js file!

After understanding the camera functions, frame rendering, and tracker functions, I proceeded to find things I could tweak in the code. I settled on the updateData function, where the data coming from the classifier (via the coefficients) is multiplied by 100, simplified into integers, and displayed as scores.

As you can see in the code, the er parameter is the value of meanPredict from the classifier file; that value is taken and recorded as an integer that acts as a score. Then, using HTML elements from within the JavaScript file, we show the results on the canvas and add the "highlighting" appearance for the emotions with a higher resulting value.
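Roughly, I understand updateData to be doing something like this (the element ids, class name and highlight threshold here are placeholders, not the sample's exact ones):

```js
// Sketch of the score display in updateData (placeholder ids and threshold).
function updateData(er) {
  for (var i = 0; i < er.length; i++) {
    var score = Math.round(er[i].value * 100);            // scale 0..1 up to 0..100
    var row = document.getElementById('emotion_' + er[i].emotion);
    row.textContent = er[i].emotion + ': ' + score;
    row.className = score > 50 ? 'highlighted' : '';      // highlight high scores
  }
}
```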

Using the camera, as a camera!

When one thinks of smiling and a camera, the one action that seems most suitable is taking a photo. That's why I thought I could tweak things and incorporate the camera-capture functions from the simple camera tutorial in Clint's samples.

As I mentioned earlier, I wanted to see if I could target one specific emotion and do something with it. That turned out to be easier than I thought it would be. I started by using console.log whenever the camera stream detected a smile.

The "happy" element was the fifth one in the array, so I simply replaced "i" with 5 so that the "happy" meanPredict value would be targeted. The check is "> 0.8" because the value hasn't been scaled into an integer yet (I made a mistake with that later, which I'll write more about below). So in this case, when the result is greater than 80 (the score shown on screen), the console logs "I love coding". It worked, but because there was no pause in the camera stream, it kept logging non-stop whenever the value for "happy" was above 80. That's when I thought of incorporating a pause, which I named camWait and made a boolean.

I set camWait to false in the global scope; I used a boolean because I only needed two possible values. So when the happy score is greater than 80 and camWait is also false, camWait is switched to true. This ensures that results aren't logged constantly, because both conditions have to be true first.

Underneath this if-conditional I placed another one where camWait turns false again when the value drops below 30 (you'll notice I used an integer here because the value had already been scaled by * 100; at first I made the mistake of still comparing against a value lower than 1 and nothing showed up on canvas, which took me a while to understand and solve).
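Put together, the gating logic looks roughly like this (a sketch following my description above; for clarity I scale the value once and compare both thresholds against the 0–100 score, and index 5 is the one I used for "happy"):

```js
// Sketch of the camWait gate (approximate; thresholds 80/30 from above).
var camWait = false; // global: are we pausing after a detection?

function checkForSmile(er) {
  var happyScore = er[5].value * 100; // same scaling as the on-screen score

  if (happyScore > 80 && camWait === false) {
    console.log('I love coding'); // later replaced by the camera capture
    camWait = true;               // stop logging until the smile fades
  }
  if (happyScore < 30) {
    camWait = false;              // re-arm once the happy score has dropped
  }
}
```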

Now it’s time to incorporate the camera! Taking the code from the simple camera tutorial,

where the captureToCanvas(); function is only called when a button is pressed (via document.getElementById), I switched it so that the function is simply called directly from within updateData:
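In other words, something like this (the button wiring in the "before" comment is how I remember the tutorial, with an assumed element id):

```js
// Before (camera tutorial, roughly): capture only on a button press.
// document.getElementById('captureButton').addEventListener('click', captureToCanvas);

// After (my change, roughly): call the capture from the smile gate in updateData instead.
if (happyScore > 80 && camWait === false) {
  captureToCanvas(); // snapshot the current video frame onto the capture canvas
  camWait = true;
}
```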

Then I simply added the relevant HTML elements to the HTML file:

And also changed the CSS under #captureCanvas:

Now it seems like everything’s set and works like I want it to, so far.

A snapshot is captured to canvas whenever I smile and the happy score surpasses 80.

What to do next?

I would like to try doing something with an audio output! It would be nice to accompany the visual feedback with audio feedback. Perhaps I'll try incorporating it into the same updateData function and see how that goes! Or make some nice touch-ups so it feels like a cleaner, more complete prototype.

How it's been so far: I feel like I'm off to a good start! I finally got something to work, which is especially satisfying because I had been having a difficult time comprehending the pieces of code. Now that I feel I understand parts of it, I can tinker with them further. I'm also thinking about playing with a different tutorial, perhaps one that utilizes colour and RGBA bytes.

The Aesthetics of Interaction

Exploring relationships between interaction attributes and experience.

Aesthetics, when spoken about shallowly in modern media, often refers to something appearing "pretty", "neat", or "orderly", or is simply concerned with beauty or the appreciation of it. In popular speech and writing, an aesthetic refers to the overall style of someone or something, like a musical sound, an interior-design look, or even a social-media presence. For example, your Instagram feed can be described as "so aesthetic!" when it's cohesive and visually appealing in terms of color coordination. Your room can be described that way too when it's neat and properly decorated with vintage furniture and other goods you purchased from your nearby antique store.

But aesthetics clearly exceeds that shallow meaning and can be considered in Interaction Design as well. Lenz et al.'s paper introduces the concept of the "Aesthetics of Interaction". They claim that besides the visual aesthetics of a product, the aesthetics of the interaction itself can be taken into consideration as well. They focused on approaches that feature "qualities", "dimensions" or "parameters" to describe interaction.

Clint’s Lecture on Interaction Aesthetics:

For interaction designers, it's important to place emphasis on what something "feels" like to use. The "feeling" of the interaction can also be considered an "aesthetic"; whether something looks beautiful or not doesn't really matter when it doesn't feel beautiful.

This is !important because users value experience, and our current field of study attends to form, function, and behavior more so than to visual appearance (of course, that's taken into consideration too!). Therefore, we must pay attention to interactivity.

Attributes (i.e. approaches), as explained by Lenz et al., are a "language or framework for deconstructing interaction" (also Lim et al., 2011). These attributes are used to assess the experiential quality of interaction. For their analysis, they first identified these attributes and compiled them into a list. Then they derived potential categories to further cluster them.

These attributes are assessed as dimensions, i.e. pairs of extremes, like so:

Heyer, C., Interaction Aesthetics. 2019.

They also utilize a framework they refer to as “Interaction vocabulary”.

Nielsen, T. B., 2017
Definitions > Heyer, C., 2019

Through a survey of 19 publications, pooling 151 attributes, they categorized these attributes into two different levels: the "Be-level" (e.g. Stimulation, Security, Competence), which relates to psychological needs, and the "Motor-level" (e.g. Temporal, Spatial, Action-reaction), which concerns engagement and physical movements.

Final Thoughts

Aesthetics, when considered in the field of Interaction Design, concerns sensuous perception. It's important to think about because nowadays shallow perceptions place most of the emphasis on the visual appearance of an object, like whether it looks beautiful or not. However, we shouldn't neglect the aspects of interactive experience, because a smooth and consistent interaction is also of great significance!

Smart? Or extremely dumb and difficult to use?!

These so-called attributes (Lenz et al.) are a great way to measure the qualitative dimensions of experience, and the paper helpfully organizes these attempts for us! They'll definitely come in handy when designing in the future, especially during user testing to map out user experiences and such. They're a good way to jot things down and get a grip on how to improve a certain design.

Updated Progress on MI

And why the struggle is real…

9 September 2019

I've been having some trouble with this project, mostly because I'm typically used to setting a goal before kicking off and then working towards it. I appreciate the concept of tinkering and I understand its importance, but without a set goal I lose my motivation a little, since there's nothing specific to work towards.

Denisa and I illustrated some concepts we wanted to work towards, but that was immediately called out by Clint; yes, during the kick-off Clint specifically stated that we should not present concepts before beginning. But that's the main challenge for me. I. need. a. concept. to. work. with!

So what I've done so far (over the weekend) is: looked at all the samples provided and played around with them, changing some values and seeing the results on canvas (it's crazy how changing one single seemingly insignificant value breaks the entire code, haha).

I'm extremely drawn towards the emotions sample in the clmtrackr folder, so I went and looked through the GitHub repo of the original creator of clmtrackr. That helped quite a lot, as I began to understand how the facial model works. auduno's clmtrackr also presents some examples of using the face tracker, so I experimented with those as well. He also has an emotion detection sample.

Auduno’s Emotion Detection sample (here)

I strove to further understand the code under camera-clmtrackr-emotions, including the library files like emotion_classifier.js and emotionmodel.js. I wanted to understand how the different emotions are recorded and translated into the score presented at the top left, like the "coefficients" and "bias" stuff seen in emotionmodel.js. And I wanted to play with that: tweak the bias and see what happens when it's a positive number or a lower negative value!

Perhaps I could try playing with two different pieces of code and incorporate one into the other? Denisa and I discussed audio outputs, so maybe I'll figure out how that would fit into the code, etcetera.

camera > clmtrackr > emotion

In-class Discussion of Janlert & Stolterman Paper

During the lecture on the 6th of September (today), Clint initiated a conversation about the recently assigned literature: Janlert & Stolterman's paper on Faceless Interaction. The discussion was divided into five sections/groups, where each group received one aspect of the paper to discuss: Premise, Basis, Methods & Tools, Contributions, and Conclusion.

This entry is divided into five parts that represent the five aspects discussed. The questions posed by Clint are in italics. The rest of the information is a compilation of what the students shared as well as Clint's final thoughts.

Premise

What is the status quo (what does the paper suggest)? What are the authors' goals? What is the contribution they want to make? What are the controversies and tensions in the topic? What's the urgency and relevancy?

Challenging the idea of having a surface as an interface (and direct interaction); introducing faceless/surfaceless interaction. Thought styles, and how the interface-as-surface can be challenged.

Basis

Reasoning? Empirical? Design experiments, projects, fieldwork, compounded experience? First-hand? Second-hand? Third?

Basis is about reasoning: what they're drawing on and how convinced they are.

They appear to be convinced, but it also seemed like they didn't have solid research to back everything up. They throw a lot of things onto the table but don't provide enough evidence. A lot of theorizing. They make points but don't expand on them.

Clint’s notes:

  • HCI is about interfaces and their design
  • HCI hasn’t properly unpacked what is meant by interface
  • Technological developments challenge the classic view of an interface (>concrete) and artifacts are getting more complex

-Disconnect-

  • Field needs better unpacking of one of its core concepts, interface
  • Because interaction & interactivity would seem to require an interface
  • IXDers design interfaces
  • Interactive things and systems have interfaces
  • A better understanding of the notion of interface can enrich and inspire design

Methods & Tools

Which theories and methods are deployed? Are we convinced of their appropriateness and value? Do they pay off, and how? Are there 'analytical blinkers'?

Rather argumentative, with a "this will happen in the future" type of attitude. Very definitive. They used relatable examples.

The thought style is a theory in itself: it determines the formulation of every concept, depending not on the definition but on the collective, socially influenced way of thinking, which they use to explain the interface.

They use history and predictions about the future to convince and to add depth to their arguments. They also tend to use absolute words to make their arguments authoritative, and relatable examples to create a personal connection to them. The theories are interconnected, which leads to different interpretations. But the different interpretations can create value, because they make us think about and explore the theory of interfaces/interaction.

They present their methods in an isolated manner. They don't compare their theory to other theories on the same subject, instead using "non-related" examples from history and such.

Contributions

For whom? Challenging convention? Problematising convention? Drawing together, tracing a path? Transferral: domain a to domain b? Abstract <> Concrete?

Designers, researchers, anybody in the field who might be constrained into thinking that interaction relies on an interface. "The potential of faceless interaction, interaction that transcends traditional reliance on surfaces."

In the beginning, they challenge what an interface is, what a tree is, what a car is.

"This article aspires to challenge HCI research to investigate some of the basic and often tacit assumptions about interaction and specifically about interfaces."

Providing a new language, not so much providing new design opportunities.

Conclusion

What are the claimed and actual contributions? Did the authors meet their own goal? Are the conclusions adequately supported? Is there unrealized value in the paper?

Claimed contributions: 

  1. challenging HCI to research the interface in more depth, exploring the richness and complexity of the term,
  2. opening a discussion about the complexity and richness of the topic,
  3. introducing a fifth thought style: the force field, a channel of communication through which stuff passes,
  4. introducing the notion of faceless interaction,
  5. an abstract (metaphorical) extension of the 3rd and 4th thought styles,
  6. no guidelines or principles were outlined, but the text can sustain them,
  7. the text can be used both from a theoretical point of view and from a practical design perspective.

Actual contributions:

Yes, the authors challenge the HCI paradigm of what interfaces are/have been by breaking down what's been done before and introducing different thought styles, also bringing in other genres of design.
The authors explored various concepts around interfaces, such as complexity and control, and forms of complexity.

Clint’s notes:

  • We introduced a number of concepts
  • These concepts open up a particular kind of analysis
  • The four thought styles disentangled several aspects and forms of IX not commonly examined
  • Made clear that most interfaces today are really abstract surfaces
  • Faceless interaction made it possible to further investigate, and suggest a fifth thought style
  • We are convinced design implications are somewhere inside… we are convinced of its value to the field, both theory and practice

This discussion was structured and helpful and further solidified my understanding of this newly "introduced" concept of faceless interaction. In conclusion, the paper triggers thought about the actual definition of interaction and challenges HCI's status-quo assumption that interaction involves an interface.

In addition, this type of discussion (opening up conversations, breaking down a long paper), which unravels the authors' writing style and points, opens up new thoughts and challenges previous opinions of the paper. The break-down also taught me to be more critical with any sort of literature, which could perhaps strengthen my own writing skills.

Interaction-Driven Design

Interaction-Driven Design is a concept introduced in Maeng et al.'s paper as a new approach to interactive product development.

In the paper, accompanied by Clint's lecture on the same topic, Interaction-Driven Design is introduced because the authors saw possibilities in interactions themselves serving as the starting point of product development. The authors focused on movements during interaction (a framing Clint later pushed back on, saying that interaction should not be constrained to movement only), input behaviors, feedback movements, and so on.

To begin with, this paper has narrow constraints, such as the one mentioned above: the authors' take on interaction is based mainly on movement.

Interaction for Maeng et al. is

– Described as movement

– Not relating to function since we can talk about the interactivity and the function of something separately

– What the person does, the possibility for the system to respond 

– Independent of particular manifestation 

– Interaction as conversation: a narrow constraint, because if you're interacting with a pencil, that's not really a conversation? It relies on bodily skills and can be very performative

Drives

In their paper, Maeng et al. discuss design patterns and their characteristics for three product-development approaches, including the one they newly introduce, and examine each through a series of workshops: user-driven, technology-driven, and finally interaction-driven.

– User-driven

User-driven product development starts with people; it's targeted towards the user. It's essentially the human-centered design process that is widely used in design practice (e.g. designing for dishwasher mechanics: what's the practice? What is it like? Doing qualitative fieldwork to study it).

Heyer, C., presentation on Interaction-Driven Design

– Technology-driven

Technology-driven product development usually begins with a given widget, as designers and technologists work together to find a purpose for it: given a widget, find a role for it in the world.

Heyer, C., presentation on Interaction-Driven Design

– Interaction-Driven Design

Interaction-driven product development, on the other hand, according to Maeng et al.'s research, begins with movement, such as being inspired by human or animal movement or by characterful aspects (e.g. inspired by how a barista works with coffee).

In addition to movement, the development or design process can also start with a personality (e.g. anxious/insecure/self-pitying, ruthless/suspicious/uncooperative, sociable/fun-loving/affectionate). What would the interaction be like if it had "this" personality? It could be a useful point of departure.

Lastly, one can start with an emotion: what would a depressive interactive artifact be like? The focus is on how it behaves during interaction.

Heyer, C., presentation on Interaction-Driven Design

Putting it into practice: 

– Are the four initial steps constructive constraints?

– What challenges are faced?

– How might this work in design practice?

– Is it useful beyond movement?


This paper and concept are extremely relevant for us as interaction designers, and also for the MI project going on right now. Interaction-driven design advocates using a specific interaction as a starting point to work from. Much like how the camera and the interaction with it are the core of this project, slowly working our way around to discover the meaning of this interaction, tinkering with the provided code, and asking questions along the way is more important than simply setting a goal and working towards it.

MI Progress So Far…

3 September 2019

Some ideas so far:

  • Detecting surroundings, for example: if a person is behind you, trigger an alarm. Helpful for security.
  • Detecting distance, for example: surrounding people too close, trigger something. Face too close, shut off the entire window, proximity warning, pop-up warning, etcetera.
  • Detecting a smile, for example: a smart home appliance that asks you to smile to stay content throughout the day and snaps a shot whenever the "happy" emotion coordinates are met

Start playing around with the code, tinker, work with the ideas we proposed, and further solidify them.

5 September 2019

Some ideas so far:

– Detecting distance to the screen

+ If the face is too close, computer does something to warn user

+ Warning could come in different forms, like the window displaying pop-up warning, playing a warning musical tone, voice message, shutting the tab down, flashing warnings, etc

+ Can be catered towards parental control over children's use of technology, and also towards those who work a lot with monitors

– Automatically having a picture taken after the camera detects a smile

+ Once the camera detects a smile (utilizing the "emotion" code) and the value for happy surpasses a certain number, the camera captures a screenshot to canvas and also displays a positive message: "You look great today!"

+ Maybe if camera detects a frown (sad value surpassing a number) it also captures a photo and displays a motivating message to encourage you to smile

+ Could be utilized as a home smart-gadget or a system for emotional well-being

– Smart “Glasses” for Blind users to be safer in traffic 

+ Detects green, red, and yellow colors through the camera and plays a different musical tone for each, to signal to the user whether it's safe to walk or not

+ Catered towards blind or color-blind users

After coaching:

Clint says that we should be working with things in a business perspective without qualitative research. We should not have solid concepts. All we should be doing is tinkering and playing with code: exploring, questioning, and trying things out. Bracket everything; don't question who the user is, but create some sort of atmosphere, a movement.