What framing of the topic did you arrive at?
We experimented with audio, for the most part, as a surface-free impression modality. Audio is surface-free (it belongs to hearing and sound production), and we chose it as our output because it does not require a target surface to work, though there is still a "source" in the form of a physically located object (our computer screen). In our project we relieved the user of surface-bound impression, so what the user receives as an impression is more ambiguous and obscured. The only thing inflicted on the user is the music playing, which is an expression from the computer's end that creates an ambient character.
We realized at the end that we may still not be following a strong sense of faceless interaction, as users are still actively interacting WITH a particular (the camera). However, we believe we managed to let constant movements (free gestures) in our environment cause minor perturbations, with users moving in this so-called "interactional force-field".
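A minimal sketch of that idea, assuming a plain webcam read with OpenCV and numpy (the libraries, the smoothing constants, and the 300-frame loop are illustrative assumptions, not our actual code): aggregate motion across the whole frame becomes one shared intensity that could drive ambient music, so no single user is addressed directly.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                 # default webcam
ok, prev = cap.read()
if not ok:
    raise RuntimeError("no camera available")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
intensity = 0.0                           # smoothed, shared "force-field" level in [0, 1]

for _ in range(300):                      # short run; the real sketch ran continuously
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Motion energy: mean absolute difference between consecutive frames.
    # No faces and no tracked individuals - only the room as a whole.
    motion = float(np.mean(cv2.absdiff(gray, prev_gray))) / 255.0
    prev_gray = gray

    # Smooth so the output drifts rather than jumps (ambient character).
    intensity = 0.95 * intensity + 0.05 * min(motion * 10.0, 1.0)

    # In the actual sketch a value like this would modulate the music
    # (e.g. volume or tempo); here it is only reported.
    print(f"force-field intensity: {intensity:.2f}")

cap.release()
```

The point of the sketch is the design choice, not the numbers: because every gesture in the room perturbs the same shared value, no individual output is attributable to one person facing the system.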
In the second iteration of this module we were careful not to rely on one-on-one interaction (a mistake we made in the first code sketches) and instead to have users interact with one another in this situational force-field.
Finally, the goal of our last experiment was to achieve some richness without overcomplicating the process and losing precision.
What do you understand about it now? (What is this understanding based on?)
We now understand faceless interaction in both a weak sense and a strong sense, and how each has its own advantages and disadvantages. Achieving full facelessness risks a significant loss of precision, though the interaction remains rich in complexity. That complexity, however, is not exactly intuitive, as you are stripped of full control and a full grasp of the situation.
How did you arrive at the point you did? How did your work mature over the period?
We experimented with this type of interaction directly through live code. Along the way our work matured: after the first sketches we grew more willing to take risks and to move on when we saw a dead end. It was much easier to start the second tutorial than the first.
We experimented with some, but not all, features needed to achieve a fully "faceless" interaction. The first sketch we worked with (emotions) relied on one-on-one interaction with the screen. We quickly discovered that this wasn't faceless, as the control was still directed. That made us wonder whether facelessness might even be a disadvantage in our project, because users were still conceived as interacting with a particular.
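For contrast, a hypothetical sketch of the directed mapping our first (emotions) attempt amounted to; detect_emotion, play, and the file names are stand-ins for whatever classifier and playback the tutorial provided, not a real API:

```python
# Directed, one-on-one mapping: one user faces the screen and each recognised
# emotion triggers one attributable response. All names below are hypothetical.

RESPONSES = {
    "happy": "major_chords.wav",
    "sad": "minor_chords.wav",
    "neutral": "drone.wav",
}

def respond_to(user_frame, detect_emotion, play):
    """One user, one camera, one direct reaction: interacting WITH a particular."""
    emotion = detect_emotion(user_frame)         # assumed classifier, not specified here
    play(RESPONSES.get(emotion, "drone.wav"))    # output directly attributable to that user
```

Seen this way, the problem is clear: every output can be traced back to one person's deliberate input, so the interaction keeps a "face" no matter how the output is dressed up.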
Why did it feel like a dead end (if it did)?
The first sample we did felt more like a dead end: we took the emotions tutorial and managed to "do" things with it, but it never arrived at being a properly faceless interaction.
It felt like a dead end because it seemed time to move on to one of the machine learning samples instead, which we felt offered more opportunities. The project we have now, however, feels like it has endless opportunities: we can experiment with more complex interactions and with more eventful outputs (impressions).
During Show n’ Tell:
What did I learn from Show n' Tell? I learned that most people went through similar processes: they first interpreted the project as one thing and then shifted gears into something more complex after understanding more about faceless interaction. It was nice to learn from everybody and see how they interpreted the topic and how they spoke about it, because everyone had slightly different ways of describing it, all making sense in their own way.
Some summarizing words: Clint and Jens asked whether we were able to design by thinking of the interaction as a whole rather than getting too caught up in the complexity of the input. Tinkering was a new way of approaching code, and it was valuable because we got a sense of the material. Experimenting instead of setting a goal up front was also valuable, because we were sketching to explore the possibilities of the material.
